Optimal way to present VV's to ESX? / CPG selection

spencer.ryan
Posts: 35
Joined: Tue Feb 11, 2014 11:33 am

Optimal way to present VV's to ESX? / CPG selection

Post by spencer.ryan »

We have a nice new two-node 7400 with three tiers and pretty normal CPGs: R5, R5, and R6.

The general idea is that we're going to let AO do its thing and move data around as it sees fit (it's what we paid for, right?).


As far as creating the VVs goes, though, do you just put all of the storage in a single CPG (the middle tier?) and then let AO move data around?
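
For concreteness, what I'm picturing is something like the following from the 3PAR CLI: create each datastore VV thin in the middle (FC) CPG, export it to the ESX host, and let AO shuffle the regions from there. The CPG, VV, and host names here are made up and the flags are from my reading of the CLI docs, so correct me if they're off:

    # Thin VV created in the middle (FC) CPG, then exported to an ESX host
    createvv -tpvv FC_r5 esx_datastore01 2T
    createvlun esx_datastore01 1 esx01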


Once AO starts moving data, the VV doesn't really live in any single CPG any more (or may not, more accurately).

For bulk data movement into the system from our old storage, should I just stick it all in FC (or nearline?) and have AO move data up as needed?

I've got about 10TB of FC and 174TB of NL.


My EqualLogic brain is still trying to wrap itself around this.

Thanks!

Spencer
afidel
Posts: 216
Joined: Tue May 07, 2013 1:45 pm

Re: Optimal way to present VV's to ESX? / CPG selection

Post by afidel »

We designed our CPGs to land everything on FC and let AO move things around as it will; however, your system doesn't seem to have enough FC to do that unless your existing array is quite small. Perhaps you'll want separate prod and nonprod CPGs, have prod land on FC and nonprod land on NL, and then only move the hottest blocks up? We also don't let nonprod VVs tier up to tier 0, except for a CPG we built specifically for database servers.
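
Roughly, the split looks like this on our side; the CPG and policy names below are just examples and the createaocfg flags are from memory, so check the CLI reference before copying:

    # Prod: VVs get created in the FC CPG; AO may tier their regions across all three CPGs
    createaocfg -t0cpg SSD_r5 -t1cpg FC_r5 -t2cpg NL_r6 -mode Balanced AO_prod
    # Nonprod: VVs get created in the NL CPG; no tier-0 CPG, so nothing tiers up to SSD
    createaocfg -t1cpg FC_r5_nonprod -t2cpg NL_r6_nonprod -mode Cost AO_nonprod

Which CPG a VV lands in for new writes is decided by where you create the VV; the AO config just controls which CPGs its regions are allowed to tier between.
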
spencer.ryan
Posts: 35
Joined: Tue Feb 11, 2014 11:33 am

Re: Optimal way to present VV's to ESX? / CPG selection

Post by spencer.ryan »

You're right, I don't have enough FC. However, what if I over-provision the FC tier (export all LUNs from the FC CPG), move in only a few TB at a time, let AO redistribute the data for a day or two, and then import more?



If I create VVs in the NL CPG, writes to them will always go to NL first, right?

What about creating some NL VVs, bulk-moving the data in, and then switching the VVs to an FC CPG without actually moving the data with DO?

That would still allow AO to move data around, and it would also make new writes to those VVs go into FC, right?


We may lock a few specific things to a specific class with a new CPG, but ideally I'd like everything to default to tier 1 and let the system figure it out from there.
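
For the things we might pin, my (possibly naive) understanding is that you just build a CPG that isn't referenced by any AO config and create those VVs there, so AO never touches them. The names and flags below are illustrative only:

    # Dedicated SSD CPG left out of the AO policy, so VVs created in it stay put
    createcpg -t r5 -p -devtype SSD SSD_r5_pinned
    createvv -tpvv SSD_r5_pinned vdi_gold 500G
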
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Optimal way to present VV's to ESX? / CPG selection

Post by Richard Siemers »

Correct, all NEW writes will go into the CPG the VV is created in.

If you started in SATA and let AO promote data up to fill your SSD and FC, your read speeds would be great as long as you're reading data older than your AO cycles, but your writes would suffer, as would reads of new data, until AO came around and ran again.
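
And to the earlier question about re-pointing a VV at an FC CPG after the bulk load: as far as I know, changing a VV's user CPG is a tunevv/Dynamic Optimization operation, and DO relocates the existing data as part of it rather than just changing where new writes land. The CPG and VV names are placeholders and the syntax is from memory, so verify on your InForm version:

    # DO: move the VV's user CPG (and its existing data) from NL to FC
    tunevv usr_cpg FC_r5 my_vv
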
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
spencer.ryan
Posts: 35
Joined: Tue Feb 11, 2014 11:33 am

Re: Optimal way to present VV's to ESX? / CPG selection

Post by spencer.ryan »

Thanks Richard.

I've seen your AO config elsewhere on the board, but we're significantly smaller than you (112 disks total).


Here's what we ended up doing:

A majority of the data got loaded into nearline.
"Mid-level" performance things, such as SQL and Exchange got loaded into FC
The only thing we put into SSD was a small VDI LUN.

We set AO across all three tiers in Balanced mode and let it rip.
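
On the CLI that boiled down to one three-tier AO config plus a scheduled run; the names are ours, and the createaocfg/startao flags are from memory, so double-check them before reusing:

    # Single AO config spanning all three CPGs, Balanced mode
    createaocfg -t0cpg SSD_r5 -t1cpg FC_r5 -t2cpg NL_r6 -mode Balanced AO_3tier
    # Nightly run analyzing the previous 24 hours (86400 s) of region stats
    startao -btsecs -86400 AO_3tier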

We're going to let it run for a while and see how it performs. The FC tier alone runs circles around the SAS we had in our EqualLogics.

We may end up changing AO to Performance mode; I want tiers 0 and 1 as full as possible. We paid for them, right?

At the moment we only have 3 CPGs (SSD R5, FC R5 and NL R6). We don't have any applications we want to lock into a tier, so everything is just part of the AO policy.
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Optimal way to present VV's to ESX? / CPG selection

Post by Richard Siemers »

Thanks for sharing.

Where do you stand on thin provisioning? I believe the best practice is thin provisioning on the 3PAR with zero_detect enabled on the VVs, and eager-zeroed thick in ESX. When the eager zeroing occurs, the zeroed space should be reclaimed back to the 3PAR.
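
In rough CLI terms that combination looks like the sketch below; the VV, CPG, and VMDK names are placeholders, and I'd double-check the flags before relying on them:

    # 3PAR side: thin VV with zero_detect, so zero-filled writes don't consume space
    createvv -tpvv -pol zero_detect FC_r5 esx_datastore01 2T
    # ESX side: eager-zeroed thick VMDK, which writes the zeros the array can detect
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore01/vm1/vm1.vmdk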

Since all your storage starts in the middle CPG and AO does the rest, do you bother with any VMware Storage DRS? I can imagine how Storage DRS and AO might counteract each other on a bad day. I'm just learning about VASA and how it works between 3PAR and ESX DRS, so I'm no expert yet, but I believe it's a key piece in ensuring DRS doesn't move a VMDK from one datastore to another datastore that shares the same CPG/storage tier on the back end.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
spencer.ryan
Posts: 35
Joined: Tue Feb 11, 2014 11:33 am

Re: Optimal way to present VV's to ESX? / CPG selection

Post by spencer.ryan »

We're pretty indifferent about the zeroing method in VMware. We thick provision everything in VMware and thin provision everything on the back end.


We have SDRS enabled for both pools in VMware (one for FC and one for NL), but automation is disabled. What we use SDRS for is placement and movement of VMs to accommodate new machines/disks.

With SDRS on, if I try to add a 1TB VMDK, it might say "Okay, stick it on NL-0, it has the space," or it might say "Hey, we don't have a 1TB chunk available, but we can move these 5 VMs around and get you that 1TB."

At that point you just click Accept and let it do its thing.


After a lot of moving stuff around, I've run the zero reclaim on the VMFS volumes themselves (vmkfstools -y).
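
(For reference, that's run from inside the datastore's root directory; the percentage is how much of the free space it tries to reclaim in one pass, and on 5.5 I believe the esxcli unmap command is the replacement, though check your build before trusting my syntax.)

    cd /vmfs/volumes/NL-0
    vmkfstools -y 60                    # pre-5.5 style reclaim pass
    esxcli storage vmfs unmap -l NL-0   # ESXi 5.5 equivalent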


Realistically, though, what I'd like to see is for VMware to understand how a virtualized SAN works, so that instead of having to manage individual LUNs and VMFS datastores, the storage could just say "Hey, here is 200TB of nearline and another 50TB of FC, do with it what you will," and we wouldn't have to worry about 2TB datastores and shuffling VMs around all the time.

ESXi 5.5 adds a lot of great features for bigger RDMs and VMFS volumes, but they're still fundamentally broken. For example, you can have VMDKs larger than 2TB, but you can't expand them online, which kind of defeats the whole purpose.
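
(Growing one past 2TB means powering the VM off and extending the disk offline, e.g. with vmkfstools; the path and size below are made up.)

    # Offline extend of a large VMDK (VM powered off)
    vmkfstools -X 3T /vmfs/volumes/FC-0/bigvm/bigvm.vmdk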