HPE Storage Users Group

A Storage Administrator Community




Post new topic Reply to topic  [ 6 posts ] 
 Post subject: Unique VV in a 3par system, with different storage classes
PostPosted: Wed Apr 29, 2015 9:26 pm 

Joined: Wed Apr 29, 2015 6:59 pm
Posts: 2
Hi,

Pretty new to 3PAR (only one 7200 online so far), I have to set up a new 3PAR array for a VMware 5.5 cluster that will deliver resources to high-end shared customers.

The system I've unpacked is a 3PAR 7200c with:

1 shelf with 2 controllers, 4× 480GB MLC SSDs + 8× 900GB 10k FC drives
1 expansion shelf with 4× 480GB MLC SSDs + 8× 900GB 10k FC drives

I would like to export a single volume to my ESXi hosts, so my users work with a single datastore, and I'm looking for a way to create this kind of volume through 2 CPGs (r1_SSD & r1_FC) allocated via an AO rule.

Has somebody already configured a similar system? Can my dreams come true? :)

Thanks for your help !


 Post subject: Re: Unique VV in a 3par system, with different storage class
PostPosted: Thu Apr 30, 2015 5:50 am 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 505
I wouldn't use RAID 1 on the SSDs; you'll just waste space for little benefit. RAID 5 is best practice there, and performance will still be massively better than FC can provide anyway. If you are using 480GB MLC rather than cMLC drives, you might also want to look at using Adaptive Flash Cache.

You create two CPGs, one on SSD and the other on FC, and create your volume in the FC CPG only, NOT in the SSD CPG. Then configure an AO policy with SSD as the top tier and FC as the middle tier, and schedule that policy (once a day to start with).
Once active, the AO policy will look at the access rates across the volume and promote sufficiently hot regions to the SSDs, so it will automatically spread your volume between the CPGs.

There's an AO whitepaper you might want to Google for that explains how AO works and how to implement it, plus numerous blog posts. Same for Flash Cache: search for "3PAR Flash Advisor".
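For reference, a CLI sketch of those steps. The CPG, volume, and policy names here are invented, and the exact flags can vary by 3PAR OS version, so treat this as an outline rather than copy-paste:

```shell
# Two CPGs: RAID 5 (3+1) on SSD, RAID 1 on FC -- names are examples only
createcpg -t r5 -ssz 4 -p -devtype SSD CPG_SSD_R5
createcpg -t r1 -p -devtype FC CPG_FC_R1

# Thin volume created in the FC CPG only; AO promotes hot regions later
createvv -tpvv CPG_FC_R1 datastore01 8192g

# AO configuration: SSD as tier 0, FC as tier 1
createaocfg -t0cpg CPG_SSD_R5 -t1cpg CPG_FC_R1 -mode Balanced AO_SSD_FC

# Daily run at 01:00, analysing the previous 24h of region statistics
createsched "startao -btsecs -24h AO_SSD_FC" "0 1 * * *" AO_daily
```

The schedule string is standard cron syntax; startao then moves regions between the two CPGs based on the collected stats.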


 Post subject: Re: Unique VV in a 3par system, with different storage class
PostPosted: Thu Apr 30, 2015 6:27 am 

Joined: Wed Apr 29, 2015 6:59 pm
Posts: 2
Thanks for the update, John. The SSDs are cMLC, sorry.

Will this volume's maximum capacity be equal to the ssd_r5 CPG plus the fc_r1 CPG?

I'll try to implement this today.

Ju


 Post subject: Re: Unique VV in a 3par system, with different storage class
PostPosted: Thu Apr 30, 2015 7:55 am 

Joined: Tue Feb 03, 2015 3:35 am
Posts: 18
I agree with John regarding the RAID level; we use RAID 5 for our SSD tier too.

We use Flash Cache on our cMLC disks, but they are the larger 1.9TB models, which do support it.


 Post subject: Re: Unique VV in a 3par system, with different storage class
PostPosted: Thu Apr 30, 2015 12:40 pm 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 505
The maximum VV size is 16TB, assuming you are using a TPVV. If fully provisioned, it's 16TB or the maximum capacity of the FC disks after RAID and sparing overheads, whichever is the smaller of the two. Remember that with thin provisioning you can completely over-allocate the array using multiple volumes.

However, it's typically not a great idea to run anything near 100% unless you fully understand the potential implications, as it leaves you no wiggle room if things don't go to plan. Once you have a feel for growth, you can adjust the allocation over time.

Unless you have a workload heavily biased toward writes, it might be better to use RAID 5 on the FC CPG too; if it doesn't meet expectations, you can retune to RAID 10 later, assuming you have the capacity available.

I'd take some time to read the 3PAR Concepts Guide and understand how things work under the covers before committing to a configuration.
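That later RAID 5 to RAID 10 retune is an online operation (Dynamic Optimization license required): you create the target CPG and move the volume's user space into it. A sketch with invented names, flags again version-dependent:

```shell
# Target RAID 1 CPG on FC, then move the VV's user space into it.
# tunevv runs in the background; the volume stays online throughout.
createcpg -t r1 -p -devtype FC CPG_FC_R1
tunevv usr_cpg CPG_FC_R1 -f datastore01
```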


 Post subject: Re: Unique VV in a 3par system, with different storage class
PostPosted: Thu Apr 30, 2015 1:25 pm 

Joined: Tue May 07, 2013 1:45 pm
Posts: 216
JohnMH wrote:
The maximum VV size is 16TB, assuming you are using a TPVV. If fully provisioned, it's 16TB or the maximum capacity of the FC disks after RAID and sparing overheads, whichever is the smaller of the two. Remember that with thin provisioning you can completely over-allocate the array using multiple volumes.

However, it's typically not a great idea to run anything near 100% unless you fully understand the potential implications, as it leaves you no wiggle room if things don't go to plan. Once you have a feel for growth, you can adjust the allocation over time.

Unless you have a workload heavily biased toward writes, it might be better to use RAID 5 on the FC CPG too; if it doesn't meet expectations, you can retune to RAID 10 later, assuming you have the capacity available.

I'd take some time to read the 3PAR Concepts Guide and understand how things work under the covers before committing to a configuration.

Even if the workload is write-heavy, the 3PAR ASIC and wide striping make it mostly a non-issue, since RAID 5 3+1 achieves about 91% of the performance of RAID 10 in an OLTP workload (small random writes, basically the worst case for RAID 5). Even my high-performance set uses RAID 5; it just uses 3+1 instead of the 7+1 I use for normal stuff, and it's kept off the NL tier by AO policy. The only reason to consider RAID 10 in this scenario, IMHO, is that it gives you cage redundancy, which RAID 5 will not allow with only 2 shelves.
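To put rough numbers on the space side of that trade-off for this particular box (16× 900GB FC and 16× 480GB SSD across the two shelves), here's back-of-the-envelope RAID arithmetic, ignoring sparing, chunklet, and GB-vs-GiB overheads, so real usable figures will be lower:

```shell
# Back-of-the-envelope usable capacity in GB, ignoring sparing, chunklet
# and GB-vs-GiB overheads (real 3PAR usable figures will be lower).
fc_raw=$((16 * 900))            # 16x 900GB FC drives across both shelves
ssd_raw=$((16 * 480))           # 16x 480GB SSDs across both shelves

fc_r1=$((fc_raw / 2))           # RAID 1: 50% space efficiency
fc_r5=$((fc_raw * 3 / 4))       # RAID 5 3+1: 75% space efficiency
ssd_r5=$((ssd_raw * 3 / 4))

echo "FC  RAID1:     ${fc_r1} GB"   # 7200 GB
echo "FC  RAID5 3+1: ${fc_r5} GB"   # 10800 GB
echo "SSD RAID5 3+1: ${ssd_r5} GB"  # 5760 GB
```

RAID 5 7+1 would push the FC efficiency to 87.5% (12600 GB), at the cost of a wider failure domain per set.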




Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group | DVGFX2 by: Matt