Disk upgrade
We recently upgraded our 3PAR with new 1.2TB disks; the existing disks are 800GB. Now we have both 800GB and 1.2TB disks under the same CPG, and I can see data being placed on the new disks after tunesys. HP did not create a new CPG when adding the disks.
1. Is it a good approach to keep the two different disk sizes in the same CPG (all other specs, such as device RPM, are the same)?
2. If the new disks need to be added to a new CPG, how can we do that? Do we need to remove those disks from the existing CPG and add them to a new CPG? Since this is a production environment, is there any risk of data loss?
3. We have the Dynamic Optimization feature available.
Re: Disk upgrade
I'm guessing the old drives are 900GB (which is no longer available).
According to best practice it is generally okay to mix different-sized disks in a CPG as long as the smallest drive is equal to or more than 50% of the capacity of the biggest drive. The only issue is that the IOPS per GB is lower on bigger drives, so if your performance utilization on the old drives was at something like 90%, you will get a problem down the line when the 1.2TB drives are more than 900GB full. I've not seen it happen, but I'm pretty sure that this corner case exists on some array somewhere in the world.
So if I were you, I would just keep calm and keep only the existing CPG.
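If you want to sanity-check how the data is being spread after tunesys, a few read-only CLI commands should do it. Treat this as a sketch only; the exact flags can differ between 3PAR OS versions, so check the CLI reference first, and <cpg_name> is a placeholder:

# Physical disks with size/type, to confirm both drive sizes are present and healthy
showpd -p -devtype FC

# Per-disk chunklet usage, to see how evenly tunesys has spread the data
showpd -c

# The CPG's growth settings, to confirm it is not filtered to one drive size
showcpg -sdg <cpg_name>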
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Disk upgrade
Thanks for your reply.
As you mentioned, once disk utilization reaches 90% on the 900GB disks, we still have more than 250GB free on each 1.2TB disk.
At that point, how is it going to behave? Will the 900GB disks keep filling gradually to 100%, and then the remaining space on the 1.2TB disks start being used? I assume this scenario will create performance issues.
Do you still recommend putting them in the same CPG?
Re: Disk upgrade
I was giving an example of 90% performance utilization, while you seem to be talking about capacity utilization.
When you manage a storage array you have two drivers for doing an upgrade: you either need more performance or you need more capacity.
If your driver for upgrading was capacity, then it most likely will not become an issue, and it will most definitely not be an issue in the near future. The corner case here is if you were reaching 75% performance utilization of your drives as you approached 100% capacity utilization.
If your driver for upgrading was performance, you should have bought smaller and/or faster drives, and keeping everything in one CPG might become an issue.
As an extreme simplification:
Assume 10k SAS drives deliver about 150 random IOPS each. That gives roughly 0.167 IOPS per GB for 900GB drives and 0.125 IOPS per GB for 1.2TB drives. So a 1.2TB drive is slower "per GB" than a 900GB drive with the same specs. Okay?
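If you want to reproduce the numbers yourself (the 150 IOPS per drive figure is an assumption, not measured from any array):

# Back-of-the-envelope IOPS per GB for each drive size
awk 'BEGIN { printf "900GB: %.3f IOPS/GB\n1.2TB: %.3f IOPS/GB\n", 150/900, 150/1200 }'
# 900GB: 0.167 IOPS/GB
# 1.2TB: 0.125 IOPS/GB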
If the backend "IO density" of your system was 0.15 IOPS per GB, you would never have any problems with 900GB drives, and when there are no issues you don't go looking for them. However, if your backend "IO density" stays the same as your data grows, you will see that the 1.2TB drives don't deliver enough "performance per capacity" to meet your requirement. So when the system grows and your 900GB drives are full and your 1.2TB drives are at 900GB... everything is good. But going forward with further growth, the 1.2TB drives aren't able to keep up. First of all, the 1.2TB drives are the only drives with additional capacity, so all new writes will hit these drives. That will create an imbalance in the load on the drives, where the 1.2TB drives need to deliver more IOPS than your 900GB drives. In addition to that, they will have more data stored on them, which will produce more IOPS.
So as long as the 900GB drives don't fill up (and you stay below 900GB of capacity usage on the 1.2TB drives), you will have exactly the same "per drive" performance on both new and old drives. And as long as you weren't close to any backend performance issues prior to your upgrade, you shouldn't see any difference between the 900GB and 1.2TB drives.
And if you were seeing performance issues on the 900GB drives before the upgrade, then you can do whatever you want (multiple CPGs, DO, etc.), but it will not change the fact that you will run out of performance before you run out of capacity if the IO density stays the same for future growth. I would actually say that keeping everything in one CPG would be a benefit, as you would have all the performance available in the one CPG holding all your volumes, rather than having X amount of IOPS available for one set of VVs and Y amount of IOPS available for another set. The drawback is that when you reach the maximum performance of the drives you will impact all VVs at the same time, but it will take longer to get there than with multiple CPGs.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Disk upgrade
Hi all,
I'm sorry to hijack this thread, but I'm interested in the same problem, except that in my case the disks have a capacity difference of more than 50%:
480GB MLC vs 1.92TB cMLC
Mammagutt, your explanation is very thorough, but what approach would you suggest in a case where:
- a CPG already exists for the 480GB disks, with devtype == SSD and RPM == 100
- the new 1.92TB disks also have RPM == 100
- you want to keep the new disks segregated from the old ones at the CPG level
My first idea is to:
1) create a new CPG for the old disks using a pattern with -devid == the old SSD model
2) tunevv all VVs to the new CPG
3) remove the old CPG
4) create a new CPG for the new disks, again using a -devid pattern, and start using it for new VVs
Is this approach correct?
Thanks in advance.
Regards,
Antonio
Re: Disk upgrade
I would use the tc_lt and tc_gt parameters instead, as a replacement drive might have a different devid, while the size generally will not change.
For SSDs I might take a different approach. Depending on the number of 480GB and 1.92TB drives and the system/number of nodes, it might not really matter. With spinning media your backend (disks) will (almost) always be the limiting factor, while with SSDs the nodes will very quickly be the limiting factor.
If you have a non-technical reason for wanting to separate the disks, I would just change the existing CPG, create a new one, tune the volumes you want to move and run tunesys to clean up any chunklets that have ended up in the wrong CPG.
Your way isn't wrong, but it involves more manual steps and is more time-consuming.
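Roughly, it would look something like the sketch below. Treat it as a sketch only: the CPG names and the chunklet threshold are made up, and the exact option syntax should be checked against the CLI reference for your 3PAR OS version before touching a production array.

# See how many chunklets each SSD actually has, to pick a threshold
# that falls between the two drive sizes
showpd -c -p -devtype SSD

# Restrict the existing CPG to the small SSDs only (threshold is an example)
setcpg -p -devtype SSD -tc_lt 1000 SSD_CPG_OLD

# Create a new CPG that only grows on the big SSDs
createcpg -p -devtype SSD -tc_gt 1000 SSD_CPG_NEW

# Move the VVs you want onto the new CPG (repeat per VV)
tunevv usr_cpg SSD_CPG_NEW <vv_name>

# Rebalance and clean up chunklets that ended up in the wrong CPG
tunesys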
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Disk upgrade
>I would use the tc_lt and tc_gt parameters instead, as a replacement drive might have a different devid, while the size generally will not change.
Good point and suggestion. I had totally overlooked the potential for replacement parts to have a different model!
>For SSDs I might take a different approach. Depending on the number of 480GB and 1.92TB drives and the system/number of nodes, it might not really matter. With spinning media your backend (disks) will (almost) always be the limiting factor, while with SSDs the nodes will very quickly be the limiting factor.
I get the point, and you are of course correct, considering that we have 2x 2-node/4-cage 8200s, each with these SSDs:
16 old 480GB MLC and 8 new 1.92TB cMLC
>If you have a non-technical reason for wanting to separate the disks, I would just change the existing CPG, create a new one, tune the volumes you want to move and run tunesys to clean up any chunklets that have ended up in the wrong CPG.
Well, the only "technical reason" is the supposedly better write endurance of MLC vs cMLC:
We are slowly but steadily moving away from ppers+rcopy and refactoring all HA/DR into higher application stacks (a combination of SIOS DataKeeper, LBs, SQL DAG, etc.) to get more granular control and visibility and, last but not least, more streamlined servicing (and emergency) procedures/workflows.
My thinking was to use an R5 3+1 CPG on the MLC SSDs for "write intensive" but "ephemeral" objects like TempDBs, SIOS or Windows log/journal volumes, and the new cMLC for data with an R6 6+2 CPG.
Since those volumes are write-oriented but need little total space, and because MLC is supposed to have more write endurance, I was thinking that having a separate CPG would be a good idea.
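For reference, the two CPGs I had in mind would be created roughly like this (names and the chunklet threshold are just examples, and the options should be double-checked against the CLI version we run):

# R5 3+1 on the old 480GB MLC for the write-heavy but ephemeral volumes
createcpg -t r5 -ssz 4 -p -devtype SSD -tc_lt 1000 SSD_MLC_R5

# R6 6+2 on the new 1.92TB cMLC for data
createcpg -t r6 -ssz 8 -p -devtype SSD -tc_gt 1000 SSD_cMLC_R6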
However, thinking about it a bit more, and considering that the MLC disks are two years old and that the new cMLC NAND/controllers have definitely better specifications, I can probably consider them equivalent and not worry about having two CPGs =)
Thanks again for your useful answers.
Regards,
Antonio
Re: Disk upgrade
If I'm not mistaken, the SSD warranty includes wear-out, so unless you're planning to keep the array for more than 7 years, wear-out shouldn't even be a discussion.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Disk upgrade
we have 2x 2-node/4-cage 8200s, each with these SSDs:
16 old 480GB MLC and 8 new 1.92TB cMLC
Do you have two separate 8200s, or a single 8200 shelf of disk (with 2 nodes) plus 4 cages?
When you say each one of those has 16 old and 8 new, does that mean you have 5 sets of 24 SSDs: one tray for the node shelf, plus 4 additional dumb shelves?
I am trying to visualize the number of SAS loops and how your shelves are attached, and, more importantly, how many of the NEW SSDs there are per SAS loop. If you have enough new SSDs per SAS loop that write performance will be bottlenecked by the loop and not the PDs, then it might be OK to leave them all in one CPG.
After rebalancing, once all your PDs grow to the point that the 480GB drives are full, your new writes will be concentrated on the new drives. On a 4-node system, 32 drives was the sweet spot where the SAS loops limited performance and adding more SSD was for capacity only. Making an educated guess, 16 drives is enough to push a 2-node system at full speed. So if your baseline is writing to 120 SSDs (bottlenecked by the SAS controllers) and you reach a capacity point where you are only writing to 48 drives (also bottlenecked by the SAS controllers)... I think you are safe.
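If it helps, the cage/loop layout should be visible straight from the CLI with something like the commands below (read-only; flags may vary slightly by 3PAR OS version):

# Cages and which node ports/loops they are attached to
showcage -d

# SSDs with their cage positions, to count new drives per loop
showpd -p -devtype SSD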
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: Disk upgrade
Hello Richard,
Thanks for the additional note, and sorry for being too terse about the system setup.
There are actually two separate 8200s, each configured as follows:
- 1 node shelf plus 3 additional cages
- 4x 480GB MLC plus 2x 1.92TB cMLC per cage (node shelf included), for a total of 16+8 = 24 SSDs
- 48x 1.2TB 10K FC disks
Regards,
Antonio