Editing CPG RAID characteristics
Am I right in thinking that, when the RAID characteristics for a CPG are altered, all new LD creation will use the new RAID characteristics and that, when tunesys is performed, all existing LDs will be recreated using the new RAID characteristics? Thanks in advance.
Re: Editing CPG RAID characteristics
Yes.
Just watch out for failed tuneld tasks when running tunesys.
Showld (-d) will show the characteristics for each LD, so you can verify when it's done.
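The advice above might look like this on the CLI. This is only a sketch using the commands named in the thread; the task ID is a placeholder for whatever showtask reports on your system.

```shell
# Kick off the retune; tunesys spawns tuneld subtasks for the affected LDs
tunesys

# Watch the task list for failed tuneld subtasks
showtask

# Inspect a specific task in detail (1234 is an illustrative task ID)
showtask -d 1234

# After completion, confirm each LD picked up the new RAID characteristics
showld -d
```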
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
- Richard Siemers
- Site Admin
- Posts: 1331
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: Editing CPG RAID characteristics
Precisely.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: Editing CPG RAID characteristics
Thanks for the assistance, folks. Much appreciated.
- Posts: 36
- Joined: Sat Jan 07, 2017 3:50 am
Re: Editing CPG RAID characteristics
Hi, can anyone give more clarification around failed tuneld tasks?
We had a lot of them after adding extra capacity to our 3PAR and running tunesys.
In our case tunesys was started only to balance data across the system, not to tune parameters such as RAID 5 to RAID 6. Maybe it had nothing to do with it, but instead of better back-end response we saw some customer-set alerts about response time (+9 ms for SSD).
Re: Editing CPG RAID characteristics
Did you see the increased latency after running tunesys, or during it? And what is the alert rule?
If after, I would check the load if that has changed.
If during, which 3PAR OS version are you running, and what types of volumes do you have? Tunesys adds some load to the system as it redistributes data across the nodes, ports, cages, and disks. I'm not sure at what granularity it works, but I would expect some latency spikes if you are trying to access blocks that are being tuned at the same time. I don't recall exactly which patches, but I seem to remember some tunesys- and latency-related fixes in some 3.2.2 patches for MU2, 3, and 4, maybe 1.5 to 2 years back. So if you're lagging behind on patches, it might make things worse.
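The checks suggested above can be done from the CLI. A rough sketch, assuming standard 3PAR CLI commands; adjust iteration counts to taste:

```shell
# Confirm the 3PAR OS level and installed patches
showversion -a -b

# Sample back-end (physical disk) service times while tunesys is running,
# to see whether latency spikes line up with the tuning activity
statpd -iter 1
```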
Re: Editing CPG RAID characteristics
We saw increased latency for the first time after adding 32 additional SSDs to the existing 64, and after tunesys was run (with tuneld errors seen). We are running 3.3.1 EMU1 with the latest patches. First we saw >3 ms response on the back end. Nothing to worry about (HPE's answer). Then we saw >5 ms response on the back end, and later, for more than one 5-minute interval, >10 ms response on the SSD back end. Of course these are spikes, not seen all day and not every day.
We were suspicious about the tuneld errors seen, with no good explanation from HPE. One should not expect worse back-end performance after adding extra hardware (SSDs). At the moment an HPE specialist is looking at the performance metrics to find an explanation.
Re: Editing CPG RAID characteristics
Without a lot of info here, I'm thinking this sounds like some node issue. Would be nice to hear what they find out.
Just out of curiosity: SSDs and 3.3.1... Are you running dedupe, and if so, which version?
Re: Editing CPG RAID characteristics
We did implement dedupe (the old version) for all VMware data (Linux and Windows) in production. The savings ratio was only 1.1:1, so we un-deduped again (per best practices). Maybe in the future we will dedupe the VDI environment with the new dedupe version.
We also compressed everything in the development and acceptance environments, but went back after some issues there too. We are now waiting for 3.3.1 MU2, which we were promised, written in blood, is the OS without compaction issues.
Re: Editing CPG RAID characteristics
Just checking: are you aware of the checkvv command with the -dedup_dryrun, -compr_dryrun (3.3.1 and later), and -dedup_compr_dryrun (3.3.1 and later) options to check for possible savings prior to converting?
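For example, using only the dry-run options named above (myvv is a placeholder volume name; substitute your own VV):

```shell
# Estimate space savings before actually converting the volume
checkvv -dedup_dryrun myvv          # dedupe estimate only
checkvv -compr_dryrun myvv          # compression estimate only (3.3.1+)
checkvv -dedup_compr_dryrun myvv    # combined estimate (3.3.1+)

# The dry runs execute as background tasks; check their status and results
showtask
```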