godfather007 wrote:
6+2 and 6+2 and 6+2 and 6+2 (32 drives) can act in parallel as four RAID 6 sets striped like RAID 0, which is more ideal for writing. But writes land on the SSD CPG anyway. In effect there are 24 drives reading simultaneously here.
14+2 and 14+2 (32 drives) is indeed on the same physical disks, which is not ideal. With 28 drives reading, it actually reads faster but writes slower than the config above.
I think you made a nice point here. I'm not going to play with it.
Martijn
Yes, 6+2 (x4) aligns with the physical number of disks, so you write full sets; for pure sequential writes there might be a difference, if that's all you do. But once you start doing random reads or overwriting existing blocks it all changes, because you will read/write from whichever physical drives hold the data. On a 3PAR it isn't as simple as saying that disks 0 to 7 are always used in the same 6+2 set, because it randomizes the chunklets when wide-striping to prevent hot spots.
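A toy illustration of that last point, assuming a much-simplified model of chunklet placement (this is not the actual 3PAR allocation logic, just a sketch of why no fixed group of 8 disks "owns" a set):

```python
import random

# Toy model of wide-striping: each RAID set's chunklets are drawn from
# random physical drives, so no fixed group of 8 disks maps to a set.
# (Simplified illustration only, not real 3PAR placement logic.)
def place_sets(n_drives=32, n_sets=4, set_size=8, seed=1):
    rng = random.Random(seed)
    placements = []
    for _ in range(n_sets):
        # one set never puts two chunklets on the same drive
        placements.append(sorted(rng.sample(range(n_drives), set_size)))
    return placements

for i, drives in enumerate(place_sets()):
    print(f"set {i}: drives {drives}")
```

Run it a few times with different seeds and you'll see the same physical drive turning up in different sets, which is the whole point of randomizing chunklets: every drive ends up carrying a slice of many sets, so no set can become a hot spot.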
I can fully understand the wish to increase the set size to improve capacity efficiency, and you can do that simply by changing the set size on the existing CPG and running tunesys. Just monitor the progress so you don't run out of space while it is converting. If it starts getting full, cancel the tunesys job, run compactcpg, and restart tunesys. Just remember that a bigger set size means a bigger rebuild after a disk failure.
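For anyone weighing the trade-off, the efficiency gain is easy to put numbers on (a minimal sketch; `usable_fraction` is just an illustrative helper, not a 3PAR tool, using the data+parity counts from this thread):

```python
# Capacity efficiency of a RAID 6 set: data disks / total disks in the set.
def usable_fraction(data_disks: int, parity_disks: int = 2) -> float:
    """Fraction of raw capacity left for data in one RAID 6 set."""
    return data_disks / (data_disks + parity_disks)

for d in (6, 14):
    print(f"{d}+2: {usable_fraction(d):.1%} usable")
# 6+2 gives 75.0% usable, 14+2 gives 87.5% usable
```

So going from 6+2 to 14+2 buys you roughly 12.5 points of usable capacity, which is exactly what you pay for with the longer rebuilds mentioned above.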