Try to avoid software RAID at the operating-system level: if a disk group fails, it can create issues with volume repair at the OS level. If software RAID at the OS level is mandatory for your applications, I would suggest using a single pool with multiple volumes under it.
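Elsewhere in the thread the requirement is quantified as a sustained 3,000 Mbit/s out of one enclosure, even while a single failure (controller loss, rebuild) is in progress. A quick sanity check of that figure against nominal FC host-port speeds — a sketch only: the 8/16 Gb rates are the FC port speeds an MSA 2050 supports, and the 80% efficiency factor is a rough assumption for encoding and protocol overhead, not a measurement:

```python
# Rough sanity check: can a single surviving FC host port carry the
# required 3,000 Mbit/s? Port speeds are nominal line rates; a
# conservative 80% efficiency factor stands in for encoding/protocol
# overhead. All figures here are illustrative assumptions.

REQUIRED_MBIT_S = 3_000

FC_PORT_GBIT = {"8GFC": 8, "16GFC": 16}  # nominal line rates
EFFICIENCY = 0.8                          # assumed overhead factor

def usable_mbit_s(gbit: int, efficiency: float = EFFICIENCY) -> float:
    """Approximate usable throughput of one port in Mbit/s."""
    return gbit * 1_000 * efficiency

for name, gbit in FC_PORT_GBIT.items():
    ok = usable_mbit_s(gbit) >= REQUIRED_MBIT_S
    print(f"{name}: ~{usable_mbit_s(gbit):.0f} Mbit/s usable -> "
          f"{'meets' if ok else 'misses'} the 3,000 Mbit/s requirement")
```

In this rough model even one surviving 8 Gb port comfortably exceeds the requirement, so during a failure or rebuild the bottleneck is more likely the degraded disk group than the host ports.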
Regarding volume availability on both controllers: that's not possible. A volume is owned by one controller at a time. If the owning controller fails, the other controller takes over.

> I need all-time availability of my 2 FC LUNs on all controllers. But if I put them into Pool A and Pool B, it sounds like I can access my LUN 2 (Pool B) over controller A only if controller B has failed. My (currently single) FC host has 2 FC ports, and the 2nd FC port is connected only to controller A of the 2050 SAN enclosure - nothing on controller B - yet I can see and access both my LUN 1 (Pool A) and my LUN 2 (Pool B). I "only" need a constant 3,000 Mbit/s out of one enclosure, even if one controller has failed, a rebuild has started, or any other single failure has occurred. In a 2-pool config: if I connect both controller A and controller B, will something block my access to Pool B (LUN 2) over controller A?

The Pool A volumes will be managed by controller A, and the Pool B volumes will be managed by controller B. The preferred path for Pool A volume data traffic is controller A host ports A1-A4; similarly, the preferred path for Pool B volume data traffic is controller B host ports B1-B4. If controller A fails, is shut down, or is restarted, controller B takes ownership and data traffic for the volumes in both pools flows through controller B's host ports. Pool A volumes sending traffic through controller B's host ports adds a small latency. When you have 2 pools, you get controller-level load balancing; the advantage of a single pool with 2 disk groups is that the load is shared across all the disks. There is no single answer to which is best: you need to test each configuration in your environment and decide which gives optimal performance. The maximum number of disks you can have in a RAID-6 disk group under the power-of-2 recommendation is 10, which gives optimal sequential write performance.
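The ownership and failover rules described in the answer can be sketched as a small model. The class and method names here are hypothetical, not any real API; the code only encodes what the thread states: each pool has one owning controller, the owner's host ports are the preferred path, access through the partner's ports still works but adds a small latency, and on failure the partner takes ownership.

```python
# Minimal model of MSA-style pool ownership and failover, following the
# rules stated in the thread. All names are illustrative, not an API.

class Enclosure:
    def __init__(self):
        # Each pool is owned by exactly one controller at a time.
        self.owner = {"Pool A": "A", "Pool B": "B"}
        self.failed = set()

    def fail_controller(self, ctrl: str):
        """On owning-controller failure, the partner takes ownership."""
        self.failed.add(ctrl)
        partner = "B" if ctrl == "A" else "A"
        for pool, owner in self.owner.items():
            if owner == ctrl:
                self.owner[pool] = partner

    def path(self, pool: str, via_ctrl: str) -> str:
        """Classify an access path to a pool through a given controller."""
        if via_ctrl in self.failed:
            return "unavailable"    # ports of a failed controller
        if self.owner[pool] == via_ctrl:
            return "preferred"      # owner's host ports (A1-A4 / B1-B4)
        return "non-preferred"      # works, but adds a small latency

enc = Enclosure()
print(enc.path("Pool B", "A"))  # non-preferred: access is NOT blocked
enc.fail_controller("B")
print(enc.path("Pool B", "A"))  # preferred: controller A took ownership
```

This also answers the quoted question directly: in a 2-pool config nothing blocks access to Pool B over controller A's ports — it is simply a non-preferred (slightly slower) path until a failover makes controller A the owner.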
I also found this post ( ), and the last message from SUBHAJIT KHANBARMAN_1 makes me nervous:
HPE Blog, Austria, Germany & Switzerland

For a long time, our company used MSA 2040 SAN enclosures to provide LUN disks to an FC storage system. Now we use the MSA 2050 SAN, with this Pool A/B config. Our “storage solution” spans multiple FC hosts, all using multipath FC connections for load balancing and redundancy. As an example, one FC host can have up to 4 FC connections to the same FC LUN disk. Our software creates a RAID-0 or RAID-10 across all of these accessible FC LUN disks. In the MSA 2040 we created 2 RAID-6 disk groups of 6 disks each (12-disk enclosure). In the MSA 2050 SAN we do the same, but each RAID-6 group sits in a different pool (A/B).

I have already read the HPE MSA Gen5 virtual storage – Reference guide, but for the “simple” configuration I need, it is not easy to decide which option is “best”:

“Pool A on Controller A (with only one - Archive - LUN disk) and
Pool B on Controller B (with only one - Archive - LUN disk)”
… in case one controller fails, the other “temporarily” takes care of the other pool …

“Only Pool A on Controller A/B (one Archive – Tier 2 – with 2 LUN disks)”
The reference guide calls this “One Pool, with 100% headroom” … I like this config more, since both controllers take care of the ONE pool all the time …

Does someone know the advantages or disadvantages of these configs? Do we have different cache usage with 2 pools? To me, the 2nd (single-pool) option looks more like my old “linear” storage. But the post you gave me ( ) is not enough for my scenario.
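The disk-group sizing in this setup — 6-disk RAID-6 groups versus the guide's power-of-2 recommendation of 10 — comes down to simple arithmetic: RAID-6 spends two disks per group on parity, and the recommendation wants the *data*-disk count to be a power of two. A small sketch (the usable fractions are raw parity overhead only, ignoring spares and metadata):

```python
# RAID-6 disk-group arithmetic: 2 parity disks per group, the rest data.
# "Power of 2" recommendation: the number of DATA disks should be a
# power of two, so a 10-disk group = 8 data + 2 parity.

RAID6_PARITY_DISKS = 2

def data_disks(group_size: int) -> int:
    return group_size - RAID6_PARITY_DISKS

def usable_fraction(group_size: int) -> float:
    return data_disks(group_size) / group_size

def follows_power_of_2(group_size: int) -> bool:
    n = data_disks(group_size)
    return n > 0 and (n & (n - 1)) == 0

for size in (6, 10):
    print(f"{size}-disk RAID-6 group: {data_disks(size)} data disks, "
          f"{usable_fraction(size):.0%} usable, "
          f"power-of-2: {follows_power_of_2(size)}")
# A 6-disk group (4 data + 2 parity) also satisfies power-of-2 (4 = 2^2);
# 10 disks is simply the LARGEST group size that still does (8 = 2^3).
```

So the existing 6-disk groups already follow the power-of-2 guidance; a 10-disk group would trade a second disk group for a better usable-capacity ratio (80% vs. 67%).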