

I currently manage 2 VMware ESXi hosts in our office. They have local disks attached to them and have been working great for several years. We have a separate non-virtualised system running Veeam for backup and replication between the ESXi hosts.

Nice setup. You can do direct-attached FC with many FC arrays, so you don't need switches. I do recommend that people doing any kind of DAS array deployment follow your model and use a synchronous active/active controller system (HUS, VSP, 3PAR, etc.) and not something cheap and active/passive (some, like Nimble, will outright not support this config for this reason). The "Pet Rock" config, while less popular these days, does work pretty well at a reasonable cost.

R3DPAND4 wrote: Don't add a SPOF to your infrastructure unnecessarily.

Sometimes that comes from redundancy, and sometimes it comes from buying higher-quality products with better MTBFs and 4-hour support agreements. I'd rather have 1 Aircraft Carrier than 2 small rubber boats.
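
The MTBF argument is easy to put rough numbers on. A minimal sketch, assuming steady-state availability of MTBF / (MTBF + MTTR) and statistically independent failures; the MTBF and repair-time figures below are invented for illustration, not taken from any vendor:

    # Back-of-the-envelope availability math for the "1 aircraft carrier
    # vs. 2 rubber boats" argument. All figures are illustrative guesses.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # "Aircraft carrier": one high-end dual-controller array with a good
    # MTBF and a 4-hour support agreement (so MTTR ~ 4 hours).
    carrier = availability(mtbf_hours=200_000, mttr_hours=4)

    # "Rubber boat": a cheaper box, lower MTBF, next-business-day parts
    # (call it 48 hours to repair).
    boat = availability(mtbf_hours=50_000, mttr_hours=48)

    # Two independent boats in a redundant pair: service is lost only
    # when both happen to be down at the same time.
    boat_pair = 1 - (1 - boat) ** 2

    print(f"single high-end array : {carrier:.6f}")
    print(f"single cheap box      : {boat:.6f}")
    print(f"redundant cheap pair  : {boat_pair:.6f}")

Depending on the figures you plug in, either approach can come out ahead, which is rather the point: redundancy and build quality are two different routes to the same availability target.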

In Hyper-V with clustering I could use Windows Storage Spaces to do the software RAID, but I don't want to go down the route of Hyper-V unless I have to, and I also don't have the budget for VMware VSAN. Is there a solution for doing a hyper-converged setup within a budget, but sticking to commercial products?

Take a look at StarWind vSAN or AetherStore.

AetherStore isn't certified (also, I'm not even sure it speaks a supported protocol) and the performance would be hilariously bad. StarWind can work with VMware or Hyper-V; the setup is just a bit different.
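
For anyone wondering what a two-node "vSAN" layer such as StarWind actually does with a write, here is a toy sketch of synchronous mirroring: commit locally, copy to the partner host, and acknowledge only once both copies exist. This is a conceptual illustration with my own made-up class names, not StarWind's design or API:

    # Toy model of a two-node synchronous mirror, the general technique a
    # product like StarWind vSAN implements. Conceptual sketch, not vendor code.

    class Node:
        def __init__(self, name: str):
            self.name = name
            self.blocks = {}  # block number -> data held on this host's local disks

        def write_local(self, lba: int, data: bytes) -> bool:
            self.blocks[lba] = data
            return True  # pretend the local disk acknowledged the write

    class SyncMirror:
        """Acknowledge a guest write only after BOTH hosts hold the data."""
        def __init__(self, local: Node, partner: Node):
            self.local, self.partner = local, partner

        def write(self, lba: int, data: bytes) -> bool:
            ok_local = self.local.write_local(lba, data)
            ok_remote = self.partner.write_local(lba, data)  # crosses the replication link
            # If either copy failed, the write must not be reported as durable.
            return ok_local and ok_remote

    mirror = SyncMirror(Node("host-a"), Node("host-b"))
    print("write acked:", mirror.write(42, b"vm data"))  # True once both hosts have it

The practical consequence is that every write crosses the wire before it is acknowledged, which is why these two-node setups want a fast, dedicated replication link between the hosts.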

This setup has worked great so far, but it's not expandable and doesn't give a particularly high level of availability, so I would like to upgrade to shared storage. I have looked into iSCSI and Fibre Channel, but without going with a non-commercial product such as FreeNAS the cost is prohibitive: you'd ideally need 10Gbps infrastructure, a reasonable-quality SAN for the storage, and then there is all of the administration on top of it. I would therefore like to go with a shared SAS array. This means only buying a JBOD enclosure and an HBA for each of the hosts, and it's very unlikely we will ever need more than 2 hosts, so a dual-expander JBOD would be perfect for this job, using SAS disks of course. My question, though: the SAS JBODs I am looking at are simply JBODs. They have no RAID capabilities, so they would simply present the disks in the enclosure to the host system, with no RAID protection.
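
Whatever layer ends up providing the RAID, the usable-capacity trade-off is the same. As a rough feel for what protection costs in space, a sketch assuming a hypothetical 12-bay shelf of 4 TB SAS disks; the bay count, disk size, and RAID levels are example figures only, not a recommendation:

    # Rough usable-capacity math for host-side RAID over a plain JBOD.
    # Example figures only: a hypothetical 12-bay shelf of 4 TB SAS disks.

    BAYS = 12
    DISK_TB = 4

    def usable_tb(level: str, disks: int = BAYS, size: int = DISK_TB) -> float:
        if level == "raid10":        # mirrored pairs: half the raw space
            return disks // 2 * size
        if level == "raid6":         # two disks' worth of parity
            return (disks - 2) * size
        if level == "raid5":         # one disk's worth of parity
            return (disks - 1) * size
        raise ValueError(f"unknown level: {level}")

    raw = BAYS * DISK_TB
    for level in ("raid5", "raid6", "raid10"):
        u = usable_tb(level)
        print(f"{level:6}: {u} TB usable of {raw} TB raw ({u / raw:.0%})")

Note too that even with dual expanders and dual-ported SAS drives, the shelf's backplane is still a shared component, which feeds back into the SPOF discussion above.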
