Scenario:
- I have two separate vCenter Servers.
- Each has a vSAN cluster.
- Each cluster is connected to additional separate storage.
- Each cluster is on a different IP range, different naming convention and different domain.
- Hosts in each cluster are identical in spec, reside in the same rack and are physically connected to the same e/w & n/w switches.
- vCenter & ESXi hosts are on 6.0u3 with the latest patches and an embedded PSC.
Goal:
- Create one cluster with all hosts & one vSAN datastore, one vDS.
- Decommission one vCenter server.
- Decommission one of the external storage devices.
Steps:
- Export the vDS config on vCenter B and import to vCenter A.
- Decommission vSAN cluster on cluster B.
- Move hosts from vCenter B, cluster B to vCenter A, temporary cluster.
- Re-IP & rename hosts to match the hosts in cluster A (see the sketch after this list).
- Add hosts into vSAN cluster A.
- Merge the Distributed Virtual Switches.
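As a sketch of the re-IP & rename step above: the changes can be made from the console/DCUI shell of each host (changing the management IP will drop an SSH session). The hostname, domain, addresses and vmkernel interface below are placeholders for illustration, not values from my environment.

# Rename the host to match the cluster A naming convention and domain
esxcli system hostname set --host=esx-a-05 --domain=clustera.local

# Re-IP the management vmkernel interface (vmk0 here) with a static address
esxcli network ip interface ipv4 set -i vmk0 -I 10.10.10.15 -N 255.255.255.0 -t static

# Point the host at the cluster A default gateway and DNS
esxcli network ip route ipv4 add --gateway 10.10.10.1 --network default
esxcli network ip dns server add --server=10.10.10.2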
2. Decommission vSAN cluster on cluster B.
An issue I noticed was that one cluster (the primary) had 4 disks in each disk group, but the secondary (the one being moved) had only 3 disks in its disk group despite having 4 physical disks in the host. Looking at the host from the vSAN Disk Management view, I could see that 4th disk as ineligible.
I found no reason for this. I checked to confirm the hypervisor wasn’t running on an SSD of the same size; thankfully it wasn’t. I was able to add the disk as a datastore on each host (formatting it as VMFS, as I had no choice) and then saw 3 of 4 disks in use.
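If you hit the same “ineligible” state, the host CLI can at least show what vSAN sees; a leftover partition table on the device is a common cause. A minimal sketch, with naa.xxx as a placeholder for the real device identifier:

# List local devices to find the identifier of the ineligible disk
esxcli storage core device list

# Check whether the device already carries a partition table
partedUtil getptbl /vmfs/devices/disks/naa.xxx

# Disks already claimed by vSAN are listed here; the ineligible one will be absent
esxcli vsan storage list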
The next step (or so I thought) was to turn off vSAN.
So lesson one: don’t turn off vSAN first. If you do, you cannot manage the disk groups at all; I thought turning it off would let me delete them, so I had to configure vSAN again (and this time it picked up the 4th disk per group automatically :-).
Note: when I disabled and re-enabled vSAN, all the files were still there (files I don’t care about, but nonetheless they survived, for info).
So let’s do it right.
First, turn off the vSAN Health check.
Delete the disk groups individually from each host by selecting the disk group and clicking the “Remove the disk group” icon. (Note: I had nothing of use on the vSAN datastore.) This is a live environment, not a lab, just to note.
I chose no data migration as nothing was needed from the vSAN cluster.
The disk group will disappear once the removal completes.
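For reference, the same removal can be driven from the host CLI: removing a disk group’s cache-tier SSD removes the whole group. A sketch, assuming the --evacuation-mode option is present on your esxcli build and using naa.yyy as a placeholder for the cache device:

# Show the disks vSAN has claimed on this host and note the cache-tier SSD
esxcli vsan storage list

# Remove the disk group by removing its cache SSD; noAction skips data evacuation
esxcli vsan storage remove --ssd=naa.yyy --evacuation-mode=noAction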
Once all disk groups are deleted from their hosts, you will see 0 of x disks in use.
This notification also appeared on the cluster.
Next, turn off/disable vSAN.
Enable HA.
When I was done I still had the message below, which is pretty strange seeing as vSAN is disabled. The only option offered is to go to Disk Management, yet that is not accessible because vSAN is disabled.
To clear these alerts I had to connect to each host via SSH and run the esxcli vsan cluster leave command.
Run esxcli vsan cluster get to confirm that the hosts are no longer part of a vSAN cluster.
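For completeness, the two commands run from an SSH session on each host; once a host has left, the get call reports that vSAN clustering is not enabled rather than showing cluster membership details.

# Remove this host from the (now unused) vSAN cluster to clear the stale alerts
esxcli vsan cluster leave

# Confirm the host is no longer a cluster member
esxcli vsan cluster get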