After the LUN migration fiasco I finally have the space to move forward with VMware and keep converting more servers. It's even more important now because we are supposed to be moving our data center, and the less I have to physically transport the better. It also buys high availability, flexibility, and efficiency. A lot of buzzwords, right!
I've created a 1TB LUN for all my VMDK files, mostly the guest OSes. This should be more than enough for what I need currently, with plenty of room for swap space and backups. I then shared this massive LUN to both ESX hosts inside of Navisphere (select the LUN and add it to a storage group; in this case you add the LUN to both ESX hosts' storage groups). That's the basics.
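If you'd rather script that add-to-storage-group step than click through Navisphere, the Navisphere CLI (naviseccli) can do the same thing. A rough sketch, where the SP address, storage group names, and LUN numbers are all made-up placeholders for your own values:

```shell
# Present the shared LUN to both ESX hosts' storage groups.
# -alu is the array-side LUN number, -hlu is the LUN number the host will see.
naviseccli -h spa-address storagegroup -addhlu -gname ESX1_SG -hlu 1 -alu 20
naviseccli -h spa-address storagegroup -addhlu -gname ESX2_SG -hlu 1 -alu 20

# Sanity-check what each storage group now presents:
naviseccli -h spa-address storagegroup -list -gname ESX1_SG
```

Rescan storage on each ESX host afterwards so the new LUN shows up.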
I'm going to take a step back and discuss the overall picture. The gist of the matter is to keep my servers up all the time, or with as little downtime as possible. VMware already lets you reboot a server in about a minute and a half, so that's already faster than a physical box rebooting. But in the cases where you can't afford to drop any connections whatsoever, VMotion is the way to go. This is what's meant by high availability. To accomplish this you'll need two ESX servers and one Virtual Center server with the appropriate license to unlock VMotion.
So I went and installed Virtual Center server on a Windows 2003 server and pointed it at my two ESX hosts. You'll need a central license server that manages all the licenses for the ESX hosts and the Virtual Center server. This is no biggie. Go to your account page on the VMware site and convert your licenses to a server-based (central) file, or something like that. I pretty much edited one of my license files and added the rest to the last line. Then I installed the license manager on the Virtual Center server and updated the ESX hosts to look there. I was good to go. Then you'll need to update your Infrastructure Client by pointing it at the Virtual Center server so that you can see both hosts. Create a new cluster and add your hosts. All your existing VMs will appear on both hosts if you have any.
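For context, those server-based licenses are standard FLEXlm files, which is why pasting extra entries onto the end of one file works. A combined file looks roughly like this; the hostname, MAC address, feature names, and counts below are illustrative placeholders, and the signature data that follows each INCREMENT line is elided:

```text
SERVER license-host 000c29aabbcc 27000
VENDOR VMWARELM
INCREMENT PROD_ESX_FULL VMWARELM 2005.05 permanent 4 \
        (signature data from your own license file)
INCREMENT VC_VMOTION VMWARELM 2005.05 permanent 2 \
        (signature data from your own license file)
```

Each INCREMENT line is one licensed feature, so merging licenses really is just appending the INCREMENT blocks into one file served by the license manager.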
The final step is to create a VMotion network, which is actually called a VMkernel VMotion port. You must have a few NICs in both hosts that you can allocate to network redundancy and VMotion. You assign a NIC for the VMotion network and give it a private IP address, x.x.x.1, and on the other host do the same thing with x.x.x.2. At this point you can physically get a crossover cable and connect the NICs you just assigned; you'll have to go to the back of the hosts and start plugging until you find the right NICs, LOL. Or you can just VLAN those NICs and you should be set. I chose the first method this time around.
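The same VMkernel setup can be done from the ESX service console instead of the client GUI. A sketch using the ESX 3.x esxcfg tools, where the vSwitch name, NIC name, and 10.10.10.x addresses are examples, not anything from my actual setup:

```shell
esxcfg-vswitch -a vSwitch1                # new vSwitch dedicated to VMotion
esxcfg-vswitch -L vmnic2 vSwitch1         # link the spare physical NIC to it
esxcfg-vswitch -A VMotion vSwitch1        # add a port group named VMotion
esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 VMotion   # VMkernel IP (.2 on the other host)
```

Once both hosts are done, `vmkping 10.10.10.2` from the first host is a quick way to confirm the VMkernel interfaces can actually see each other over that crossover cable before you attempt a migration.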
Sounds confusing but it really isn't. Once you get started the info just flows to your brain from out of nowhere....LOL!
Once all that was out of the way I created a test VM on the shared LUN I created above. I put it on the network and opened the console (or VNC'd to it). I then started a continuous ping from inside the VM to our DNS server (since that's always on) and proceeded to do a manual VMotion: drag and drop the VM from its ESX host to the other ESX host, answer a few questions in the pop-up window, and it's a done deal. You can see exactly when the VM moves by watching the ping hiccup, but you don't lose a single connection.
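If you'd rather not eyeball the ping output, you can capture it and let awk find the hiccup for you. A small sketch, assuming you saved the continuous ping to a file I'm calling ping.log (e.g. `ping 10.0.0.53 | tee ping.log` inside the VM, address made up):

```shell
# Scan saved ping output for dropped packets (icmp_seq gaps) and for
# slow replies (the brief hiccup while the VM moves between hosts).
awk '/icmp_seq=/ {
    seq = $0; sub(/.*icmp_seq=/, "", seq); sub(/ .*/, "", seq)
    if (prev != "" && seq != prev + 1)
        printf "dropped %d packet(s) after seq %d\n", seq - prev - 1, prev
    prev = seq
    rtt = $0; sub(/.*time=/, "", rtt); sub(/ .*/, "", rtt)
    if (rtt + 0 > 100) printf "slow reply seq %d: %s ms\n", seq, rtt
}' ping.log
```

A clean VMotion should show one or two slow replies and no drops at all, which matches what I saw: a visible hiccup, zero lost connections.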
Amazing ain't it!