Linux VMs and OCFS2 shared physical RDM September 27, 2010
Posted by vneil in Linux, storage, VMware.
We’ve recently needed to set up Linux virtual machines using large shared disks with the OCFS2 cluster filesystem. To complicate matters, some of the Linux servers in the cluster might run as zLinux servers on a mainframe under z/VM, all sharing the same LUNs.
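Inside the guests the shared LUN just appears as another SCSI disk; for context, a minimal OCFS2 setup on SLES looks something like this (the device name, label and mount point here are examples, not our real ones):

    # Format the shared LUN once, from a single node, with slots for up to 4 nodes
    mkfs.ocfs2 -L shared01 -N 4 /dev/sdb
    # On every node: start the O2CB cluster stack (it reads /etc/ocfs2/cluster.conf)
    /etc/init.d/o2cb start
    # Mount the clustered filesystem on each node
    mount -t ocfs2 /dev/sdb /shared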
I had set up a couple of test virtual machines in this scenario. The first I set up with a physical RDM of the shared LUN and enabled SCSI bus sharing on the adapter. For the second test virtual machine I pointed it at the mapped VMDK file of the physical RDM from the first virtual machine; this seemed the obvious way to share the physical RDM between VMs. This all works, and both Linux servers can use the LUN and the clustered filesystem. The problem I came across is that as soon as you enable SCSI bus sharing, vMotion is disabled. I wanted the ability to migrate these virtual machines if I needed to, so I looked into how to achieve this.
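For reference, the relevant part of the first VM’s .vmx file looked roughly like this (the adapter number and mapping file name are just examples):

    # Second SCSI adapter with bus sharing enabled across hosts
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "physical"
    # The physical RDM mapping file attached to the shared adapter
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "shared-lun-rdm.vmdk"
    scsi1:0.mode = "independent-persistent"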
If you only have one virtual machine and do not enable SCSI bus sharing then vMotion is possible. The solution I came up with was to set up each virtual machine the same way, i.e. each with the LUN as its own physical RDM, but of course once you have allocated the LUN to one virtual machine it is no longer visible to any other VM. The easiest way to allow this is the config.vpxd.filter.rdmFilter configuration option, which is set in the vCenter settings under Administration > vCenter Server Settings > Advanced Settings and is detailed in this KB article. If the setting is not in the list of Advanced Settings it can be added.
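The key and value to add are simply:

    config.vpxd.filter.rdmFilter = false

With the filter disabled, vCenter stops hiding LUNs that are already in use as an RDM by another virtual machine, so the same LUN can be added as a physical RDM to each VM in turn.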
As Duncan Epping rightly describes here, it’s not really wise to leave this option set to false. What I did was to set the option to false, carefully allocate the LUN to the three Linux servers I was setting up, and then set it back to true.
With the virtual machines set up like this, the restriction is that they cannot run on the same host, so I set DRS anti-affinity rules to keep them apart.
It is also possible to do this via the command line, without the need to change the Advanced Setting, as described in this mail archive I found. I haven’t tried this method.
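From what I can tell, the command-line approach boils down to creating the RDM mapping file directly on the ESXi host with vmkfstools, which bypasses the vCenter filter altogether. Something along these lines (the device ID, datastore and VM directory are made up):

    # Create a physical (pass-through) RDM mapping file for the shared LUN;
    # -z creates a physical-mode RDM, -r would create a virtual-mode one
    vmkfstools -z /vmfs/devices/disks/naa.60060160a0b1280099e1f8d4 \
        /vmfs/volumes/datastore1/linux-vm2/shared-lun-rdm.vmdk
    # The mapping file can then be attached to the VM as an existing disk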
I don’t know if this setup is supported or not, but it seems to achieve the goal we need. I’d be interested to hear any opinions on this.
VMware and SLES: a good match? September 12, 2010
Posted by vneil in Linux, VMware.
I’m afraid this is just a bit of a moan. I have seen a few posts lately regarding VMware’s partnership with Novell and SLES, so I thought I would post my experience of running an ESXi cluster with SLES Linux virtual machines.
We started running SLES 10 on an ESXi 3.5 cluster, and for the VM tools we initially used the RPMs packaged by VMware. This was fine until we upgraded the hosts to ESXi 4.0; after that, VM tools installed from the RPMs show up in vCenter with a status of “unmanaged”, as described in the documentation. We then decided to switch to the VM tools installed from the ESXi hosts via VUM, which returned the status to “OK” in vCenter and gave us the benefit of being able to upgrade the virtual machines’ VM tools with VUM.
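For anyone taking the RPM route on SLES, the packages come from VMware’s online repository and install with zypper; on our systems it looked roughly like this (the repository URL and package names vary by ESX version and guest release, so treat them as examples):

    # Add the VMware Tools package repository for SLES 11 guests on ESX(i) 4.0
    # (check packages.vmware.com for the path matching your versions)
    zypper addrepo http://packages.vmware.com/tools/esx/4.0/sles11/x86_64 vmware-tools
    zypper refresh
    # Install the Tools plus the matching prebuilt kernel modules
    zypper install vmware-tools-esx vmware-tools-esx-kmods-default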
The problem started when we began upgrading our Linux servers from SLES 10 SP3 to SLES 11 SP1: it seems the VM tools distributed with ESXi and downloaded via VUM do not include prebuilt binary kernel modules for SP1; they have them for SLES 11 GA but not SP1. This is fine if your SLES 11 SP1 virtual machines have compilers and kernel sources installed, but our production servers do not, for obvious reasons. The upshot of this is that we have switched back to the RPM-distributed tools and will have to put up with the status saying “unmanaged”.
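You can see this for yourself by extracting the Tools tarball that the host presents and checking which kernels have prebuilt modules; on our installer the layout was roughly as follows (paths can differ between Tools versions):

    # Extract the tarball from the host's "Install VMware Tools" ISO
    tar -xzf /media/cdrom/VMwareTools-*.tar.gz -C /tmp
    # List the kernels shipped with prebuilt binary modules; the SLES 11 GA
    # kernel (2.6.27.x) is present, the SP1 kernel (2.6.32.x) is not
    ls /tmp/vmware-tools-distrib/lib/modules/binary/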
I have checked the VM tools that come with the ESXi 4.1 installation and these also do not have the binary modules for SP1.
This is quite annoying, as we had a small battle with the other Unix admins to switch to using VUM to upgrade VM tools; having to switch back to RPMs now means it will be almost impossible to ever switch back to VUM as and when SLES 11 SP1 is properly supported by VMware ESXi.