Friday, 29 February 2008

VMware renovations

A few months ago I purchased an HP ProLiant ML110 G4 (QuickSpecs) server to use as a VMware ESX 3 server. Adding 4GB of RAM, a Lights-Out 100c remote management card, a Smart Array E200 SAS controller, an SFF-8484 to SATA fan-out cable and four 500GB SATA2 HDDs completed my spending spree.

When I came to assemble everything, I realised I'd purchased only one fan-out cable (whoops!) and that the drive cage in the case would only hold four HDDs anyway, so four drives in total was my limit. I had intended to use the standard 160GB HDD as the ESX system disk and have the four 500GB HDDs provide ~1.5TB of RAID5-protected VMFS storage. Instead I settled for the four 500GB drives alone and installed ESX on the RAID5 volume too. Not ideal, but at the time I needed ESX running in a hurry.

As this year has an extra day (today), I thought I would finish the job properly. In preparation I'd purchased another SFF-8484 cable and two low-profile 160GB SATA2 disks (a bargain at £24 each) to configure as a RAID1+0 system disk. Making use of my new workshop and a tiny bit of my son's Meccano stock, I used a 5¼" to 3½" HDD bracket to mount both drives in the spare 5¼" tray. Having the two disks mounted ¼" apart should keep them as cool and quiet as possible.

I installed the new parts and, as the system started up, added a new RAID1+0 logical drive (#2 on the controller) before booting from a 3.0.2 update 1 (build 61618) CD. I installed ESX from scratch onto the cciss/c0d1 volume, leaving the original volume alone. Of course, on rebooting to the HDDs, the original cciss/c0d0 install started up. Because the RAID5 volume had been created first it remained logical drive #1, and the controller always boots from the first logical drive, so simply adding the RAID1+0 volume achieved nothing.
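If you want to confirm which logical drive a running install actually lives on, the service console will tell you. A minimal sketch (the cciss device names are the E200 ones described above; the command itself works on any Linux-style console):

```shell
# Print the block device mounted as the root filesystem.
# On this box it reports a /dev/cciss/c0d0 partition, i.e. the first
# E200 logical drive - proof the old install is still the one booting.
awk '$2 == "/" {print $1}' /proc/mounts
```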

So, how to reorder the logical volumes? In the past this was always a huge drama but I felt sure a quick Google would reveal how such things get fixed over time. I was disappointed. Oh well, I would just have to imagine the controller had died and I was replacing it with a new one.

Call me chicken, but before I started I copied everything to other storage so I knew I wasn't going to lose anything. To switch the logical drive numbers I...
  • Switched off
  • Disconnected all the disks
  • Powered on, the E200 found nothing and did not offer to run the BIOS configuration
  • Started again :-(
  • Connected only the two new RAID1+0 disks (less valuable in terms of data)
  • Powered on and saw a warning that one of the two logical drives was missing; hit F1 to continue
  • Quickly hit F8 to get to E200 BIOS configuration
  • Deleted the absent RAID5 volume - "You will lose all your data!" it said
  • RAID1+0 volume was now listed (significantly) as volume #1
  • Plugged the RAID5 disks in, waited for them to show up
  • Added a RAID5 volume, significantly now #2, with exactly the same config as before
  • Saved config and rebooted
Sure enough, everything booted and the new ESX install automagically added the VMFS storage from the original RAID5 set with all data intact. Great news!

After a bit of zip-TARring and data movement I cleared the RAID5 set of anything I wanted to keep and repartitioned it to one big VMFS storage volume. Job done.
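The data shuffle itself is nothing more exotic than tar. A sketch of the idea, with illustrative stand-in paths rather than the real datastore paths (on ESX 3 the source would be somewhere under /vmfs/volumes/):

```shell
# Illustrative only: archive one VM directory to other storage, then
# verify the archive lists its contents before the source is wiped.
SRC=/tmp/datastore1           # stand-in for the old RAID5 VMFS volume
DST=/tmp/backup               # stand-in for the temporary storage
mkdir -p "$SRC/myvm" "$DST"
echo "pretend vmdk" > "$SRC/myvm/myvm-flat.vmdk"

tar czf "$DST/myvm.tgz" -C "$SRC" myvm    # pack the whole VM directory
tar tzf "$DST/myvm.tgz"                   # sanity-check: list what's inside
```

The `-C` flag keeps the archive paths relative to the datastore, so the VM unpacks cleanly wherever it lands.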
