My main goal in setting up this blog is to share my insights and discoveries in computing. These include my experimental setups surrounding Linux and my professional experience with data center systems (SANs, VMware, SQL clusters, etc.). Over the next few weeks, I'll begin documenting my attempts to build a Linux-HA system using different setups. The software I've tested includes DRBD, Gluster, SCST and the GUI LCMC. LCMC is a GUI for configuring either Corosync/Pacemaker or Heartbeat/Pacemaker.
I'm a huge fan of Ubuntu Linux and built my systems on a VMware ESXi host and two small physical servers.
My approach will be a modular one. Many of the systems I tried reuse the same software packages, so instead of just giving you one recipe for each system, I'll write up instructions for each module; you can then use these as building blocks to assemble your own SAN.
My test SANs ran in a virtual environment using Ubuntu as my Linux distro, with the following setups:
- Two DRBD nodes, SCST as the iSCSI target and LCMC as the configuration GUI.
- Two GlusterFS nodes, a local GlusterFS client, iSCSI and LCMC.
- (Most interesting) Two GlusterFS nodes, two GlusterFS client / iSCSI head nodes and LCMC. This setup allows the iSCSI head nodes to scale out to as many GlusterFS nodes as you can build.
- I then repeated the two-node setups on physical boxes to compare throughput performance.
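To give a taste of the first module, here is a minimal sketch of what a two-node DRBD resource definition can look like. The hostnames, IP addresses and device paths are placeholders for your own environment, and the syntax shown is for the DRBD 8.x series that ships with Ubuntu:

```
# /etc/drbd.d/r0.res -- minimal two-node sketch; hostnames, IPs and
# backing devices below are assumptions, substitute your own.
resource r0 {
  protocol C;                 # synchronous replication (write confirmed on both nodes)
  device    /dev/drbd0;       # the replicated block device presented to the system
  disk      /dev/sdb1;        # backing partition on each node
  meta-disk internal;         # keep DRBD metadata on the backing disk itself
  on san-node1 {
    address 10.0.0.1:7788;
  }
  on san-node2 {
    address 10.0.0.2:7788;
  }
}
```

Protocol C is the usual choice for a SAN, since a write isn't acknowledged until it has reached both nodes.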
From the above, you can see there is a lot of repetition going on. You may be asking why I chose SCST instead of, say, the IET target. From my research, IET doesn't support SCSI-3 persistent reservations, which VMware and Windows clustering require.
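If you want to check for yourself whether a target supports SCSI-3 persistent reservations, the `sg_persist` tool from the sg3_utils package can query it from the initiator side. The device path below is a placeholder for the LUN as your initiator sees it:

```shell
# Install the SCSI generic utilities on the initiator (Ubuntu/Debian).
sudo apt-get install sg3-utils

# /dev/sdx is a placeholder for the iSCSI-backed device on the initiator.
sudo sg_persist --in --report-capabilities /dev/sdx   # shows PR support flags
sudo sg_persist --in --read-keys /dev/sdx             # lists registered PR keys
```

If the target lacks SCSI-3 PR support, these commands will report it (or fail), which is a quick way to rule a target in or out before building a cluster on top of it.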
I’ll cover the pitfalls I ran into in building each module.
One special note I must make: if you decide to build a SAN, EVERYONE online insists you implement STONITH (Shoot The Other Node In The Head) to prevent data corruption in a split-brain situation. As I find time, I'll work on adding a STONITH setup. The most popular method is an APC UPS with a serial connection that remotely powers off the offending node.
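As a rough sketch of where that ends up in Pacemaker, here is what configuring a serial APC UPS as a fencing device might look like via the `crm` shell. The plugin name, serial device and hostnames are assumptions based on the stonith plugins shipped with cluster-glue; check `stonith -L` on your own nodes for what's actually available:

```shell
# Hypothetical STONITH resource for an APC Smart-UPS on a serial port.
# 'apcsmart', /dev/ttyS0 and the hostnames are assumptions for your setup.
sudo crm configure primitive stonith-apc stonith:apcsmart \
    params ttydev="/dev/ttyS0" hostlist="san-node1 san-node2"

# Tell Pacemaker that fencing is available and should be used.
sudo crm configure property stonith-enabled="true"
```

With this in place, when the cluster loses contact with a node it can cut that node's power through the UPS instead of guessing, which is exactly what prevents the split-brain corruption everyone warns about.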