Home Lab 2.0 – Storage network

No matter what virtual machine (VM) platform you’re building, you need a solid foundation. One of the most critical choices is where to store your VMs and how the VM hosts will access it.

Usually the storage is separate from the VM host, but it doesn’t have to be. If you want to keep a lab environment as compact as possible you can always store the virtual machines on the same server as the storage, and for large-scale deployments there are options like VMware’s Virtual SAN that distribute the storage across all the nodes.

I want a separate storage server for two reasons: first, I can lump all the storage into one device, not just for VMs but for media and file storage too; second, it makes adding more VM nodes down the line very easy if I want high availability (HA) or need more oomph. It also keeps the VM host very simple.

Given that my storage and my VM host are therefore separate machines, I’ll need some form of interconnect. Excluding fun (but pricey!) stuff like Fibre Channel, I’m left with the network options, namely iSCSI or NFS.

iSCSI (Internet Small Computer System Interface) has been around for a while now and is a block-level protocol, meaning it lets a device access storage across a network as if it were a local drive, rather than a file share. It appears complex, but setting it up is remarkably easy once you get your head around it.
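To give a feel for just how easy, here’s roughly what exposing a LUN looks like using the Linux LIO target and the open-iscsi initiator. The backing file, size, IQN and address are all made-up placeholders for illustration; storage appliances wrap the same steps in a web UI:

```shell
# On the storage box: back a LUN with a file and publish it over iSCSI.
# (Paths, sizes and IQNs below are examples, not my real config.)
targetcli /backstores/fileio create vmstore /tank/vmstore.img 500G
targetcli /iscsi create iqn.2015-01.lab.example:vmstore
targetcli /iscsi/iqn.2015-01.lab.example:vmstore/tpg1/luns \
    create /backstores/fileio/vmstore

# On the VM host: discover the target and log in.
# The LUN then shows up as an ordinary block device (e.g. /dev/sdb).
iscsiadm -m discovery -t sendtargets -p 10.0.10.5
iscsiadm -m node -T iqn.2015-01.lab.example:vmstore -p 10.0.10.5 --login
```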

NFS (Network File System) is the standard way in the Linux/Unix world to share files, much like SMB is for Windows. It is a file-level protocol, meaning it can access individual files and folders across the network, rather than the all-or-nothing of iSCSI.
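For comparison, sharing a datastore over NFS is barely more than a one-liner on a Linux storage box. A sketch, with example paths and an example subnet:

```shell
# /etc/exports on the storage server (path and subnet are examples):
#   /tank/vmstore  10.0.10.0/24(rw,sync,no_subtree_check)

exportfs -ra          # re-read /etc/exports and apply the change

# On the VM host, the share then mounts like any other filesystem:
mount -t nfs 10.0.10.5:/tank/vmstore /mnt/vmstore
```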

Given that both options are supported by most VM hosts and the performance difference between them seems to be negligible, the choice largely comes down to preference.

I chose iSCSI for two reasons. One is that until NFS 4.1, traffic to a single datastore mostly travels over a single connection, so at best I could use LACP for redundancy across NICs but wouldn’t see the doubled bandwidth I’m after. The other is that different systems support different NFS versions, so depending on the VM platform I chose, I might not get to use the version of NFS I want.

So I went with iSCSI, which gives me MPIO multipathing. This lets me use several NICs on each end for more bandwidth, but it brings its own issues, namely security and latency. iSCSI is an unencrypted protocol, which means anyone who can sniff the packets as they go down the wire can see the data (unlikely, but I’m building this properly!). You can add IPsec encryption to the network, but that’s a big overhead. The second issue is latency: the less of it you have, the better iSCSI works.
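On a Linux initiator, multipathing boils down to creating one open-iscsi interface per storage NIC and logging in once per path, letting dm-multipath stitch the sessions into a single device. A sketch, assuming two NICs named eth1/eth2 and an example portal address:

```shell
# Bind one open-iscsi interface to each storage NIC
# (interface and NIC names here are examples).
iscsiadm -m iface -o new -I iface0
iscsiadm -m iface -o update -I iface0 -n iface.net_ifacename -v eth1
iscsiadm -m iface -o new -I iface1
iscsiadm -m iface -o update -I iface1 -n iface.net_ifacename -v eth2

# Discover the target through both interfaces, then log in on every path.
iscsiadm -m discovery -t sendtargets -p 10.0.10.5 -I iface0 -I iface1
iscsiadm -m node --login

# dm-multipath should now show one LUN with two active paths.
multipath -ll
```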

These two factors, however, lead to the same solution, and a simple one at that: a second network just for storage. Strictly speaking I don’t need to add a switch at this point, but I have done for later, which leads on to something rather surprising; iSCSI works best on unmanaged switches! The general consensus on the SpiceWorks forums is that the extra processing done by a managed switch adds enough latency to make a difference, and who am I to disagree, especially as I have an old Netgear JGS524 kicking about that will do the trick! Separating out the storage traffic also means I don’t need to worry about bandwidth use through my Nortel network switch, and you have to be on the storage network to sniff it, mitigating the lack of encryption. It’s a bit more power usage than a single switch, but hopefully worth the trade-off.
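On the host side, the storage network is just a second NIC on its own subnet with no gateway, so the traffic can’t leak onto the main LAN. A Debian-style sketch, with example addresses; jumbo frames are optional and depend on the switch supporting them:

```shell
# /etc/network/interfaces fragment for the dedicated storage NIC on a
# VM host (NIC name and 10.0.10.0/24 subnet are examples).
# No gateway line: storage traffic stays on the storage switch.
#
#   auto eth1
#   iface eth1 inet static
#       address 10.0.10.11
#       netmask 255.255.255.0
#       mtu 9000    # jumbo frames, if the switch supports them

ifup eth1                 # bring the interface up with the new config
ping -c 3 10.0.10.5       # sanity-check reachability of the storage box
```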

Set up between a temporary Nas4Free box and a Citrix XenServer node, it works a treat, although with only a single storage NIC in each for now. Once I get a proper storage server built I can start playing with multipathing and putting some load on to see what I can get out of it.
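For reference, hooking the LUN up as a storage repository on XenServer is a single xe command once you know the target details. The address, IQN and SCSI ID below are placeholders; the SCSI ID comes from the sr-probe step:

```shell
# Probe the target first -- the error output handily lists the SCSIid
# of each LUN (target address and IQN are example values).
xe sr-probe type=lvmoiscsi \
   device-config:target=10.0.10.5 \
   device-config:targetIQN=iqn.2015-01.lab.example:vmstore

# Create the shared SR using the SCSIid reported by the probe.
xe sr-create name-label="iSCSI VM store" type=lvmoiscsi shared=true \
   device-config:target=10.0.10.5 \
   device-config:targetIQN=iqn.2015-01.lab.example:vmstore \
   device-config:SCSIid=<scsi-id-from-probe>
```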

Home Lab 2.0 – Networking Core Switch

Over the years I have assembled a small, motley collection of networking kit; however, nothing quite fitted the bill for my new lab. I have managed kit, but only 100BaseT, and I have Gigabit kit, but only unmanaged.

What I needed was something with a good number of ports: managed, Gigabit, and with PoE, because I can, and it’s useful for stuff later on. I also needed the 5000 bucks to buy one new. Not wanting to blow the entire budget on a single bit of kit, it was off to eBay to find something ex-corporate. Now I could have grabbed whatever Cisco unit eBay had to offer, but again, they’re not exactly cheap.

Step up Nortel! Oh wait…

Nortel went bust some years ago and the company was sold off in various chunks, but they used to make some solid networking kit. As they’re no longer about, and much less desirable than big-name Cisco, their switches go for a lot less second hand than almost everything else. In fact, after a bit of looking around I picked up a BayStack 5520-48T-PWR, which gives me 48 ports of lovely PoE Gigabit managed networking, delivered from a seller on the other side of the States for under 250 dollars. Nice!

The fun thing about Nortel switches is that the switch arm was bought by Avaya, who still sell them, in a different color box. They’ll even support the original Nortel boxes if you want to spend money on support. In fact, when I booted the switch, I got this:


That’s right, the logo on the outside doesn’t match the inside.

Of course, as with any bargain, there is a catch. In this case, though, it’s not exactly a deal breaker. The switch is a full layer 3 switch for IPv4, but it doesn’t have the hardware to route IPv6, so it’ll only do layer 2 for IPv6. That isn’t a massive deal unless you’re running multiple IPv6 VLANs, and you can always add a separate router to handle IPv6 later; in fact, a quick glance at eBay suggests you can get something pretty heavy-duty from your favorite network kit provider for under 200 bucks.

As people I’ve worked with might tell you, I’m not the world’s biggest Avaya fan, having had to work with an old IP Office system, and their support can be pretty ropey at times, but I’m not doing anything particularly taxing or out of the ordinary, so we’ll see how it goes.

It’s now sitting on my desk, humming away and routing IPv4 VLANs quite happily; it does indeed do PoE, and it’s not too horrendous to configure. In fact, I’ve pretty much only needed to glance at Michael McNamara’s excellent blog to get 90% of the configuration done (if you Google “5520” and whatever you need, the first link is nearly always his site anyway).
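To give a flavor of the CLI, carving out a VLAN on the BayStack looks something like the following. This is a sketch from memory of the Nortel/Avaya syntax; the VLAN ID, name and port range are made-up examples, so check against the 5520 command reference before trusting it:

```shell
# From the BayStack 5520 CLI (example VLAN 10 on ports 1-4):
#   enable
#   configure terminal
#   vlan create 10 name lab type port
#   vlan members add 10 1-4
#   vlan ports 1-4 pvid 10
#   show vlan        # confirm membership and PVIDs
```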

Now I just need to finish off the LackRack and get it mounted.