Suddenly a wild switch appears…

After my disappointment at the Ubiquiti EdgeSwitches not supporting IPv6 routing, I thought I'd be leaving my plans to upgrade the core network in my lab for another year or two. But then I had a thought: if two years ago my Nortel 5520 switch was being knocked out cheap on eBay because it had gone end of life, was something newer going through the same thing now?

The answer, as it happens, is yes. The Extreme Networks Summit X450e-48p has recently dropped out of support, and apparently at least one US Government agency was using them, so you can pick one up on eBay from a reseller for about $200. Cheap as chips!

They do IPv6 routing (although you'd need to find one with the Advanced license for OSPF) and the firmware is available free from the Extreme Networks site. I haven't got the SSL module yet, though: because of US export restrictions you have to be a company in the US to get into the download area, and I'm not a company, so I'm waiting to see if they hand it over anyway!

There is less power available for PoE (370W rather than the Nortel's 500W), but I'm not running enough PoE kit to ever hit either limit, so that's not an issue. Best of all, they're bright purple, so how could I refuse?

This is why there is now one sat behind me as I put together its new config. Replacing my 2851 router and 5520-48-pwr switch with this unit should get me faster routing and at least 100-150W off the power usage, which will help with the Arizona summer heat.

We’ll see when she’s up and running of course, but if it knocks off 100W I’ll be happy, and the payoff in power savings should break even in less than two years.

Now I just need to learn the ExtremeXOS syntax; the documentation seems pretty good though…

It’s Missing Which Feature?

Living in Arizona as I do, and it being hot like it is right now, my mind wandered to my IT lab. Sat there, fans whirring away turning electricity into little flashing lights and heat, could I do something to ease the load on the A/C and perhaps knock a couple of dollars off the electricity bill too?

My main servers are pretty new and don't use a lot of power, so there's not much I can do there, but my core network is some pretty old kit. My Nortel 5520-48-PWR switch may be the cheapest way to get 48 ports of managed Gigabit PoE, but it's not exactly quiet or energy efficient.

To replace it with something new would cost me far more than I'd save in power. I figured that the break-even needed to be three years or so max to make it worth splashing out, otherwise I might as well wait a year or two for things to drop in price, or for the new shiny to come out.

Thinking further, I also have a Cisco 2851 router in the mix, as the Nortel doesn't do Layer 3 for IPv6. Aha! If I could replace both devices with a single budget L3 switch, I could make a power saving, stick the existing kit in the network lab that's only on when I need it, and justify some shiny new toy *ahem* essential piece of network hardware.

I'd need it to have a decent CLI, so 'smart' web-GUI-only switches were out, as they're usually pretty awful to use. I do have a TP-Link smart switch on my desk, which means I only need one cable back to the rack; it does the job, but it's not exactly nice to use, or full featured.

Based on some rough numbers, the saving on the energy bill was only going to be a few hundred dollars a year: a decent saving, but not if the break-even point is 10 years. This wasn't going to get me top-of-the-line kit, but at $750-odd a Ubiquiti EdgeSwitch ES-48-500W was a three-ish year break-even, and I've already got one of their WiFi access points, so they weren't an unknown quantity to begin with. The numbers looked good: I could knock off at least 200W of power consumption, buy a new piece of kit and believe I was saving money while I did it! What more could you want?
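The man-maths is simple enough to sketch. Assuming roughly $0.12/kWh for electricity (a placeholder rate for illustration, not my actual tariff):

```python
# Rough break-even estimate for swapping the old kit for a new switch.
# The 200W figure is my own estimate; the electricity rate is an
# assumed placeholder for illustration.
watts_saved = 200        # continuous power draw removed, running 24/7
switch_cost_usd = 750    # rough EdgeSwitch ES-48-500W price
rate_per_kwh = 0.12      # assumed electricity rate in USD

kwh_per_year = watts_saved * 24 * 365 / 1000
saving_per_year = kwh_per_year * rate_per_kwh
break_even_years = switch_cost_usd / saving_per_year

print(f"{kwh_per_year:.0f} kWh/yr saved, about ${saving_per_year:.0f}/yr, "
      f"break-even in {break_even_years:.1f} years")
```

At those numbers it works out to roughly three and a half years, and in Arizona the true saving is slightly better, since every watt not dissipated in the rack is a watt the A/C doesn't have to pump back out.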

Man-maths in hand, I pulled up the spec sheet: non-blocking, 10G uplinks, PoE+, Layer 3, STP, Link Aggregation, Policy Routing, and no support contract needed for firmware updates. Everything you'd want to see in a switch (if you were interested in switches!). My heart soared. Could this be the one?

I pulled up the manual; it looked easy enough to configure. But wait, the routing section doesn't seem to mention IPv6. Surely this is a mistake. Perhaps they're going to release it as an update later, as they did with some features for the Wireless Access Points?

Over to the Ubiquiti forums, and a quick browse later my suspicions were confirmed. Not only does the current EdgeSwitch series not do IPv6 routing, it cannot do IPv6 routing, so there's not even the hope of it being enabled later. My heart sank. I'd need a second device to replace the router to make the savings worthwhile, but then the payoff would be longer than justifiable.

This is rather infuriating. I can understand leaving features out for cost purposes, but given that IPv6 is becoming more prevalent (even Cox manage to issue me IPv6 addresses!) I'd consider it core functionality going forward. Certainly, given the life-cycle of networking kit, I'd not want to buy something without parity between the IPv4 and IPv6 functionality. In the more than 10 years between my Nortel hitting the market and the EdgeSwitch being released, you'd have thought that IPv6 routing would have become a standard feature.

I could still route using a virtual router on my ESXi platform, but I'm not sure my already laden TS140 can push the packets fast enough with the other core VMs running. My pfSense firewall is virtual, but internal routing is handled by the 2851, so I could use that, but then the saving isn't worth it.

I’ll put this on the back burner and revisit it when the next version comes out in a year or two, hopefully the next iteration will fill this rather glaring hole in the feature set.


As the Tumbleweed goes by.

The trouble with blogging is finding the time to write. Inspiration invariably strikes when one is not in a position to note it down, and when I am near a keyboard I rarely have the time to sit down and type.

Writing on a tablet or phone is pretty wretched at the best of times; it's come a long way, but typing on a touchscreen is just not as conducive to writing, and certainly not as satisfying as hammering away on a good mechanical keyboard with Cherry MX switches.

Oh, the wondrous noise of productivity when you crack a tricky PowerShell issue and the script just flies from your fingers, the rest of the office alerted to your success by the rapid clickyclickyclickyclickyclickyclicky of the keys (sometimes unfortunately followed by the particular sound of backspace being hit repeatedly). But I digress.

Laptops take that little bit too long to get out and get going, and aren't exactly pocketable. Perhaps I should try a Chromebook, or maybe Google Voice Typing in the car; the results could at least be amusing.

Either way, perhaps I should make the time to note things here, if only so my wife doesn't have to put up with me bemoaning some lost IT cause due to an unforeseen bug or undelivered promise. I doubt many (if any!) will read it, but it gets the ideas out of my head.



Home Lab 2.0 – Storage network

No matter what virtual machine (VM) platform you’re building, you need a solid foundation. One of the most critical choices is where to store your VMs and how the VM hosts will access it.

Usually the storage is separate from the VM host, though it doesn't have to be. If you wanted to keep a lab environment as compact as possible you could always store the virtual machines on the same server as the storage, and for large-scale deployments there are options like VMware's Virtual SAN if you want to distribute the storage between all the nodes.

I want a separate storage server for two reasons: first, I can lump all the storage in one device, not just for VMs but for media and file storage too; second, it makes adding more VM nodes down the line very easy if I want HA or need more oomph. It also keeps the VM host very simple.

Given that my storage and my VM host are therefore separate machines, I'll need some form of interconnect. Excluding fun (but pricey!) stuff like Fibre Channel, I'm left with the network options, namely iSCSI or NFS.

iSCSI (Internet Small Computer System Interface) has been around for a while now and is a block level protocol, meaning it allows a device to access storage across a network as if it were a drive, rather than a file share. It appears complex, but setting it up is remarkably easy once you get your head around it.

NFS (Network File System) is the standard way in the Linux/Unix world to share files, much like SMB is for Windows. It is a file level protocol, meaning that it can access individual files and folders across the network, rather than the all or nothing of iSCSI.

Given that both options are supported by most VM hosts and the performance difference between them seems to be negligible, the choice is largely down to preference.

I chose iSCSI for two reasons. One is that until NFS 4.1, the majority of traffic to a single data store uses a single connection, so at best I could use LACP for redundancy across NICs but wouldn't see the doubled bandwidth I might otherwise. The other is that different systems support different NFS versions, so depending on the VM platform I chose, I might not get to use the version of NFS I wanted.

So I went with iSCSI, which gives me MPIO multipathing, allowing me to use several NICs on each end for more bandwidth. It does bring its own issues though, namely security and latency. iSCSI is an unencrypted protocol, which means anyone who can sniff the packets as they go down the wire can see the data (unlikely, but I'm building this properly!); you can add IPsec encryption to the network, but that's a big overhead. As for latency, the less you have the better it works.

These two factors, however, lead to the same solution, and a simple one at that: a second network just for storage. Strictly speaking I don't need to add a switch at this point, but I have for later, which leads on to something rather surprising: iSCSI works best on unmanaged switches! The general consensus on the SpiceWorks forums is that the extra processing done by a managed switch adds enough latency to make a difference, and who am I to disagree, especially as I have an old Netgear JGS524 kicking about that will do the trick! Separating out the storage traffic also means I don't need to worry about bandwidth use through my Nortel network switch, and you have to be on the storage network to sniff it, mitigating the lack of encryption. A bit more power usage than a single switch, but hopefully worth the trade-off.

Set up through a temporary NAS4Free box to a Citrix XenServer node it works a treat, although with only a single storage NIC in each for now. Once I get a proper storage server set up I can start playing with multipathing and putting some load on to see what I can get out of it.

Home Lab 2.0 – Networking Core Switch

Over the years I have assembled a small, motley collection of networking kit, but nothing quite fitted the bill for my new lab. I have managed kit, but only 100BaseT, and I have Gigabit kit, but only unmanaged.

What I needed was something with a good number of ports, managed, Gigabit, and with PoE, because I can, and it's useful for stuff later on. I also needed the 5000 bucks to buy one. Not wanting to blow the entire budget on a single bit of kit, it was off to eBay to find something ex-corporate. Now I could have grabbed whatever Cisco unit eBay had to offer, but again, they're not exactly cheap.

Step up Nortel! Oh wait…

Nortel went bust some years ago and the company was sold off in various chunks, but they used to make some solid networking kit. As they're no longer around, and much less desirable than big-name Cisco, their switches are a lot cheaper second hand than almost anything else. After looking around a bit I picked up a BayStack 5520-48T-PWR, which gives me 48 ports of lovely PoE Gigabit managed networking, delivered from a seller on the other side of the States for under 250 dollars. Nice!

The fun thing about Nortel switches is that the switch arm was bought by Avaya, who still sell them, in a different color box. They’ll even support the original Nortel boxes if you want to spend money on support. In fact, when I booted the switch, I got this:


That’s right, the logo on the outside doesn’t match the inside.

Of course, as with any bargain there is a catch. In this case, though, it's not exactly a deal breaker. The switch is a full Layer 3 switch for IPv4, but doesn't contain the hardware to do IPv6 routing, so it'll only do Layer 2 for IPv6. This isn't a massive deal unless you're running multiple IPv6 VLANs, and you can always add a separate router to add IPv6 to your network later; in fact a quick glance at eBay suggests you can get something pretty heavy-duty from your favorite network kit provider for under 200 bucks.

As people I've worked with might tell you, I'm not the world's biggest Avaya fan, having had to work with an old IP Office system, and their support can be pretty ropey at times, but I'm not doing anything particularly taxing or out of the ordinary, so we'll see how it goes.

It's now sitting on my desk humming away and routing IPv4 VLANs quite happily; it does indeed do PoE and it's not too horrendous to configure. In fact, I've pretty much only needed to glance at Michael McNamara's excellent blog to get 90% of the configuration I needed done (if you Google 5520 and whatever you need, the first link is nearly always his site anyway).

Now I just need to finish off the LackRack and get it mounted.

What’s in a CNAME?

While setting up the first server of my lab I thought I would add a small aside here on Canonical Name (CNAME) records in DNS. A lot of the DNS documentation out there explains what a CNAME is, but not always why you'd want to use one.

To begin with let's look at the difference between a standard A record and a CNAME. I'm using the ever-present contoso.com as the domain, in the style of Microsoft.

An A record points directly to the server's IP address, much like a name in a phone directory (remember those?) points to a phone number. When a computer looks up an A record it gets the IP, ready for direct communication. They look something like this on the DNS server:

webserver1.contoso.com     A       192.168.0.10

A CNAME always points to an A record, like an alias for the server. When your computer looks up a CNAME, the DNS server will replace the CNAME with the A record it points to and reply with that. A CNAME pointing at another CNAME will probably work, if your DNS server will let you add the record, but don't! It's bad, and it leads to confusion, mistakes, and someone else looking at your DNS records and asking "Which Muppet did this then?". They look like this:

www.contoso.com            CNAME   webserver1.contoso.com

It’s a good idea when setting up any service to create a CNAME for that service, in the above example a web server (in fact, you’ll definitely want to do this for multiple web sites on the same box). This allows you to move that service to another server later on without needing to do any more than change where the CNAME points.

For example, I've just set up an NTP time server on my oVirt Engine server (spoilers!), however I may in the future find that the box isn't capable of controlling my virtual server host nodes. If I had used the A record directly, I would have to go into each node and change the server address, or rename the old server and give its name to the new one, but then what about my time service? Either is much more work than a quick re-point.

So my DNS records would look like this:

server1.contoso.com     A       192.168.0.20
ntp.contoso.com         CNAME   server1.contoso.com
engine.contoso.com      CNAME   server1.contoso.com

I'd add the new server:

server1.contoso.com     A       192.168.0.20
server2.contoso.com     A       192.168.0.21
ntp.contoso.com         CNAME   server1.contoso.com
engine.contoso.com      CNAME   server1.contoso.com

Then when I'm ready to migrate, just tweak the CNAME to repoint the service:

server1.contoso.com     A       192.168.0.20
server2.contoso.com     A       192.168.0.21
ntp.contoso.com         CNAME   server2.contoso.com
engine.contoso.com      CNAME   server1.contoso.com

This also means I can run them concurrently, and roll-back is as easy as reverting the CNAME change.
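The indirection is easy to model. Here's a toy resolver over a dict-based zone (not real DNS, just the lookup logic, with names and IPs made up for illustration) showing how repointing one CNAME migrates a service:

```python
# Toy model of CNAME indirection -- not a real DNS resolver.
# All names and addresses here are invented for the example.
zone = {
    "server1.contoso.com": ("A", "192.168.0.20"),
    "server2.contoso.com": ("A", "192.168.0.21"),
    "ntp.contoso.com":     ("CNAME", "server1.contoso.com"),
}

def resolve(name):
    """Follow CNAME records until an A record is reached."""
    rtype, value = zone[name]
    while rtype == "CNAME":
        rtype, value = zone[value]
    return value

print(resolve("ntp.contoso.com"))   # clients currently land on server1

# Migration day: repoint the alias; no client reconfiguration needed.
zone["ntp.contoso.com"] = ("CNAME", "server2.contoso.com")
print(resolve("ntp.contoso.com"))   # now they land on server2
```

The clients never learn the server names at all; they only ever ask for the service name, which is exactly why the roll-back is a one-line change.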

The catch!

And there is one; not a big one, but it can cause an issue. Each DNS record has a Time To Live, or TTL, which tells a machine how long, in seconds, it should cache the record before checking again with the server. This is normally a good thing, as it means less load on your DNS servers and less DNS traffic, however if I adjust my CNAME as above and the TTL is set to, say, 1800 seconds, it can take a machine 30 minutes to see the change.

The easy way around this is to remember to reduce the TTL to a nice low number before you make the change, remembering that you'll need to wait at least the old TTL for that reduction to take effect. Don't forget to reset it afterwards!
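The arithmetic of that wait is worth spelling out (a sketch with example numbers, not a tool):

```python
# How long does a TTL change take to be visible everywhere?
old_ttl = 1800   # original TTL in seconds (30 minutes)
new_ttl = 60     # temporary low TTL set before the migration

# A cache that fetched the record just before you lowered the TTL can
# hold the old value (with the old TTL) for up to old_ttl seconds, so:
wait_before_repoint = old_ttl    # wait this long after lowering the TTL
lag_after_repoint = new_ttl      # then caches catch up within this long

print(f"Lower the TTL, wait {wait_before_repoint // 60} minutes, repoint, "
      f"and every cache sees the change within {lag_after_repoint} seconds")
```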

Home Lab 2.0 – Desk

Before I get stuck into racking everything into my LackRack I decided I needed somewhere I can work on the kit, which means I first had to find a desk. My monitors are mounted on bolt-through poles, and I didn’t fancy sitting on the floor with them propped up around me!

I’ve been building desks with my Father for many years, usually using cheap legs from Ikea with an edge-lipped MDF top, however I’m now 6000 miles and half the voltage from his power tools and assistance (not to mention his outstanding AutoCAD skills), so I wanted something I could bolt together quickly and wouldn’t need to put a finish on. Having moved from the UK with all the associated expense, I needed it to be reasonably cheap and not need much in the way of tools.


I have a nice alcove in the spare room that is pretty much the depth of a full-size desk (800mm) and, rather handily, just over the width of two full-size desks (1600mm a piece), so it was off to (where else!) Ikea for some of their Galant desks to fill the space. These are their standard office desks, and for $150-ish you can't really complain. I've outfitted an office with these before and they held up about as well as the 600-quid desks they were next to.

Fueled by tasty meatballs, I soon had two white (cheapest!) desks nestled in there. To stop them shifting about and leaving an unsightly gap I also picked up one of their cheapest end table frames and poached the connecting bars, which allowed me to bolt the desks together. It's a shame they don't sell those separately, but at 20 dollars it was hardly bank breaking!

These are the bits you need from a frame, I used part number 900.568.89 which seemed to be the cheapest way to get them.

And this is the result:


Exciting indeed, but if that was it you’d be wondering why I bothered to write this. Of course this is where the fun began. There is just enough space to squeeze an Ikea Ivar upright at each end. (Excuse the lighting, for some reason this room doesn’t have a central light fitting and we’ve not found a suitable table lamp yet).


This allows me to start doing this:


I've done this before, but that time we screwed through the underside of the desk to bolt the Ivar uprights down. This time they are freestanding, as I added a bracket from under the back edge of the desk to the tall uprights to stop them falling forward if pulled. Don't forget to add cross bracing to at least one pair of uprights; I have one set at each end, on the pairs with the longest shelves.


50c in Ikea well spent!

Of course nothing in life is ever easy, and we faced one critical issue: there is no combination of Ivar units that is the same length as my two desks, so we were going to need a fill-in. I could have just made up some shelves of the right length, but fortuitously we spotted these MDF boxes in Ikea which just happened to be the right size, and on sale for about 7 bucks a piece! About the same cost as buying the wood to make the shelves, and no need for a saw, sander and so on.


A couple of holes in each side (clamp a piece of wood under where you’re drilling to help protect the finish when the drill comes out the other side) and then we can get them fitted.

I started with the top box clamped to a piece of wood running across the top shelf to make it line through, then worked my way down. One of the wooden drawers from these old Ikea drawer units was a good size, so I pressed them into use as spacers.



I marked out the holes for each box, drilled a pilot hole, then bolted them to one side of the uprights. This allowed me to pull one side of the shelves forward so I could get the drill in to make the holes as I went down. To neaten up the gap between the box and the uprights so you couldn't see the bolt, I slipped an O-ring into it.

I used lag bolts for two reasons: firstly, Home Depot only had countersunk screws and I didn't want to countersink the holes; secondly, it meant I could do them up quickly with a ratchet rather than awkwardly with a screwdriver!

The gaps between the boxes form little shelves, so there’s a decent amount of space there. I was going to put a plywood plate across the gap at the back, but it seems solid enough not to need it.


The shelves finished, it was time to look at the cable management. First I drilled out holes for the monitor poles and some cable access grommets, one for each pole.


If you are going to do this, take my advice and run a small pilot hole first, then drill up from the bottom through the finish, then down from the top. If you don't, the veneer on the desk is quite prone to cracking and the drill can tear it up on the way out! I'd use a bi-metal hole saw if you can for the grommets. Either way take it slow and watch that the bits don't get too hot; it's surprising how quickly they blunt going through the desk.

Under the desk I added two Galant cable trays and some power bars from Fry's Electronics for power, each one plugged into a single-socket surge protector. They are just zip-tied around the frame; I could have done something more permanent but there's not a lot of point. I fitted the baskets back to front as it made cabling through the holes easier. You can slide two of the tabs that hold them on between the table frame and top to make life easier.


Under my last desk I had three-compartment sectional trunking with phone, network and power. By wiring switches on the top of the desk I could turn off everything but the PCs themselves when I left the desk, to save power. Sadly cost constraints stopped me from doing this right now, but it wouldn't be hard to add later. Besides, wiring up that many 13A sockets in a ring configuration under a desk was not exactly a job I'd choose to do again, although the result was very nice.

I always try to keep the cables up out of the way and not trailing on the ground; this not only looks neat and tidy, it makes vacuuming a lot easier as you don't have to move a mat of cables out of the way, and they're less likely to get damaged.

Now all that is left to do is bolt the monitors on. This is what it looked like before I began cabling, which takes longer than making the desk and is still ongoing:


I use four monitors and they are a rather motley assortment as I have gained them over the years. With an SLI rig I can either use all four when I’m working on something, or one at full pelt for games. It really does come in handy when you have a web page open, a couple of remote sessions and some monitoring stuff up to see it all at once. Ideal for setting up and using a lab.

At the other end is my wife's monitor; she uses it with her laptop via a Toshiba Dynadock and is happy as it is, apart from when I cover the desk in my stuff. Once I've got the monitor heights fixed I might add another shelf above the two single monitors, but for now I just have to finish up and get all the stuff that's in boxes on the floor onto the desk!

It took about a weekend from a Friday night plan to this state, including buying the stuff, grocery shopping and various other errands. If you didn’t want a lie in you could knock it out in a day.

Home Lab 2.0 – Racking

For the past year most, if not all, of my home lab has been packed into boxes, first in storage, then in shipping to our new home. This has made it tricky to do much (apart from sorting out my parents' kit). Now we've finally settled I can rebuild it, but rather than just bolting it all back together again it's time for a rethink, and of course upgrades!

The first issue was where to build it. It used to all live in a 48U rack I was kindly given by my old work. Handily it breaks down to four uprights, a top and a bottom so getting it into position is not too bad. The spare room has full height cupboards with sliding doors that we could put the rack in, then when we have guests I can shut down all unnecessary kit and close the doors. A match made in heaven, apart from the small issue of about a quarter of an inch clearance. Nuts.

We needed a plan B, but what? I could just heap the stuff up on the floor, but that's hardly ideal; racks are not exactly cheap, and I'd rather sink the cash into new hardware than metalwork. So step forward the LackRack from eth-0!

I'd seen this a while ago, and it should do the job of keeping everything neat and tidy without costing me the earth. I had a Lack table that hadn't survived shipping from the UK, and dropping it into the wardrobe showed it would just fit. There is a shelf in the wardrobe; three Lack tables stacked on each other will fit underneath it, and three tables is exactly what I need to house all the kit!

After dining on meatballs at the Swedish furniture emporium known as Ikea the result was this:


As you can see it fits with just a gnat's whisker to spare. The longer table will take the servers, and the top ones will take the various bits of network kit and power distribution. The kit will all have to face to the left to stop the door catching on patch cables, but it should do the trick nicely.

The legs are hollow, which means I may need to add some reinforcement to take the weight, but I'll see how that goes as I start to load it up. Who knows, I might even manage more than one post a month!

The road to Amahi

Earlier this year Microsoft announced the end of TechNet Direct. For those of you unfamiliar with it, this was one of the best things MS ever did, and it will be sorely missed. For a couple of hundred quid a year you had access to nearly every piece of software MS offered, as long as you didn't use it in 'production' (you could build test platforms and try stuff out, but the moment it started being used for work, you had to pay for full licences).

This was a bit of an issue for me. Working with MS platforms, I had set up a pretty big test domain and had placed a server at my parents' house which managed their machines as well as replicating files between their server and mine, giving an off-site backup for all of us. Very handy, and it allowed me to test things running over a WAN. In fact, as I was waiting for my visa at the time, I took the opportunity of the HP cashback offer on N54L MicroServers to upgrade the little Atom box I had there, and split the services running on it into multiple servers running on Hyper-V.

I'd just got it running beautifully when MS announced that TechNet Direct was no more, and as such my server would be unlicensed at the end of my subscription. Not having the cash for multiple 700-quid Server 2012 licences, I decided to finally get stuck into the alternatives.

I'll write up the fun I had trying to get Samba 4 to run as an Active Directory Domain Controller alongside an MS Server 2008 R2 DC later, but for now let's just say that after a lot of tweaking it just wasn't going to be viable, given that it would have to run trouble-free for at least two years with me only able to access it remotely.

So I needed a simple solution, something mainly to back up my parents' files from their desktop and laptop, and to act as an off-site backup for my own data. I'd been using NAS4Free earlier, but without some form of authentication server it was being a pain, and then, as a final straw, it dropped its configuration right when I rather needed it. I'll probably be using NAS4Free again later, so don't take this as a suggestion that it's awful, just that it wasn't quite what I needed for this.

Knowing Windows-based kit much better than *nix, I looked back at Windows Home Server, which seemed perfect for this, apart from the fact that the last version is 2011 and MS have canned it, so there'd never be an upgrade. I could have gone for Server Essentials, but at over 200 pounds it would be quite a spend for this, and given that Essentials has to hold the FSMO roles, if I added my own domain to the network later that would be two separate AD domains I'd have to look after. Again, not what I was looking for.

Surely some chaps had produced a *nix-based equivalent to Windows Home Server though, and a quick Google brought me to Amahi. I took a deep breath, made a large backup and threw the latest Amahi 7 Express onto the server. I installed it on a 250GB drive that came with the MicroServer, which I'd tucked away in the top of the case (where a DVD drive would normally sit), and after a bit of a poke around the GUI it seemed to be what I needed. Be careful of the partitions here: the default created a /home partition using all the spare space, and a / that was only about 45GB, which created a bit of an issue, as I shall reveal below.

Then I added four storage drives into the four drive slots, and after a quick poke around the wiki I had them set up, though with a partition alignment error when I mounted them, so I used fdisk instead of their suggested tool, which seemed to do the trick.

Amahi has a spinoff project called Greyhole, which is similar to the WHS system. Rather than a traditional RAID system, it pools all the drives into one big store. When you copy a file to a share it's stored there until Greyhole copies it off onto the storage drives. You can specify how many drives you want files from each share to be stored on, and thus how protected they are. For example, I have four drives and I've set a share to store two copies; this means that any given file is on at least two separate drives, so if one fails I have another copy. Setting the number of copies higher means I could have two drives with the file on fail and still be OK, but this comes at the expense of total storage space. It also means that, unlike a RAID volume, you can replace any drive with a larger one to give a bigger pool, and you don't need matched drives. Handy when you have a collection of mismatched drives, or you just need that little bit more space!
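The space trade-off is just division; a rough sketch with hypothetical drive sizes:

```python
# Rough usable-space estimate for a Greyhole-style pool.
# Each file is written to `copies` different drives, so usable space is
# roughly the total divided by the copy count (this ignores filesystem
# overhead and the constraint that copies must sit on distinct drives).
drives_gb = [2000, 2000, 1000, 500]   # hypothetical mismatched drives
copies = 2                            # copies kept of each file

total_gb = sum(drives_gb)
usable_gb = total_gb / copies

print(f"{total_gb} GB raw across {len(drives_gb)} drives, "
      f"about {usable_gb:.0f} GB usable with {copies} copies per file")
```

Swap any drive in the list for a bigger one and the usable figure simply grows, which is exactly the flexibility RAID doesn't give you.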

Configuring Greyhole was quite simple, once I realised that for Amahi 7 Express the GUI options don't exist yet and you have to configure it manually, which isn't as hard as it appears. Just follow the wiki and you'll be fine. You'll need to uncomment or edit the shares you need in the /etc/greyhole.conf file; it's easy enough, just look for the Shares Settings and Storage Pool sections and follow the examples.
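For reference, the two sections end up looking roughly like this. The paths and share name are my guesses at a typical layout, and the exact syntax may differ between Greyhole versions, so treat this as a sketch and trust the wiki over it:

```ini
# Storage Pool section: one line per drive in the pool
storage_pool_drive = /mnt/hdd1/gh, min_free: 10gb
storage_pool_drive = /mnt/hdd2/gh, min_free: 10gb

# Shares Settings section: how many copies to keep of each share's files
[Documents]
    num_copies = 2
```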

Finally I started copying the files from the backup to the shares, and this was where I discovered the partition issue: Greyhole can't clear down the landing zone when it's completely full. With only a 45GB landing zone, this was a pain when you have hundreds of gigs to copy! You can fire off the Greyhole process with greyhole -f at the command line, but it's a race as to whether it can process files as fast as they're copied to the share. I grabbed a GParted ISO and resized the home partition (there is very little usage on it anyway, it appears), giving me a much larger landing zone. If you don't have the space on the system drive, you can also move the landing zone to another drive.

It seems to be working nicely now. I need to set up a backup to an external drive just in case, and there are some more bits and pieces to do, but I'm happy with it so far. If they get the documentation squared away a bit and finish off some of the plugins, it'll be pretty hard to beat as a home server for most people. Most of the plugins are paid for (this is how they fund the project), but they all seem to be around 5 US dollars, so they're not exactly spendy. It being Fedora under the hood, you can always roll your own though.

I’ll write more as I play with it further, but for now, it’s a neat solution for those not wanting to get too technical. Perfect for the parents!

Another new beginning…

For not the first time I've been playing around with this blog, although this time not so much by choice! It was previously hosted at 34SP, who had been fine hosts for years, but they've now discontinued their personal hosting offering, and paying 5 quid a month for something I didn't really use properly seemed a little excessive!

They've always been good, but as I've slowly moved my email away (first onto my own servers, then elsewhere) and my domains to DynDNS, I can't really justify paying for their Pro hosting. So it's farewell to them, and hello to free WordPress hosting. I splashed out a whole $13 to use my own domain, big spender I know, but if I don't update this then I'm not out of pocket by much, and if I do, I can always migrate it elsewhere.

For those of you looking for a good host, 34SP are worth checking out; their tech support has always been first rate and I've never had an issue with them.

I've not bothered to move across the two or three posts from the old blog; I can't get into it right now, and I don't think they're particularly worth saving, so they might appear, but probably not. I'm more likely to just rewrite the bits I did.

So expect the odd post on projects I'm working on, probably IT, car or mountain bike related, and on changing the country I'm living in as time goes by, but I wouldn't be rushing back every day for new content!