Archive | December, 2013

A tale of two server providers

28 Dec

I like servers. They let me do all kinds of cool things: serving websites, running game servers, proxying traffic, experimenting with new technologies. But selecting a server for my needs, now that's a mixed bag that I'm never satisfied with.

My very first server was a Linode, since they seemed to have a decent reputation for dedicated hosting. Of course, this was before their credit card leak fiasco. They offered a server in Tokyo, which is about as close as a server can get to Singapore.

Then AWS' free tier offering got too enticing, and I moved to a micro EC2 instance. I was basically only running a minimal website at that point, and didn't feel that paying $20/month at Linode was justifiable. I still racked up some bills though: an extra instance I had forgotten to shut down was left running, no thanks to AWS' unfriendly interface that splits instances by region.

But when it took me an hour to compile node.js on it (required for an integration with an online code editor), I decided that my server needed more juice. A pauper's serving of CPU cycles and RAM wouldn't do!

Once again, I switched back to Linode – their price plans offered the most server horsepower for the amount paid, and AWS’ pricing structure was more complicated than I cared to calculate.

But now, I am thinking of switching YET again. As much as I love the raw power provided, my Linode lacks several crucial features:

  1. Packer integration – according to this GitHub issue, Linode's API apparently does not lend itself to integration with Packer, an up-and-coming technology that I am interested in trying out.
  2. Docker support – it is not provisioned for natively, and the kernels Linode provides do not have the flags enabled that Docker requires (a rough check is sketched right after this list). I could compile my own kernel, but do I really want to go that far?
  3. The CentOS distro I started out with did not support hosting a Starbound server out of the box. I tried a workaround, to no avail: it failed to account for the game's constant updates while in beta. That's not so much my Linode's fault as my choice of distro and being stuck with a single server, but still.
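
For what it's worth, a rough way to eyeball point 2 – just a sketch, assuming the kernel config is exposed at /boot/config-$(uname -r) or /proc/config.gz, and bearing in mind the exact flag list depends on the Docker version:

# grep the kernel config for a few of the options Docker needs (look for =y)
grep -E 'CONFIG_NAMESPACES|CONFIG_NET_NS|CONFIG_PID_NS|CONFIG_CGROUPS|CONFIG_VETH|CONFIG_BRIDGE' /boot/config-$(uname -r)

# some kernels expose their config here instead
zgrep CONFIG_NAMESPACES /proc/config.gz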

Finally, given that even the cheapest such server is not cheap, I was disincentivized from trying out different server setups.

But then I had a paradigm shift.

Did I really have to limit myself to a single server? My initial rationale was cost, and that a single server would fulfil all my needs. That is no longer the case, because I can identify the following contrasting needs:

  1. Maintaining a low-upkeep website. Although I am not hosting anything of import, over time I have offered to host various acquaintances' sites, to better use the excess "free" CPU cycles/RAM/bandwidth my server had after the fixed fee was paid up-front (or, in AWS' case, the free hours). It is therefore important that I at least attempt to maintain 24/7 uptime.
  2. Intensive periods of experimentation. This involves trying out new technologies and implementing new applications and infrastructure, which consumes a lot of CPU/memory/bandwidth as the necessary packages get downloaded and installed.

At heart, these two needs are at odds with each other. One requires low horsepower and constant uptime, the other high horsepower and on-demand uptime. They could be best served, respectively, by a lowly server that is always up and a powerful server that is only paid for when utilized – the latter being an on-demand or spot instance.

AWS offers a long-term pricing plan for servers, known as reserved instances. A 1-year plan for an m1.small instance (1 vCPU, 1 ECU, 1.7GB RAM, 160GB storage) comes out to $67, which is a little over 3 months' worth of my current hosting plan, a Linode 1024 (8 CPU cores at 1x priority, 1GB RAM, 48GB storage).
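
To spell out the arithmetic: assuming the Linode 1024 still runs at roughly the $20/month I mentioned earlier, $67 ÷ $20 per month ≈ 3.4 months – hence "a little over 3 months".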

It would seem like I am locking myself into AWS for a year, but cost-wise it amounts to only about 3 months of my current hosting.

On-demand/spot instances are awesome. They perfectly fulfil my experimentation needs (with leeway for a more powerful server), and can be stopped and started at will. In that sense, it is no different from booting up the Vagrant instance on my desktop whenever I need it, and shutting it down right after.
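
For the on-demand case, starting and stopping is a one-liner with the AWS CLI – a sketch, assuming the CLI is already configured with credentials and using a placeholder instance ID:

# spin up the beefy experimentation box
aws ec2 start-instances --instance-ids i-12345678

# and shut it down when I'm done paying attention (and paying)
aws ec2 stop-instances --instance-ids i-12345678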

One final thorny issue is data. In migrating back and forth between these two providers, transferring data has always been a cumbersome, manual process: scp from the old server to my desktop and back to the new server, or rsync between the two. Either way, it really breaks the workflow of my beautifully crafted Puppet manifests. Plus, a lack of planning around data means that no backups are available. I hope to change that when I switch servers, but I have not mapped out exactly how, yet.
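
For now, the manual shuffle looks something like this – hosts and paths are placeholders, and rsync has to be run from one of the two ends since it cannot copy remote-to-remote in one go:

# run from the new server, pulling the web root off the old one
rsync -avz user@old-server:/srv/www/ /srv/www/

# a crude backup to my desktop would look much the same
rsync -avz user@old-server:/srv/www/ ~/backups/www/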


Symlinks with Vagrant + VirtualBox

26 Dec

This was a very thorny issue for me early on, back when I was trying to update DocHub, which is powered by npm modules. The static HTML files it came packaged with were sufficient, but I wanted the LATEST versions. npm install tries to put files locally and symlink them, which Vagrant made a huge boo-boo about: my console erupted with error messages from npm. I eventually gave up in favor of Zeal, which is godly amounts of awesome as an offline documentation browser.

A year or so later, I had to get symlinks to work again – this time while trying out puppet-rspec. For some arcane reason, it needs to symlink a directory back to the original directory containing the code to be tested, instead of referencing the files relatively in the code. Of course. This time though, I had better luck – I chanced upon a fix that actually worked!
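
If memory serves, the layout it wants is roughly the following (the module name is a placeholder), with the fixtures directory symlinking back to the module root:

# run from the module's root directory
mkdir -p spec/fixtures/modules
ln -s "$(pwd)" spec/fixtures/modules/mymodule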

https://github.com/mitchellh/vagrant/issues/713#issuecomment-17296765

Below are the steps I took personally (cribbed from the above link, of course):

1) Added these lines to my config:

config.vm.provider "virtualbox" do |v|
  # allow symlink creation inside the default /vagrant shared folder
  v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
end
2) Ran this command in an admin command prompt on Windows, from the C:\Windows\system32 directory:

fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1

3) Opened a new command prompt, then vagrant halt if necessary, followed by vagrant up.

This solution really deserves more love than being hidden away behind a GitHub issue comment. So here it is!