SELinux troubleshooter – too good to be true

30 Jan

Currently, I have the uphill task of working with SELinux while reacquainting myself with the apparently simple job of setting up a webserver – in particular, nginx and php-fpm.


Of course, I encountered a whole host of permission errors that are common even without SELinux to begin with, but SELinux threw in an extra layer of permissions that I had to get around. While disabling SELinux is a common response, I had watched SELinux for Mere Mortals, and it pretty much inspired me not to take the easy path. Of course, I quit watching halfway in because of the terribad camera angle and ended up googling for other tutorials.


The basic promise of the SELinux troubleshooter was that it could solve most common permission issues encountered; you just had to install it first:

yum install setroubleshoot

and then run the checker AFTER you encounter the error (so it gets logged):

sealert -a /var/log/audit/audit.log

The suggested fixes that follow on screen, together with the error messages, should be pretty self-explanatory, but in my case it boiled down to:

grep nginx /var/log/audit/audit.log | audit2allow -M mypol

and

grep php-fpm /var/log/audit/audit.log | audit2allow -M mypol

And then to apply the generated policy:

semodule -i mypol.pp


However, the odd thing was that these 2 error messages didn’t appear together. If I “fixed” it for nginx, there would be an error for php-fpm later, and vice-versa.

Obviously there was some conflict between the two separately generated policies for php-fpm and nginx, so hmm, how about drafting a single policy covering both at the same time?

grep -E "nginx|php-fpm" /var/log/audit/audit.log | audit2allow -M mypol

was what I ended up with.
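
For clarity, here is the whole loop as I ended up running it (a sketch based on the commands above; the cat and semodule -l lines are just my own sanity checks, not something setroubleshoot told me to run):

# Generate a policy module covering both sets of logged denials
grep -E "nginx|php-fpm" /var/log/audit/audit.log | audit2allow -M mypol

# Review the human-readable rules before loading anything
cat mypol.te

# Install the compiled module and confirm it is loaded
semodule -i mypol.pp
semodule -l | grep mypol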

The odder thing, though, was that this didn’t work if I ran the commands after encountering the php-fpm error. It only worked if the commands came after encountering the nginx error.

Looking at the generated mypol.te files, I could see the contents of the policy being applied (the installed module itself, mypol.pp, is in binary form).

The contents are as follows; the allow rules are where the two versions differ:

After php-fpm error, the non-working version:

module mypol 1.0;

require {
type httpd_t;
type vmblock_t;
class dir { search getattr };
class file { read getattr open };
}

#============= httpd_t ==============

#!!!! This avc is allowed in the current policy
allow httpd_t vmblock_t:dir { search getattr };
allow httpd_t vmblock_t:file open;

#!!!! This avc is allowed in the current policy
allow httpd_t vmblock_t:file { read getattr };


After nginx error, the working version:

module mypol 1.0;

require {
type httpd_t;
type vmblock_t;
class dir { search getattr };
class file { read getattr open };
}

#============= httpd_t ==============
allow httpd_t vmblock_t:dir search;

#!!!! This avc is allowed in the current policy
allow httpd_t vmblock_t:dir getattr;
allow httpd_t vmblock_t:file getattr;

#!!!! This avc is allowed in the current policy
allow httpd_t vmblock_t:file { read open };


Now, I have no idea what the differences really entail, or why the policies were generated differently; someday I may review it to better understand SELinux. For now, however, it works, and I have development work to be done on the now-functioning webserver!

Yet another dual boot Windows/Linux horror tale

7 Sep

In preparation of studying for the RHCSA 7 cert, I wanted to install CentOS 7 in addition to my existing Windows desktop.

Unfortunately, I (in)conveniently forgot my past issues when trying to get CentOS 6 installed for dual boot. In the back of my mind I had assumed that my new UEFI motherboard could handle things my previous motherboard couldn’t, and so I forgot the painful distinction between the MBR for Windows (and Windows only) and GRUB (which at least tries to play nice, even if it looks ugly). For reference, my past solution was to use Neosmart EasyBCD to dual boot from the MBR.

What follows is a tale of horrors if you just stick to the defaults.

My installation process was pretty straightforward: download an ISO, make it bootable on a USB drive using the amazingly useful tool Rufus, then set aside a spare partition in Windows and reboot to the USB.

Everything went fine, and I was very impressed by GNOME 3 – right up until the end of installation, when I rebooted and found myself without an option for booting into Windows 7. Uh oh. Quick googling on my phone reminded me that to boot into Windows, you have to rely on the MBR no matter what. I also found by accident that dropping into the GRUB2 command line and typing exit reverted to the MBR bootloader, but even then I encountered a 0xc000000e STOP error, possibly related to my earlier screwing around with the MBR using Neosmart EasyBCD.

At any rate, I had to rely on GRUB2 now – or at least fix the MBR back. But then I hit yet another hitch: CentOS doesn’t support NTFS right out of the box, wtf! I needed the ntfs-3g rpm, but I had no internet access. I rely on a wireless USB dongle and had my DVD drive removed, so I lacked the drivers to get wireless up – they were on a mini-CD that came with the adapter (and even then it probably only has Windows drivers). No yum option for me.

A couple of minutes later I remembered I had a dusty old unused Chromebook lying around, phew. Downloading the EPEL6 version of ntfs-3g worked (the source RPM for “ALL versions” does not mean universal installer, it means source code – classic newbie mistake right there!).
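
The offline install itself was nothing special – roughly something like this (a sketch; the RPM filename is illustrative, and I’m assuming the downloaded file was copied somewhere readable on the CentOS box):

# Install the locally downloaded binary RPM – no yum repository access needed
rpm -ivh ntfs-3g-*.el6.x86_64.rpm

# With NTFS support in place, the Windows partition (here /dev/sda1, as referenced below)
# could be mounted read-only to poke around
mkdir -p /mnt/windows
mount -t ntfs-3g -o ro /dev/sda1 /mnt/windows

With that out of the way, a quick configuration of GRUB2 later: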

1) Open up GRUB2 file for editing:

vim /etc/grub.d/40_custom

2) Add to the bottom (the MBR partition was on /dev/sda1, hence the 1 in hd0,1):

menuentry "Windows"{
set root='(hd0,1)'
chainloader +1
}

3) Regenerate the GRUB2 configuration so the change persists across reboots (didn’t know I had to do this till more googling):

grub2-mkconfig -o /boot/grub2/grub.cfg
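
Before rebooting, a quick sanity check (my own habit rather than part of the steps I followed) that the custom entry actually made it into the generated config:

grep -A 2 'menuentry "Windows"' /boot/grub2/grub.cfg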

And I was back in action!

SpinAsserts with Selenium

6 Feb

I have been doing a lot of Selenium-related stuff at work lately. For something that seems as simple as a macro – “click this, click that, type this, type that” – it is actually not unlike writing development code.

The first big conceptual leap I had was page objects (https://code.google.com/p/selenium/wiki/PageObjects), that basically treated selenium scripts as modular pieces of code rather than a giant list of browser actions. It made me really appreciate OOP in programming (something that I took for granted while coding with frameworks).

The next big conceptual leap I had was spin asserts (http://saucelabs.com/resources/selenium/lose-races-and-win-at-selenium). It is a common problem that the selenium driver is over-eager to do what is asked of it, and gleefully fails at the speed of light. As my company’s web application depended heavily on AJAX, I would see spurious test failures all the time. It got to the point where it was standard practice to put a wait before every click action:

$this->waitForElementPresent("css=#identifier");
$this->click("css=#identifier");

It irked me to no end that this was necessary for proper test procedure, yet felt like unnecessary code duplication every time. Eventually I had enough. It was time to automate the writing of code.


At first, I was inspired by Saucelabs’ own version of spinAssert, from their Sausage library. However, it still required a lot of boilerplate to be written (https://github.com/jlipps/sausage#spinasserts).

public function testSubmitComments() {
    $comment = "This is a very insightful comment.";
    $this->byId('comments')->click();
    $this->keys($comment);
    $this->byId('submit')->submit();
    $driver = $this;

    $comment_test = function() use ($comment, $driver) {
        return ($driver->byId('your_comments')->text() == "Your comments: $comment");
    };

    $this->spinAssert("Comment never showed up!", $comment_test);
}

Which seemed like a very convoluted modification of

$this->assertText('comment', 'My comment');

The basic idea was pretty simple – if any selenium command failed, simply retry it. I tried to hook it up to __call, but PHPUnit was smarter than me and used reflection to circumvent my magic method. I briefly toyed with the idea of hooking into PHPUnit’s internals via decorators or listeners, but not only were they undocumented, unsupported and bizarre to begin with (I had never encountered the design pattern in the wild before), they did not do what I wanted: decorators only work at the start and end of a test run, and listeners merely collect data. The last option left to me was a wrapper method.

I had much resistance against a wrapper method, as it was non-native and would add a learning curve on top of the standard Selenium functions, but short of modifying PHPUnit’s internals, it was the best I could do after days of research. Fortunately, it turned out to be pretty simple to code and to use.

This is what I had:

public function spinAssert() {
    $args = func_get_args();
    $command = array_shift($args);

    call_user_func_array(array($this, "waitFor" . $command), $args);
    call_user_func_array(array($this, "assert" . $command), $args);
}

and this was how I called it:

$this->spinAssert('Text', 'comment', 'This is a comment');

Thankfully, PHPUnit already provided implicit waits by virtue of the “waitFor” family of functions, which do the whole loop of sending the command, checking for success, and calling sleep(1) before retrying if it failed. As such, it was pretty straightforward to implement.

A tale of two server providers

28 Dec

I like servers. They let me do all kinds of cool things: serving websites, running game servers, proxying traffic, experimenting with new technologies. But selecting a server for my usage – now that’s a mixed bag that I’m never satisfied with.

My very first server was a Linode, since they seemed to have a decent reputation for dedicated hosting. Of course, this was before the credit card leak fiasco they had. They offered a server in Tokyo, which is as close as a server’s location could get to Singapore.

Then AWS’ free tier offering got too enticing, and I moved to a micro EC2 instance. I was basically only running a minimal website at that point, and didn’t feel like paying $20/month at Linode was justifiable. I still got some bills though; an extra instance was running that I forgot to shut down, no thanks to AWS’ unfriendly interface that split instances by regions.

But when it took me an hour to compile node.js on it (required for an integration with an online code editor), I decided that my server needed more juice as well. A pauper’s serving of CPU cycles and RAM wouldn’t do!

Once again, I switched back to Linode – their price plans offered the most server horsepower for the amount paid, and AWS’ pricing structure was more complicated than I cared to calculate.

But now, I am thinking of switching YET again. As much as I love the raw power provided, my linode lacked several crucial features.

  1. Packer integration – according to this github issue, apparently Linode’s API does not lend itself to integration with Packer, an up-and-coming technology that I am interested in trying out.
  2. Docker support – not provisioned for natively, plus the kernels it provides do not have the appropriate flags enabled that are required for Docker to run. I could compile my own kernel, but do I really want to go that far?
  3. The CentOS distro that I started out with did not support hosting a Starbound server out of the box. I tried a workaround, to no avail – it failed to account for the game’s constant updates while still in beta. Not so much my linode’s fault as my choice of distro and being stuck with a single server, but still.

Finally, given that the minimum cost of such a server is not cheap, I was disincentivized to try different server setups.

But then I had a paradigm shift.

Did I really have to limit myself to a single server? My initial rationale was cost concerns and that a single server would fulfil all my needs. Which is not the case now, because I can identify the following contrasting needs:

  1. Maintaining a low-upkeep website. Although I am not hosting anything of import, over time I have offered to host various acquaintances’ sites, to better use the excess “free” CPU cycles/RAM/bandwidth my server has left over after the fixed fee is paid up-front (or in AWS’ case, the free hours). Therefore it is important that I at least attempt to maintain 24/7 uptime.
  2. Intensive periods of experimentation. This involves trying out new technologies, implementing new applications and infrastructure. This would result in a lot of CPU/memory/bandwidth consumed, as the necessary packages get downloaded and installed.

At heart, these 2 needs are at odds with each other. One requires low horsepower and constant uptime, the other high horsepower and on-demand uptime. They would be best served by a lowly server that is always up, and a powerful server that is only paid for when utilized – the latter being an on-demand or spot instance.

AWS offers a long-term pricing plan for servers, known as reserved instances. A 1 year plan for a m1.small instance (1xvCPU, 1xECU, 1.7 GB RAM, 160GB Storage) comes out to $67, which is a little over 3 months worth of my current hosting plan, a Linode 1024 (8xCPU – 1 priority, 1GB RAM, 48GB Storage).

It would seem like I am locking myself into AWS for a year, but cost-wise it will only be for 3 months.

On-demand/spot instances are awesome. They perfectly fulfil my experimentation needs (with leeway for a more powerful server), and can be stopped and started at will. In that sense, it is no different from booting up my vagrant instance on my desktop whenever I need it, and shutting it down right after.

One final thorny issue is data. In the process of migrating back and forth between these 2 providers, transferring data has always been a cumbersome manual process: SCP from the old server to my desktop and back up to the new server, or rsync between both. Either way, it really breaks the workflow of my beautifully crafted Puppet manifests. Plus, a lack of planning means that no backups are available. I hope to change that when I switch servers, but I have not mapped out exactly how, yet.
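
For the record, the rsync variant of that shuffle looks roughly like this (a sketch with made-up hostnames; the paths would be whatever the sites actually live under):

# Pull the old server's web root straight onto the new one, preserving permissions and timestamps
rsync -avz olduser@old-server.example.com:/var/www/ /var/www/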

Symlinks with Vagrant + VirtualBox

26 Dec

This was a very thorny issue for me early on, back when I was trying to update DocHub, which was powered by npm modules. The static HTML files it came packaged with were sufficient, but I wanted the LATEST versions. npm install tries to put files locally and symlink them, which Vagrant made a huge boo-boo about when my console erupted with error messages from npm. I eventually gave up in favor of zeal, which is godly amounts of awesome for an offline documentation browser.

A year later or so, I had to get symlinks to work again – this time while I was trying puppet-rspec. For some arcane reason, it needs to symlink a directory back to the original directory containing the code to be tested. Instead of referencing the files relatively in the code. Of course. This time though, I had better luck – I chanced upon a fix that actually worked!

https://github.com/mitchellh/vagrant/issues/713#issuecomment-17296765

Below are the steps I took personally (cribbed from the above link, of course):

1) Added these lines to my Vagrantfile:

config.vm.provider "virtualbox" do |v|
  v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
end
2) Ran this command in an admin command prompt on Windows, from within C:\Windows\system32:

fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
3) Opened a new command prompt, ran vagrant halt if necessary, followed by vagrant up.
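
A quick way to verify it worked is to create a symlink inside the shared folder from within the guest (my own check, with placeholder filenames):

vagrant ssh -c "touch /vagrant/realfile && ln -s /vagrant/realfile /vagrant/symlinktest && ls -l /vagrant/symlinktest"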


This solution really needs more love than being hidden away behind a github issue comment. So here it is!

Getting an SVN post-commit update hook to work

14 Oct

I’ve ALWAYS encountered some form of trouble while trying to set up a post-commit hook, and the list of prerequisites is seemingly endless when you’re trying one solution at a time off the Google results.


Therefore, here’s a list of things to check off (with a sample hook pulling it all together after the list)!

  1. The post-commit file must be executable. ( chmod +x post-commit )
  2. The post-commit script should begin with the shell command. ( #!/bin/sh or #!/bin/bash )
  3. The path to the svn executable should be absolute. ( /usr/bin/svn update )
  4. If authentication is required, make sure the username, password and non-interactive flags are set. ( --username USERNAME --password PASSWORD --non-interactive )
  5. Make sure the destination directory is writable by root.
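
Put together, a working hook ends up looking something like this (a sketch only – the working copy path and credentials are placeholders):

#!/bin/sh
# Post-commit hook: Subversion passes the repository path and revision as arguments
REPOS="$1"
REV="$2"

# Absolute path to svn, credentials supplied, and the non-interactive flag set
/usr/bin/svn update /var/www/mysite --username USERNAME --password PASSWORD --non-interactive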

Getting SFTP to work for a limited user in a Linode

7 Oct

First of all, if all I wanted was my own website, it wouldn’t make much sense for me to host my own server. I’ve been living with cPanel on shared hosting, and it does a pretty good job of automating various tasks such as creating subdomains, databases and the like.

No, the reason I wanted to host my own server – besides having shell access, which was crucial for source control and one of the first things I set up on my new server – was to host multiple sites. Not subdomains, but whole domains. In particular, deanishes.com, a site where it is planned that you can observe the inspired progress of Dean’s tasks. Now, while it is possible for him to manage his site through the wordpress admin panel, in all fairness that is insufficient for editing code. You can edit the theme files through wordpress, and I’ve even thrown on a pretty code editor plugin, but nothing beats the power of IDE software, with code-completion, auto-formatting and syntax highlighting.

Therefore, I needed a way to provide Dean with limited access to the server – just enough to access his wordpress installation files. Which brings me to what this post is about – allowing SFTP access to the server, but in a limited capacity.

My original idea was an FTP server daemon which would handle a separate set of users and logins. My first step was to head to the Linode library, which is pretty good apart from the odd not-fully-documented section. As it turns out, SFTP is preferred over FTP because of FTP’s security issues. I’m all for best practices and am pretty flexible, so I did what was right – I looked into SFTP access instead.

As per Linode’s recommendations, I had shut down all ports except for allowed ones such as SVN, SSH and HTTP. So my next step was to open a port for FTP/SFTP access. As it turns out, because SFTP uses the same port as SSH (the two run over the same connection, really), I didn’t need to change my iptables firewall rules – though I did spend a great deal of time googling “iptables allow sftp” when my previous attempts didn’t work, but that was eventually chalked up to a different problem (file permissions).

Now, SFTP uses unix accounts to connect, much like SSH. I was initially apprehensive about this, but then I thought about the ramifications – having ten users in my linux installation? Why not? I’m not going to be a general webhost, and one user per site seemed pretty reasonable to me. But the crux of the matter was that I needed to customise this user – it could not be allowed to access anything other than the directory I’d specified for it.

Which led me to the concept of chroot SFTP, which is basically an SFTP jail that starts the user in a predefined directory without access to anything else. All that is needed is that the directory be owned by root, but any subdirectories can be owned by the user. At first I tried the Linode solution, but that did not pan out. For some reason, the

ChrootDirectory

directive in the Linode library specified %h, but the one in the other article recommended /<some directory>/%u.

The former did not work out, so I resorted to following the other article’s example to the letter.

And what I’ve discovered is this – %h defaulted to the root directory / regardless of the home directory I specified in /etc/passwd, so the session started from / and attempted to browse to the specified home directory – but of course it failed, because it didn’t have the permissions to traverse those directories.

So when I specified the ChrootDirectory as /<some directory>/%u AND the home directory as /, it worked fine and started in /, which within the jail is /<some directory>/%u.
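
For reference, the relevant sshd_config bits end up looking roughly like this (a sketch – the sftpusers group and the /home/sftp path are hypothetical stand-ins for whatever group and /<some directory> you actually use):

# Use the in-process SFTP server, so no binaries are needed inside the jail
Subsystem sftp internal-sftp

# Jail members of the (hypothetical) sftpusers group under /home/sftp/<username>,
# which must be owned by root and not writable by anyone else
Match Group sftpusers
    ChrootDirectory /home/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no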

So what I can take away from this is that /etc/passwd specifies the home directory RELATIVE to whatever root directory was provided, and the ChrootDirectory provides the root directory for the jail. Now that I’ve got it all working, it’s time to celebrate this with Dean!

-EDIT- It is important that /<some directory> be owned by root, because the shell requires access to log in. Funny, huh? The shell requiring access.