Tips and tricks for LAMP (Linux Apache MySQL PHP) developers.

When I can't find a suitable solution to a problem and have to work it out for myself, I'll post the result here in the hope that others will find it useful. (All code is offered in good faith. It may not be the best solution, just a solution.)

Monday, 2 September 2013

Elasticsearch, Chef and Vagrant

I've been tasked at work with setting up an Elasticsearch cluster. We use Chef for provisioning and there's an official cookbook available with some instructions, but they presume you are using Amazon EC2, which we are not - we're using our own servers, with Vagrant VMs for testing - so I had to figure a few things out myself.

When I first added the recipe to the node's run list it all installed fine but then I found that Elasticsearch was not running. When I tried running it manually it just said "Killed" and exited. This had me scratching my head for quite a while but I finally found the solution.

In some of the official examples they include the following in the Chef node:

"elasticsearch": {
    "bootstrap.mlockall": true
}

It's not explained what this does, but the template config YAML file says it prevents the JVM from using swap, which would make Elasticsearch perform badly. Fair enough; however, on a virtual machine with very little memory it can mean the JVM doesn't have enough memory to run, so it crashes. True is the default value, so it's not enough simply to leave this setting out - you have to set it to false explicitly.
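So on our low-memory Vagrant VMs we set it to false in the node attributes (a minimal sketch; merge this with your other elasticsearch attributes rather than duplicating the block):

"elasticsearch": {
    "bootstrap.mlockall": false
}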

Once I got that working my first node had Elasticsearch running and all was well. Then I started up my second node but I couldn't get it to form a cluster with the first.

As per the documentation I had given them both the same cluster_name. Our servers are spread across different networks so I couldn't use the default multicast discovery, so I added the FQDNs of each node to the unicast list:

"elasticsearch": {
    "discovery.zen.ping.multicast.enabled": false,
    "discovery.zen.ping.unicast.hosts": "[\"\", \"\"]"
}

Each node had a hosts entry for every other node and they could telnet to each other on the Elasticsearch discovery port (9300) just fine, but when the second node started up I got an error like:

[node2[inet[/]] failed to send join request to master [node1], reason
[org.elasticsearch.transport.RemoteTransportException: [node2[inet[/]][discovery/zen/join]; 
org.elasticsearch.ElasticSearchIllegalStateException: Node [node2[inet[/]] not 
master for join request from [node2[inet[/]]

Huh? Why was node2 trying to connect to node2? It was my colleague who noticed the references to 10.0.2.* IPs where we would've expected 192.168.33.* IPs. It turns out Vagrant always puts its NAT adapter on eth0, and it was that adapter's IP that Elasticsearch was binding to by default. You can override this with the config:

"elasticsearch": {
    "network.host": ""
}

Once I'd done that for each node (with their respective IPs) the cluster started working.
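For context, the 192.168.33.* addresses come from the private (host-only) network configured in the Vagrantfile, something along these lines (the IP here is just illustrative, and the exact syntax depends on your Vagrant version):

# Vagrantfile - eth0 is always Vagrant's NAT adapter; this adds a
# host-only adapter (eth1) with a static IP on the private network
config.vm.network :private_network, ip: "192.168.33.10"

It's that per-node static IP you then feed into network.host above.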

Monday, 15 July 2013

MongoDB, replica sets and authentication

I've just recently had to set up a MongoDB replica set using Chef at work. It was also required that authentication be set up in MongoDB.

I had very little experience of either technology (though I would now recommend both), so it took me quite a while to figure it all out, and I thought I'd offer some advice here for anyone else who may need it. Although I used Chef, most of this still applies even if you're doing it manually. I'll presume you know what replica sets are; if you don't but are interested, please read up on them first.

Many set-ups are possible with replica sets but we opted for a three node set: a primary, a secondary and an arbiter.
It's also worth noting that I used Vagrant to test all this, which I'd highly recommend. It did mean having three VMs running simultaneously, which chewed up my RAM, but it was worth it.

We used the edelight chef-mongodb cookbook as the starting point. I added recipe[mongodb::replicaset] to all three nodes' run lists. Then came my first stumbling block: by default this recipe will try to initiate the replica set from all three nodes but only the primary can do that so, for the other two nodes, you need to add the following attribute:

    "mongodb": {
        "auto_configure": {
            "replicaset": false
        }
    }

Then on all three nodes you need to add the following (this can be combined with the other mongodb attributes, it's just shown separately here for convenience):

    "mongodb": {
        "cluster_name": "ClusterName",
        "replicaset_name": "ReplicasetName"
    }

And finally on the arbiter node you need to add:

    "mongodb": {
        "arbiter": true
    }

The next problem was that, at the time of writing, this cookbook does not support authentication. There is a pull request to add keyFile support (which is how you do authentication for replica sets) but it has not been actioned yet, so we forked the repository and pulled in those changes ourselves. Once that's done, you need to generate suitable keyFile contents and specify the following attribute on all three nodes:

    "mongodb": {
        "key_file": "KeyFileContentGoesHere"
    }

Non-Chef note:
If you're doing this manually you'll need to make sure you start mongo with the following arguments (Chef does this for you):

--replSet ReplicasetName --keyFile /path/to/keyFile

The next issue was that this cookbook was written for Chef Server and tries to search for the nodes of the replica set to initiate. Chef Solo, however, does not support searching. Edelight offer a workaround for this in the form of chef-solo-search. I didn't actually make use of it; we just hard-coded our list of nodes into some attributes, though that's a bit hacky.
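For the record, our hard-coded workaround looked roughly like this (the attribute name and hostnames here are invented for illustration - the attribute your patched recipe actually reads will depend on how you've modified it):

    "mongodb": {
        "replicaset_members": ["mongo1.example.com", "mongo2.example.com", "mongo3.example.com"]
    }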

On to the next problem: the set members contact each other via their domain names. Not all of our servers had domain names set up so we needed to add hosts entries to each server in the set. In Chef we did this with CustomInk's Hostsfile cookbook.

The sequence in which you bring up the replica set nodes matters somewhat: they must all be operational before you can initiate the set, so the primary has to come up last.
I provisioned the arbiter first (we were already using that machine for something else) and ensured that MongoDB was running and that I hadn't broken anything else. I then provisioned our secondary and checked that mongo was running and that it could ping the domain name of the arbiter (which was in the hosts file) and vice-versa. Finally I provisioned the primary, which initiated the replica set. After about a minute it was up and running.

Well... there was a little more to it than that.
We already had the 'primary' MongoDB server live, but just as a standalone; we only later decided to make it a replica set. We'd already set up authentication on the standalone, and that's where things got complicated. In order to initiate a replica set on a MongoDB instance with authentication enabled you must be authenticated as a user with the clusterAdmin role. If you're not, you will simply get an "unauthenticated" error which, if you're like me, will make you tear your hair out because you're sure you're entering the right credentials.
The other issue is that the edelight cookbook does not handle authenticating before trying to initiate the replica set, so we had to add this in. It's very simple: in libraries/mongodb.rb, in the configure_replicaset define, just before the command to initiate the replica set, add:

admin.authenticate("username", "password")

As long as that user has the aforementioned clusterAdmin role you should be set.

Monday, 25 February 2013

SimpleXML and large files

I encountered an issue earlier today whilst trying to process a fairly large (~37MB) XML file through SimpleXML.

It worked just fine on my own and a colleague's development systems but failed with weird errors on the live server, for instance reporting that there was "Extra content at the end of the document".

After about an hour of trying to figure it out we realised our dev systems were using libxml v2.6.x while the live server was using v2.7.6. We then read that somewhere between those versions some hard-coded limits were added that can cause exactly the problem we were seeing.

To get around it you need to specify the LIBXML_PARSEHUGE flag:

// When loading from a file:
$simplexml = simplexml_load_file($file_path, 'SimpleXMLElement', LIBXML_PARSEHUGE);

// Or when parsing an XML string:
$simplexml = new SimpleXMLElement($xml_string, LIBXML_PARSEHUGE);

Saturday, 9 February 2013

Documentation using ApiGen and Swagger UI

At work we're writing an API and an SDK that'll talk to that API. We hope to one day make this SDK available to third parties and so we wanted to ensure there was good documentation for it.

I suggested early on that the code itself should be where we write the documentation (in DocBlocks) and then generate external documentation from there. That way both the code and the documentation are complete and consistent with each other.

This suggested the use of something like PHPDocumentor, but that's a little long in the tooth and produces rather dry-looking documentation (in our opinion). ApiGen is similar to PHPDoc but a bit more modern, with support for namespaces, traits, etc., yet it still just produces plain HTML by default.

My boss had encountered Swagger UI, which is a collection of JavaScript and CSS files that render nice-looking docs. The problem is that Swagger is designed specifically for RESTful APIs, not PHP SDKs.

I looked into it, though, and found at least a partial solution: ApiGen allows you to write your own templates (PHPDoc has this too) and the input to Swagger UI is just a collection of JSON files, so we decided to write ApiGen templates that produce Swagger UI JSON.

Swagger UI needs an 'index' JSON file that defines what APIs are available and then a JSON file for each of those APIs. In our case we had a file per Class in the SDK. In each of the Class JSON files you then define the (public) methods.
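To give an idea of the shape, the 'index' file looks something like this (a hand-written sketch in the Swagger 1.x style rather than our actual output; the paths and description are invented):

{
    "apiVersion": "1.0",
    "swaggerVersion": "1.1",
    "apis": [
        { "path": "/SomeClass", "description": "Public methods of SomeClass" },
        { "path": "/AnotherClass", "description": "Public methods of AnotherClass" }
    ]
}

Each entry in "apis" then points at the per-class JSON file describing that class's methods.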

The spec for Swagger UI JSON files is here:

ApiGen templates use Nette Framework Latte Templates:

We wrote an overview.latte file that loops over all the classes and enumerates them, and a class.latte file that loops over all the methods of a class and details those.

I said this was a partial solution - we have got all this working nicely, but Swagger UI still looks like it's describing a RESTful interface. For example, for every method we defined we had to say whether it was GET, PUT or POST, which obviously isn't relevant to an SDK. That said, you can edit the Swagger JavaScript to do whatever you want, so things like that can be changed.