Tips and tricks for LAMP (Linux, Apache, MySQL, PHP) developers.

When I can't find a suitable solution to a problem and have to work it out for myself, I'll post the result here in the hope that others will find it useful. (All code is offered in good faith. It may not be the best solution, just a solution.)

Thursday 20 November 2014

SimpleXML and mixed, nested namespaces

I work with several third-party marketplace APIs, one of which is Amazon's MWS API.
When processing their GetMatchingProductForId response, I discovered that they namespace some of the XML nodes (with the ns2 prefix).

PHP's SimpleXML doesn't allow access to namespaced nodes via the usual $simpleXml->node interface, but I found that you can pass a namespace prefix to the children() method (the second argument, true, tells it to treat 'ns2' as a prefix rather than a namespace URI) like so:

$simpleXml
    ->children('ns2', true)
    ->namespacedNode

That works great, but then I hit a problem. As I mentioned, Amazon namespaces some of the nodes but not all of them, and the node I was really after was not namespaced itself but was the child of a namespaced node. After applying the code above, I found I couldn't access the child node:

$simpleXml
    ->children('ns2', true)
    ->namespacedNode
    ->nonNamespacedNode

I thought all was lost until I discovered that another call to children(), this time with no arguments, resets SimpleXML's namespace filter so you can get at the non-namespaced child node:

$simpleXml
    ->children('ns2', true)
    ->namespacedNode
    ->children()
    ->nonNamespacedNode
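
Putting it all together, here's a minimal, self-contained sketch of the pattern. The XML document and the node names (namespacedNode, nonNamespacedNode) are made up purely for illustration; the two children() calls are the important part:

<?php
// Hypothetical XML standing in for the MWS response: one ns2-prefixed node
// wrapping a plain, non-namespaced child.
$xml = <<<XML
<root xmlns:ns2="http://example.com/ns2">
    <ns2:namespacedNode>
        <nonNamespacedNode>hello</nonNamespacedNode>
    </ns2:namespacedNode>
</root>
XML;

$simpleXml = simplexml_load_string($xml);

// Filter to the ns2 prefix (true = treat 'ns2' as a prefix, not a URI),
// then call children() again with no arguments to drop back to the
// non-namespaced children.
$value = (string) $simpleXml
    ->children('ns2', true)
    ->namespacedNode
    ->children()
    ->nonNamespacedNode;

echo $value; // hello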

Sunday 29 June 2014

Jasmine and Jasq

Recently I've started using Jasmine - a BDD framework - to test some pretty complex JavaScript I've been writing for work. We're using RequireJS for our JavaScript, and we found a great Jasmine plugin called Jasq that makes it play nicely with AMD modules.

It all works beautifully except for one thing: the Jasq docs say to wrap your specifications in a require block, but that didn't seem to work for me. Once I changed them to define blocks (see below), everything started working. I don't know why this is, and it doesn't seem anyone else has had the same issue, so perhaps I've done something wrong, but I thought I'd put this out there in case anyone else runs into it.

define(['jasq'], function ()
{
  // The second argument is the AMD module under test; Jasq loads it and
  // passes it into each spec (thingToTest below).
  describe('My Tests', 'thing/to/test', function()
  {
    it('should do something', function(thingToTest)
    {
      //...
    });
  });
});
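
For contrast, the require-wrapped form from my reading of the Jasq docs differed only in the outer call, roughly:

require(['jasq'], function ()
{
  // ... same describe/it block as above ...
});

and that was the version that didn't work for me.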

Monday 2 September 2013

Elasticsearch, Chef and Vagrant

I've been tasked at work with setting up an Elasticsearch cluster. We use Chef for provisioning, and there's an official cookbook available with some instructions, but they presume you are using Amazon EC2, which we are not - we're using our own servers, with Vagrant VMs for testing - so I had to figure a few things out myself.

When I first added the recipe to the node's run list, everything installed fine, but then I found that Elasticsearch was not running. When I tried running it manually, it just printed "Killed" and exited. This had me scratching my head for quite a while, but I finally found the solution.

In some of the official examples they include the following in the Chef node:

"elasticsearch": {
    "bootstrap.mlockall": true
}

It's not explained what this does, but the template for the config YAML file says it stops the JVM's memory being swapped out, which would cause Elasticsearch to perform badly. Fair enough; however, on a virtual machine with very little memory it can mean the JVM doesn't have enough memory to run, so it crashes. True is the default value, so it's not enough to simply leave this setting out - you have to set it to false.
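
In Chef node attribute terms, that means explicitly overriding it - the same attribute as above, just set to false:

"elasticsearch": {
    "bootstrap.mlockall": false
}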

Once I'd set that, my first node had Elasticsearch running and all was well. Then I started up my second node, but I couldn't get it to form a cluster with the first.

As per the documentation, I had given them both the same cluster_name. Our servers are spread across different networks, so I couldn't use the default multicast option for discovery; instead I disabled multicast and added each node's FQDN to the unicast host list:

"elasticsearch": {
    "discovery.zen.ping.multicast.enabled": false,
    "discovery.zen.ping.unicast.hosts": "[\"node1.example.com\", \"node2.example.com\"]"
}
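
For reference, the underlying Elasticsearch settings these attributes map to look like this in elasticsearch.yml form (a sketch, in case you're not driving them through Chef):

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.com", "node2.example.com"]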

Each node had a hosts entry for the other, and they could telnet to each other on the Elasticsearch discovery port (9300) just fine, but when the second node started up I got an error like:

[node2[inet[/10.0.2.2:9300]] failed to send join request to master [node1], reason
[org.elasticsearch.transport.RemoteTransportException: [node2[inet[/10.0.2.2:9300]][discovery/zen/join];
org.elasticsearch.ElasticSearchIllegalStateException: Node [node2[inet[/10.0.2.2:9300]] not
master for join request from [node2[inet[/10.0.2.2:9300]]

Huh? Why was node2 trying to connect to node2? It was my colleague who noticed the references to 10.0.2.* IPs where we would have expected 192.168.33.* IPs. It turns out that Vagrant always puts its NAT adapter on eth0, and that was the address Elasticsearch was binding to by default. You can override this with the network.host setting:

"elasticsearch": {
    "network.host": "192.168.33.1"
}

Once I'd done that for each node (with its respective IP), the cluster started working.
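
Pulling the snippets above together, the attributes for one node end up looking roughly like this (plus the shared cluster_name mentioned earlier; the hostnames and IP are the example values from above - substitute your own for each node):

"elasticsearch": {
    "bootstrap.mlockall": false,
    "discovery.zen.ping.multicast.enabled": false,
    "discovery.zen.ping.unicast.hosts": "[\"node1.example.com\", \"node2.example.com\"]",
    "network.host": "192.168.33.1"
}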