John D'Emic's blog about programming, integration, system administration, etc...

Thursday, January 15, 2009

OpenMQ, Second Thoughts

OpenMQ was fairly painless to get going with Mule.  I opted to set it up in our staging environment as a conventional cluster with two nodes.  In this scenario, clients can maintain a list of brokers to connect to in the event one of them fails.  Load balancing between brokers might also be supported, but I haven't gone too far down that rabbit hole yet.
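
For anyone curious what the client side of that looks like, here's a minimal sketch (the broker host names and ports are made up, and I'm configuring the factory programmatically here rather than pulling it from JNDI, which I get into below):

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.sun.messaging.ConnectionConfiguration;
    import com.sun.messaging.ConnectionFactory;

    public class BrokerListExample {
        public static void main(String[] args) throws JMSException {
            // OpenMQ's concrete connection factory, configured in code.
            ConnectionFactory factory = new ConnectionFactory();

            // Comma-separated broker list; the client moves on to the next
            // broker if the one it's connected to fails.
            factory.setProperty(ConnectionConfiguration.imqAddressList,
                    "mq://broker1.example.com:7676,mq://broker2.example.com:7676");

            // Try the brokers in the order listed and reconnect automatically.
            factory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "PRIORITY");
            factory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");

            Connection connection = factory.createConnection();
            connection.close();
        }
    }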

Nor have I gone down the rabbit hole of HA clusters, which support failover of message data between brokers but require a shared database.  Amongst other things, we're using JMS to distribute monitoring alerts.  To do HA in production in a sensible manner, we'd need to back OpenMQ with our production MySQL cluster.  Since we're sending monitoring data about that same production MySQL cluster over JMS, if the MySQL cluster failed we'd never hear about it.  We're not (currently) using JMS for any sort of financial or mission-critical data, so losing a few messages in the event of a failover isn't a big deal for us as long as it's a rare occurrence.

Getting clients to connect to OpenMQ was a little more painful.  It manages all of its "objects" (connection factories, queues, topics, etc.) in JNDI.  We don't really use any sort of distributed JNDI infrastructure, so I started off using the filesystem JNDI context supplied with OpenMQ.  In this scenario, all your OpenMQ objects are stored in a hidden file in a directory.  This works fine in development or testing situations where your clients and broker all have access to the directory.  It's obviously not an option for production, unless you do something painful like make tarballs of the JNDI filesystem directory and scp them around, or export it over NFS.
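
For illustration, a client lookup against the filesystem context looks roughly like this (the directory and the object names below are placeholders - they have to match whatever you used with imqobjmgr when you stored the objects):

    import java.util.Hashtable;

    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class FsContextLookup {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();

            // Filesystem JNDI provider shipped with OpenMQ (fscontext.jar).
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.fscontext.RefFSContextFactory");

            // Directory containing the hidden .bindings file written by imqobjmgr.
            env.put(Context.PROVIDER_URL, "file:///opt/openmq/admin_objects");

            Context ctx = new InitialContext(env);

            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("myConnectionFactory");
            Queue queue = (Queue) ctx.lookup("myQueue");

            ctx.close();
        }
    }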

According to the documentation, the "right" way seems to be to use an LDAP directory context to store the JNDI data (someone please correct me if I'm wrong about this).  In this case, you store your OpenMQ objects in LDAP, and each client then loads the appropriate connection factory, queues, etc. from LDAP.  This is nice in the sense that configuration data for the connections (broker lists, etc.) is maintained outside of the clients.  Presumably this allows you to add brokers to a cluster, etc., without having to restart your JMS clients.
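
The client-side lookup is essentially the same as the filesystem case, just pointed at the directory server instead (again a sketch - the LDAP URL, base DN, and object name are placeholders):

    import java.util.Hashtable;

    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class LdapContextLookup {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();

            // Plain JDK LDAP provider; nothing OpenMQ-specific is needed here.
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");

            // Base DN under which the OpenMQ objects were stored with imqobjmgr.
            env.put(Context.PROVIDER_URL,
                    "ldap://ldap.example.com:389/ou=mq,dc=example,dc=com");

            // Depending on your directory's ACLs you may also need to set
            // Context.SECURITY_PRINCIPAL and Context.SECURITY_CREDENTIALS.
            Context ctx = new InitialContext(env);

            // Objects stored in LDAP are looked up by their cn.
            ConnectionFactory factory =
                    (ConnectionFactory) ctx.lookup("cn=myConnectionFactory");

            ctx.close();
        }
    }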

Despite the bit of complexity, this again was pretty straightforward.  I just needed an LDAP directory to store the JNDI data in.  It (briefly) occurred to me to use our Active Directory deployment.  My assumption was, however, that this would involve modifying Active Directory's schema, which I've never done before and have heard nightmare stories about (it would also involve making changes to the production AD deployment, which is treated like a live hand grenade in the company).

I ultimately opted to use OpenLDAP.  This was painless.  The only thing I had to do was include the supplied java.schema in slapd.conf and restart the service.  A short while later I was able to get Mule and JMeter sending messages through it.  The OpenMQ command line tools worked great while doing some preliminary load testing.  The queue metrics in particular were really nice - they let you watch queue statistics the same way you'd watch memory statistics with vmstat or disk statistics with iostat.  I am pretty impressed so far...
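
For reference, the slapd.conf change and the queue watching look roughly like this (the schema path and queue name are just examples, and imqcmd prompts for the admin credentials):

    # slapd.conf - the only change needed was pulling in the Java schema
    # (the path will vary by OpenLDAP installation)
    include /etc/openldap/schema/java.schema

    # Poll rate metrics for a queue every 5 seconds, vmstat-style
    imqcmd metrics dst -t q -n alerts.queue -m rts -int 5

    # One-off snapshot of a destination's state
    imqcmd query dst -t q -n alerts.queue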

10 comments:

Unknown said...

Can you post the config files?

johndemic said...

Hey Graham,

Which configs do you need?

Unknown said...

The mule config.

Unknown said...

Hi John,

It's trivial to connect to OpenMQ from Mule - just specify a custom connection factory for the JMS connector.

Overall I liked OpenMQ, but one thing just killed me - they don't provide the JMSXDeliveryCount header alongside JMSRedelivered, and _document_ that it's for the DLQ case only. Stupid, as every other broker in existence does support it. It's an 'optional' property, though the optionality is really moot in the spec. And given that Sun championed the JMS spec, it's a shame to still not have these.

Probably 95% of users will not care, but for the other 5% it was a real wall of fire.

Anonymous said...

I was interested to see that you had successfully used JMeter in conjunction with OpenMQ.

Would it be possible to see your JMeter test plan, as I seem to have a problem convincing JMeter to pick up the QueueConnectionFactory correctly? The error is "QueueConnectionFactory expected, but got javax.naming.Reference", which puts it in JMeter's threadStarted() method.

I have small Java code examples working using LDAP+OpenMQ, but JMeter has me stumped and I can't find another example of this working even after some extensive Google searching.

Care to help?

thanks
paul

johndemic said...

Hey Paul,

Do you have imq.jar on jmeter's CLASSPATH?

Anonymous said...

That was indeed the problem.

Can I ask which JMeter version you tried?

With 2.3.2 I now get slightly further, but run into problems with the correlation ID. It was suggested elsewhere to try a nightly build, which has an optional flag in the GUI. I tried the latest (r753086), but that tells me the credentials are incorrect. However, the exact same test plan loaded back into 2.3.2 with the comms style changed to RequestOnly will deliver the message.

johndemic said...

Hey Paul,

I think I'm seeing the same issue with the correlationId when using the response queue. The test runs fine when the response queue is not set however. I'm going to dig a bit further, if I find anything useful I'll let you know.

2009/04/23 13:23:56 WARN - jmeter.protocol.jms.sampler.Receiver: Received message with correlation id null. Discarding message ...

Rohan said...

JMeter was broken by a "fix". I've asked for https://issues.apache.org/bugzilla/show_bug.cgi?id=46142 to be backed out.

ambi said...

Hi,

What configuration is required on the Mule client end to automatically connect to the other brokers (OpenMQ) in an HA cluster?