John D'Emic's blog about programming, integration, system administration, etc...
Sunday, December 12, 2010
MongoDB Transport 3.0.1.1 Released
I just released version 3.0.1.1 of the MongoDB transport. This version populates an "objectId" outbound property, corresponding to the "_id" of the saved document or file, on messages sent to a collection or bucket. You can reference this property when chaining mongodb endpoints in a chaining-router or flow, which is primarily useful for creating subsequent DBRefs in the processing chain.
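For example, when chaining mongodb endpoints, a downstream endpoint or transformer can pick the property up off the message. The following is a sketch only; the endpoint attributes and names are assumptions for illustration, not lifted from the transport's documentation:
<chaining-router>
    <!-- the transport populates the objectId outbound property with the
         _id of the document saved here -->
    <mongodb:outbound-endpoint collection="orders"/>
    <!-- downstream, the property can be read (e.g. via #[header:objectId])
         to build a DBRef pointing back at the saved document -->
    <vm:outbound-endpoint path="orders.audit"/>
</chaining-router>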
Friday, November 19, 2010
MongoDB Transport 3.0.1.0 Released
Just released version 3.0.1.0 of the MongoDB transport with, not surprisingly, full support for Mule 3.0.1. As usual details are on the Muleforge page.
Wednesday, September 29, 2010
Annotations in Mule 3
The official Mule 3 release was a couple of weeks ago; Ross covers a lot of the new features here. I've been using the release candidates for about two months in a few contexts: the MongoDB transport and SAAJ module have SNAPSHOT support for 3.x, David and I are working on upgrading the book examples to Mule 3, and I'm additionally using Mule 3 on a new project, where we started on RC2 and are readying for an alpha release next week.
So, to sum it up, I've spent a lot of time recently with Mule 3. In doing so, I came to appreciate one of the nice "themes" of Mule 3.0: the reduction of configuration noise. Annotation support is a feature that particularly reinforces this. I refactored a couple of services that were using a Quartz inbound-endpoint to instead use the new @Schedule annotation. Here's what I did; hopefully it illustrates how the new annotations cut down the XML configuration.
The service in question was responsible for executing a configurable command on the system and sending the output to an FTP server. An additional requirement is that the filename on the FTP server needs to have the timestamp of the command execution embedded in it.
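Prior to the refactoring, the trigger was wired up with a Quartz inbound-endpoint, roughly like the following (a reconstruction of the pattern for comparison, not the original config; the job name is hypothetical):
<inbound>
    <quartz:inbound-endpoint jobName="runCommandJob" repeatInterval="5000">
        <quartz:event-generator-job/>
    </quartz:inbound-endpoint>
</inbound>
On top of that, the execution timestamp had to be pushed onto the message separately before the FTP endpoint could use it in the filename.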
Here's how I refactored the implementation to use Mule 3 annotations. First off, here's the service class:
import java.io.ByteArrayOutputStream;
import java.util.Date;
import java.util.Map;

import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;

// Mule 3 annotations module
import org.mule.api.annotations.Schedule;
import org.mule.api.annotations.param.OutboundHeaders;

/**
 * Simple facility to run a system command and return the output as a String.
 */
public class CommandRunner {

    private String command;

    @Schedule(interval = 5000)
    public String runCommand(@OutboundHeaders Map<String, Object> outHeaders) throws Exception {
        // Capture the command's stdout; echo its stderr to the console.
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        PumpStreamHandler streamHandler = new PumpStreamHandler(outputStream, System.out);
        CommandLine commandLine = CommandLine.parse(command);
        DefaultExecutor executor = new DefaultExecutor();
        executor.setStreamHandler(streamHandler);
        int exitValue = executor.execute(commandLine);
        if (exitValue < 0) {
            throw new RuntimeException("Error running: " + command);
        }
        outputStream.close();
        // Expose the execution timestamp as an outbound message property.
        outHeaders.put("COMMAND_TIME_STAMP", new Date().getTime());
        return new String(outputStream.toByteArray());
    }

    public void setCommand(String command) {
        this.command = command;
    }
}
The @Schedule annotation invokes the "runCommand" method every 5 seconds. Note the Map passed as the argument to runCommand(). It's annotated with @OutboundHeaders, indicating that the contents of the map will be available as message properties on the outbound endpoint. We use this to set the COMMAND_TIME_STAMP header to the timestamp at which the command was run. That's it from the Java side. Let's see how this is wired up in the XML config:
<model name="Integration Services">
<service name="CLI to FTP Service">
<component>
<spring-object bean="commandRunner"/>
</component>
<outbound>
<pass-through-router>
<ftp:outbound-endpoint user="${ftp.user}"
password="${ftp.password}"
host="${ftp.host}"
port="${ftp.port}"
path="${ftp.path}"
outputPattern="#[header:COMMAND_TIME_STAMP].txt">
</ftp:outbound-endpoint>
</pass-through-router>
</outbound>
</service>
</model>
Using the @Schedule annotation eliminates the need to explicitly configure an inbound-endpoint, which is why none appears above. We also don't need to jump through hoops with the MuleMessage, either in the service class or with message-properties-transformers, to use COMMAND_TIME_STAMP to name the FTP file.
As I mentioned in my diatribe about implementing component classes, I prefer to avoid implementing Callable on my components; I find doing so couples the code too tightly to the Mule runtime. The annotations let you implement functionality that previously required getting at the MuleContext, without extending or implementing a Mule class. This is a best-of-both-worlds scenario: your component code stays decoupled from Mule, which eases unit testing, while you still get indirect access to common pieces of the MuleContext (i.e., the message payload, inbound and outbound headers, etc.).
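For instance, instead of implementing Callable to get at the payload and headers, a component method can take them as annotated parameters. Here's a minimal sketch (the class and method names are hypothetical):
import java.util.Map;

import org.mule.api.annotations.param.InboundHeaders;
import org.mule.api.annotations.param.Payload;

public class OrderProcessor {

    // Mule injects the payload and the inbound headers at invocation time;
    // the class itself never touches MuleEventContext or Callable.
    public String process(@Payload String order,
                          @InboundHeaders("*") Map<String, Object> headers) {
        return "Processed " + order + " (" + headers.size() + " inbound headers)";
    }
}
A plain JUnit test can then invoke process() directly with a stub Map; no Mule runtime is required.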
So the annotation support, for me, is a welcome addition to the available configuration options. In addition to reducing XML noise, it streamlines integrating POJOs with the message flow.
The improved jBPM support on Mule 3 is another feature I'm excited about. For one reason or another, jBPM has been a constant in practically every Mule project I've worked on in the last year. I'll cover that in the next post.
Tuesday, August 10, 2010
Mule 3.x Support for the SAAJ Module
I just committed Mule 3.x support for the SAAJ Module. As with the MongoDB transport 3.x upgrade, this isn't in Maven. You'll need to build it from the SVN branch here.
Mule 3.x Support for the MongoDB Transport
I've just created a branch of the MongoDB transport with support for Mule 3.0.0-M4. This isn't in the Maven repo yet, but you can build and install it locally from here.
Thursday, August 5, 2010
MongoDB Transport 2.2.1.5 Released
I just pushed version 2.2.1.5 of the MongoDB transport to Muleforge. This release introduces two transformers, mongodb:db-file-to-byte-array and mongodb:db-file-to-input-stream, that simplify getting at the contents of GridFSDBFiles.
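For example, one of these transformers can be dropped onto an outbound endpoint to write a GridFS file out to disk. The following is a sketch only; the mongodb inbound endpoint attributes and the file path are assumptions for illustration:
<service name="GridFS Export">
    <inbound>
        <mongodb:inbound-endpoint bucket="stuff"/>
    </inbound>
    <outbound>
        <pass-through-router>
            <file:outbound-endpoint path="/tmp/exports">
                <mongodb:db-file-to-byte-array/>
            </file:outbound-endpoint>
        </pass-through-router>
    </outbound>
</service>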
Thursday, July 22, 2010
MongoDB Transport 2.2.1.4 Released
I just released version 2.2.1.4 of the MongoDB transport. New features include:
- Full support for GridFS.
- Support for sub-collections (e.g., "foo.sub1", "foo.sub2", etc.)
- Synchronous invocation now returns the Map returned from the insert() call. This allows you to access the "_id" and "_ns" keys after persisting an object, which is particularly useful in conjunction with a chaining-router.
I additionally fixed a few minor bugs. The MuleForge project page is here with updated documentation.
Wednesday, June 30, 2010
MongoDB Transport
Friday, May 7, 2010
jmxsh
I recently discovered jmxsh, which provides Tcl scripting facilities for interacting with JMX. It's pretty impressive compared to something like jconsole. For instance, I was able to write the following little script to query HornetQ for the number of messages on a queue:
# parse the connection parameters and queue name from the command line
set host [lindex $argv 0]
set port [lindex $argv 1]
set queue [lindex $argv 2]
# connect to the JMX agent, read the queue's MessageCount attribute and print it
jmx_connect -h $host -p $port
set message_count [jmx_get -m org.hornetq:module=JMS,name="$queue",type=Queue MessageCount]
puts "$message_count"
jmx_close
I can then run the script like this, and it prints the number of messages on the DLQ to stdout:
./jmxsh queue_count.tcl localhost 3000 DLQ
While the use of Tcl might seem an odd choice (why not Groovy?), it makes sense from the standpoint of a sysadmin: the Java-agnostic language allows them to script against an app's MBeans with minimal exposure to Java or the JMX APIs.
Sunday, May 2, 2010
Component Implementation Guidelines
I've recently worked on a couple of Mule ESB projects that make heavy use of components. The projects were similar in the sense that the component logic was implemented from scratch; we were not, for instance, exposing existing Java classes over a transport.
It was challenging at times to implement these components while keeping the code decoupled from Mule and unit-testable. Below are some approaches and guidelines for implementing components with these goals in mind:
- Be careful when implementing Callable. Implementing the Callable interface gives you access to the MuleEventContext when the "onCall" method is invoked. This allows you to invoke the endpoint's transformers, access the MuleMessage directly, stop message processing, etc. It also tightly couples your component code to the Mule infrastructure, making the component more difficult to unit-test in isolation, as well as to refactor if the Mule API changes. See if your component's reliance on the MuleEventContext can be refactored out to transformers or exception-strategies.
- Favor a canonical data model over interacting with the MuleMessage directly. You can use a transformer to move the external data into a common format, like a domain model class, an XML document or a hash-map. This again decouples your code from the Mule API, making unit testing, refactoring, etc. easier.
- Hide MuleClient usage behind a service interface. MuleClient is an extremely useful facility for sending and receiving messages over arbitrary transports, but its usage introduces the same difficulties as the items above. This can be mitigated by abstracting the MuleClient invocations into a service class that can be injected into the component; a mock implementation of this class can then be used in unit tests (see the sketch after this list).
- Consider using jBPM to orchestrate activity and maintain state. Component code, along with MuleClient, can be a tempting place to orchestrate, compose and maintain state across endpoint invocations. But be careful with this approach: you'll need to consider what happens if the component logic is interrupted during execution, whether the state needs to survive Mule restarts, whether it needs to be shared between Mule nodes, etc. Many, if not most, of these issues can be addressed by using jBPM in conjunction with Mule's BPM transport.
- Avoid hardcoded strings in endpoint names with MuleClient. This is obvious but worth mentioning. Try to centralize all the endpoint names in one place (in an enum, for instance) and inject this into your service facades. This helps avoid issues with typos in endpoint addresses.
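To make the MuleClient and endpoint-name guidelines concrete, here's a minimal sketch of a service facade with centralized endpoint names. The enum, interface and endpoint URI are all hypothetical:
import org.mule.api.MuleException;
import org.mule.module.client.MuleClient;

// Centralize endpoint addresses so typos live, and get fixed, in one place.
enum Endpoints {
    NOTIFICATIONS("vm://notifications");

    private final String address;

    Endpoints(String address) {
        this.address = address;
    }

    public String getAddress() {
        return address;
    }
}

// Components depend on this interface rather than on MuleClient,
// so a mock implementation can be injected in unit tests.
interface NotificationService {
    void send(String message) throws Exception;
}

public class MuleNotificationService implements NotificationService {

    private final MuleClient client;

    public MuleNotificationService(MuleClient client) {
        this.client = client;
    }

    public void send(String message) throws MuleException {
        // fire-and-forget dispatch over whatever transport backs the endpoint
        client.dispatch(Endpoints.NOTIFICATIONS.getAddress(), message, null);
    }
}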
Hopefully some of the above will give you component implementations and configurations that are easy to unit-test and refactor. Any other tips or guidelines are welcome in the comments :)
Monday, January 25, 2010
Mule and JMS as Decoupling Middleware
I just finished reading Michael Nygard's excellent "Release It!". His war stories echo a lot of my own experience, particularly from when I was doing ops work full-time. It's a definite must-read for developers and system administrators alike.
In Chapter 5 Michael describes the "Decoupling Middleware" pattern. He suggests using a messaging broker to decouple remote service invocation. This allows you to leverage the features of the messaging broker, like durability and delayed re-delivery, to improve the resiliency of communication with a remote service.
The following demonstrates how to use Mule and JMS to decouple interaction with Twitter's API. "Tweet Service" transactionally accepts messages from a JMS queue and submits them to Twitter's REST API:
<model name="Twitter Services">
<service name="Tweet Service">
<inbound>
<jms:inbound-endpoint queue="tweets">
<jms:transaction action="BEGIN_OR_JOIN"/>
</jms:inbound-endpoint>
</inbound>
<http:rest-service-component
httpMethod="POST"
serviceUrl="http://user:password@twitter.com/statuses/update.json">
<http:requiredParameter key="status" value="#[payload:]"/>
</http:rest-service-component>
</service>
</model>
Let's consider some of the benefits of this indirection:
- The service can be taken down and brought back up without fear of losing messages; they queue on the broker and are delivered once the service is back up.
- The service can be scaled horizontally simply by adding instances (competing consumers on the queue).
- Messages can be re-ordered based on priority before being sent to the remote service.
- Requests can be re-submitted in the event of a failure.
The last point is particularly important. It's common to encounter transient errors when integrating with remote services, particularly web-based ones. These errors are usually recoverable after a certain amount of time. If your JMS broker supports it, you can use delayed redelivery to periodically re-attempt the request. HornetQ supports this through address-settings on the queue. The following address-settings for the "tweets" queue specify 12 redelivery attempts with a 5-minute interval between each attempt:
<address-setting match="jms.queue.tweets">
    <max-delivery-attempts>12</max-delivery-attempts>
    <redelivery-delay>300000</redelivery-delay>
</address-setting>
It's incidentally easy to add a "circuit breaker", another of Michael's patterns, to the above using an exception-strategy. I'll demonstrate that in another post.