tag:blogger.com,1999:blog-31996406391082113892024-03-08T08:04:55.622-05:00/dev/zeroJohn D'Emic's blog about programming, integration, system administration, etc...johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.comBlogger28125tag:blogger.com,1999:blog-3199640639108211389.post-31112252151687379282011-07-28T17:19:00.003-04:002011-07-28T22:16:21.635-04:00Generating Roo AspectJ ITD's from MavenI've been messing around with Spring Roo the last couple of months and something that continually bothered me was the seeming inability to generate the ITD and other Roo artifacts from Maven. This typically means you need to check the artifacts in, which is particularly annoying in the case of ITD's. I found a workaround, however, using the Maven antrun plugin. Here's what I added to my pom.xml to get it to work:<br /><br /><pre name="code" class="xml"><br /><build><br/> <plugins><br/> <plugin><br/> <artifactId>maven-antrun-plugin</artifactId><br/> <executions><br/> <execution><br/> <id>compile</id><br/> <phase>validate</phase><br/> <configuration><br/> <tasks><br/> <exec executable="roo"><br/> <arg value="quit"/><br/> </exec><br/> </tasks><br/> </configuration><br/> <goals><br/> <goal>run</goal><br/> </goals><br/> </execution><br/> </executions><br/> </plugin><br/>...<br /></pre><br /><br />The above has the antrun plugin run roo on Maven's validation phase with the quit command. This causes Roo to start, generate missing artifacts and quit immediately. Since the antrun task doesn't fork Maven will wait until it exits prior to compiling and all should be well. This hopefully will help someone out.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-22976746485398614612011-06-15T14:38:00.004-04:002011-06-15T14:45:39.997-04:00MongoDB Transport 3.1.2.0 ReleasedI released version 3.1.2.0 of the MongoDB transport today. 
This revs the transport for version 3.1.2 of Mule and also introduces contentType support for GridFS endpoints (thanks <a href="http://twitter.com/#!/lodakai">Christer</a>!)johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-72218263143963268022011-05-16T10:31:00.003-04:002011-05-16T10:48:04.859-04:00MongoDB Transport 3.1.0.4 ReleasedI just released version 3.1.0.4 of the MongoDB transport. The connector configuration now requires MongoURIs to configure the hosts and/or replica sets to connect to (and, optionally, the database). You'll need to remove the hostname and port attributes in your current connectors and replace them with the uri attribute to upgrade.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-75947899474355683112011-04-15T00:03:00.001-04:002011-04-15T00:04:38.039-04:00Version 3.1.1 of the SAAJ Module ReleasedI just released a Mule 3.1.1 compatible version of the SAAJ module. Documentation is on <a href="http://www.mulesoft.org/documentation/display/SAAJ/Home">MuleForge</a>.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-43307895726681197792011-03-25T12:10:00.004-04:002011-03-25T12:32:57.157-04:00Fear, Loathing and Windows Server 2008Let me preface by stating I loathe and despise Windows. Excluding the desktop product from this rant, I had the unfortunate experience of dealing with their server product (in the forms of IIS, Sharepoint and AD) over a two-year period. I've been lucky enough, since then, to avoid dealing with Windows directly. That is, however, until this week. As such, in an effort to save others some pain, here are some things I learned:<br /><br /><ul><br /><li>The Java administrative tools (jstack, jmap, etc.) don't work over RDP if you're connecting to a Java process running as a service.
You instead get this helpful error: "Not enough storage is available to process this command". The solution is to download "<a href="http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx">PsExec</a>", then preface the java commands with "PsExec -s".</li><br /><li>When running a 32-bit JVM, <b>lowering</b> the maximum heap size helps avoid this exception: "java.lang.OutOfMemoryError: unable to create new native thread". Details are <a href="http://blog.egilh.com/2006/06/2811aspx.html">here</a>.</li><br /><li>There appear to be issues using NIO with Windows 2008 under concurrent load. If you're using the Apache Commons FileUtils stuff and are seeing weird NIO exceptions, try using the Guava equivalents instead. The latter doesn't appear to use NIO under the covers.</li><br /></ul>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-69636523560379742752011-03-21T10:06:00.002-04:002011-03-21T10:08:58.733-04:00MongoDB Transport 3.1.0.2 ReleasedI released version 3.1.0.2 of the MongoDB transport. This release, thanks to Craig Skinfill at <a href="http://shopopensky.com">OpenSky</a>, introduces the "gridfs-file" message processor. gridfs-file allows you to load a single file from GridFS using either the payload of the message or a query. Documentation is available on <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">MuleForge</a>.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-40142265880326544732011-02-09T17:13:00.003-05:002011-02-09T17:21:56.596-05:00MongoDB Transport 3.1.0.1 ReleasedThis blog is quickly becoming a release log for the MongoDB transport - I hope to have some content up soon. Until then, I just released version 3.1.0.1 of the transport.
Special thanks to Craig Skinfill and Steve Surowiec at <a href="http://shopopensky.com">OpenSky</a> for suggesting and helping me test the fixes and new features. Here's some of what's new and improved:<br /><br /><ul><br /><li>Query support on outbound-endpoints</li><br /><li>Misc upsert fixes</li><br /><li>Support for queries on updates</li><br /><li>Expression support added for queries</li><br /></ul><br /><br />I didn't anticipate, and don't like, how complex behavior has become with the outbound-endpoints. As such, I'm working on having the outbound-endpoints mirror the JDBC transport more closely, with nested configuration elements, rather than message properties, dictating the behavior of the endpoint.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-10812649774464507282011-01-19T12:57:00.006-05:002011-02-09T16:53:51.301-05:00MongoDB Transport 3.1.0.0 ReleasedI just released a new version of the Mule MongoDB transport. This is a major release that fixes a few bugs and introduces some new features. Thanks to <a href="http://www.dossot.net/">David</a> for suggesting and testing a lot of the new stuff.<br /><br />Here's a list of what has changed:<br /><ul><br /><li>Mule 3.1 support</li><br /><li>Support for replica sets</li><br /><li>"dispatch_mode" support to specify how messages are dispatched to Mongo.
This allows you to specify "insert", "update" or "delete" on the outbound message to control how the payload impacts Mongo.</li><br /><li>A query parameter for updates has been added, allowing you to specify which documents will be updated.</li><br /><li>upsert and multi support is also in place for updates.</li><br /><li>Dependencies are now properly scoped.</li><br /><li>WriteConcerns are now supported.</li><br /><li>Deletion is now supported via dispatch_mode.</li><br /><li>Global endpoints are now fully supported.</li><br /><li>Flows are now fully supported.</li><br /></ul><br /><br />As usual, full documentation is available on <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">MuleForge</a>.<br /><br />johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-8213836047703377942010-12-12T20:45:00.003-05:002010-12-12T20:55:49.766-05:00MongoDB Transport 3.0.1.1 ReleasedI just released version 3.0.1.1 of the MongoDB transport. This version populates an "objectId" outbound property, corresponding to the "_id" of the saved document or file, on messages sent to a collection or bucket. You can reference this property when chaining mongodb endpoints in a chaining-router or flow. This is primarily useful in creating subsequent DBRefs in the processing chain.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-85217017525868760762010-11-19T14:51:00.002-05:002010-11-19T14:53:36.794-05:00MongoDB Transport 3.0.1.0 ReleasedJust released version 3.0.1.0 of the MongoDB transport with, not surprisingly, full support for Mule 3.0.1.
As usual, details are on the <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">Muleforge</a> page.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-16157463194030431612010-09-29T10:35:00.007-04:002010-09-29T12:28:56.634-04:00Annotations in Mule 3The official Mule 3 release was a couple of weeks ago; Ross covers a lot of the new features <a href="http://blogs.mulesoft.org/say-hello-to-mule-3/">here</a>. I've been using the release candidates for about two months in a few contexts. The <a href="http://johndemic.blogspot.com/2010/08/mule-3x-support-for-mongodb-transport.html">MongoDB transport</a> and <a href="http://johndemic.blogspot.com/2010/08/mule-3x-support-for-saaj-module.html">SAAJ module</a> have SNAPSHOT support for 3.x. <a href="http://blog.dossot.net/">David</a> and I are working on upgrading the <a href="http://www.manning.com/dossot/">book</a> examples over to Mule 3. I'm additionally using Mule 3 on a new project. We started on RC2 and are readying for an alpha release next week.<div><br /></div><div>So, to sum it up, I've spent a lot of time recently with Mule 3. In doing so, I came to appreciate one of the nice "themes" of Mule 3.0: the reduction of configuration noise. <a href="http://www.mulesoft.org/documentation/display/MULE3USER/Annotations#Annotations-AnnotationsReference">Annotation</a> support is a feature that particularly reinforces this. I refactored a couple of services that were using a Quartz inbound-endpoint to instead use the new @Schedule annotation. Here's what I did; hopefully it will illustrate how the XML configuration is cut down by some of the new annotations.</div><div><br /></div><div>The service in question was responsible for executing a configurable command on the system and sending the output to an FTP server.
An additional requirement is that the filename on the FTP server needs to have the timestamp of the command execution embedded in it.</div><div><br /></div><div>Here's how I refactored the implementation to use Mule 3 annotations. First off, here's the service class:</div><br /><pre name="code" class="java"><br />/**<br />* Simple facility to run a system command and return the output as a String.<br />*/<br />public class CommandRunner {<br /><br /> String command;<br /><br /> @Schedule(interval = 5000)<br /> public String runCommand(@OutboundHeaders Map<String, Object> outHeaders) throws Exception {<br /><br /> ByteArrayOutputStream outputStream = new ByteArrayOutputStream();<br /> PumpStreamHandler streamHandler = new PumpStreamHandler(outputStream, System.out);<br /><br /> CommandLine commandLine = CommandLine.parse(command);<br /> DefaultExecutor executor = new DefaultExecutor();<br /> executor.setStreamHandler(streamHandler);<br /> int exitValue = executor.execute(commandLine); <br /><br /> if (exitValue < 0) {<br /> throw new RuntimeException("Error running: " + command);<br /> }<br /><br /> outputStream.close();<br /> outHeaders.put("COMMAND_TIME_STAMP",new Date().getTime());<br /><br /> return new String(outputStream.toByteArray());<br /> }<br /><br /> public void setCommand(String command) {<br /> this.command = command;<br /> }<br /><br />}<br /><br /></pre><br /><div><br />The @Schedule annotation invokes the "runCommand" method every 5 seconds. Note that there is a Map being passed as the argument to runCommand(). This is annotated with @OutboundHeaders, indicating the contents of this map will be available as message properties on the outbound endpoint. We'll use this to set the COMMAND_TIME_STAMP header to contain the timestamp the command was run. That's it from the Java side. 
Let's see how this is wired up in the XML config:</div><br /><pre name="code" class="xml"><br /><model name="Integration Services"><br /> <service name="CLI to FTP Service"><br /> <component><br /> <spring-object bean="commandRunner"/><br /> </component><br /> <outbound><br /><br /> <pass-through-router><br /> <ftp:outbound-endpoint user="${ftp.user}"<br /> password="${ftp.password}"<br /> host="${ftp.host}"<br /> port="${ftp.port}"<br /> path="${ftp.path}"<br /> outputPattern="#[header:COMMAND_TIME_STAMP].txt"><br /> </ftp:outbound-endpoint><br /> </pass-through-router><br /> </outbound><br /> </service><br /></model><br /></pre><br /><div><br /><br /><br />Using the @Schedule annotation eliminates the need for explicitly configuring an inbound-endpoint, which is missing from the above. We also don't need to jump through hoops with the MuleMessage in either the service class or with message-properties-transformers to use the COMMAND_TIME_STAMP to name the FTP file.<br /><br /></div><div>As I mentioned in my <a href="http://johndemic.blogspot.com/2010/05/component-implementation-guidelines.html">diatribe</a> about implementing component classes, I prefer to avoid implementing <a href="http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/lifecycle/Callable.html">Callable</a> on my components. I find doing so couples the code too tightly to the Mule runtime. Using the annotations allows you to implement functionality previously required by getting at the MuleContext without extending or implementing a Mule class. This is a best-of-both-worlds scenario. Your component code can still stay decoupled from Mule, to ease unit testing, while indirectly giving you access to common pieces of the MuleContext (ie, the message payload, inbound and outbound headers, etc.)</div><div><br /></div><div>So the annotation support, for me, is a welcome addition to the available configuration options. 
In addition to reducing XML noise, it also streamlines integrating POJOs with the message flow.</div><div><br /></div><div>The improved <a href="http://www.mulesoft.org/documentation/display/MULE3USER/Improved+Integration+with+JBoss+jBPM">jBPM</a> support in Mule 3 is another feature I'm excited about. For one reason or another, jBPM has been a constant in practically every Mule project I've worked on in the last year. I'll cover that in the next post.</div><div><br /></div><div><br /></div><div><br /></div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com4tag:blogger.com,1999:blog-3199640639108211389.post-22906960127494161312010-08-10T10:08:00.002-04:002010-08-10T10:10:28.901-04:00Mule 3.x Support for the SAAJ ModuleI just committed Mule 3.x support for the SAAJ Module. As with the MongoDB transport 3.x upgrade, this isn't in Maven. You'll need to build it from the SVN branch <a href="http://svn.muleforge.org/saaj/branches/mule-3">here</a>.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-12258665453991415832010-08-10T09:21:00.003-04:002010-08-10T09:23:49.725-04:00Mule 3.x Support for the MongoDB TransportI've just created a branch of the MongoDB transport with support for Mule 3.0.0-M4. This isn't in the Maven repo yet, but you can build and install locally for yourself from <a href="http://svn.muleforge.org/mongodb/branches/mule-3/">here</a>.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-55039641861087783072010-08-05T07:28:00.003-04:002010-08-05T07:50:25.390-04:00MongoDB Transport 2.2.1.5 ReleasedI just pushed version 2.2.1.5 of the <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">MongoDB transport</a> to Muleforge.
This release introduces two transformers, mongodb:db-file-to-byte-array and mongodb:db-file-to-input-stream, that simplify getting at the contents of GridFSDBFiles.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-89253033415697767472010-07-22T17:01:00.003-04:002010-07-22T17:11:41.119-04:00MongoDB Transport 2.2.1.4 ReleasedI just released version 2.2.1.4 of the MongoDB transport. New features include:<br /><br />- Full support for GridFS.<br />- Support for sub-collections (i.e., "foo.sub1", "foo.sub2", etc.)<br />- Synchronous invocation now returns the Map as returned from the insert() call. This allows you to access the "_id" and "_ns" keys after persisting an object. This is particularly useful in conjunction with a chaining-router.<br /><br />I additionally fixed a few minor bugs. The MuleForge project page is <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">here</a> with updated documentation.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-38908299198831208162010-06-30T21:51:00.003-04:002010-06-30T21:54:34.721-04:00MongoDB TransportI just released a Mule transport for <a href="http://www.mongodb.org/">MongoDB</a>. Full details, documentation and download links are <a href="http://www.mulesoft.org/documentation/display/MONGODB/Home">here</a>.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-81264459122328989252010-05-07T21:43:00.004-04:002010-05-07T21:56:39.458-04:00jmxshI recently discovered <a href="http://code.google.com/p/jmxsh/">jmxsh</a>, which provides TCL scripting facilities to interact with JMX. It's pretty impressive when compared to something like jconsole.
For instance, I was able to write the following little script to query HornetQ for the number of messages on a queue:<br /><pre name="code" class="tcl"><br />set host [lindex $argv 0]<br />set port [lindex $argv 1]<br />set queue [lindex $argv 2]<br /><br />jmx_connect -h $host -p $port<br />set message_count [jmx_get -m org.hornetq:module=JMS,name="$queue",type=Queue MessageCount]<br /><br />puts "$message_count"<br /><br />jmx_close<br /></pre><br /><br />I can then run the script like this and it prints the number of messages on the DLQ to stdout:<br /><br /><pre name="code"><br />./jmxsh queue_count.tcl localhost 3000 DLQ<br /></pre><br /><div><br />While the use of TCL might seem obtuse (i.e., why not Groovy?), it makes sense from the standpoint of a sysadmin. The JMX-agnostic language allows them to script against an app's MBeans with minimal exposure to Java or the JMX APIs. <br /></div><div><br /></div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-65194305800118828192010-05-02T16:26:00.008-04:002010-05-02T18:15:47.331-04:00Component Implementation Guidelines<div>I've recently worked on a couple of <a href="http://www.mulesoft.com/mule-esb-open-source-esb">Mule ESB</a> projects that make heavy use of components. The projects were similar in the sense that the component logic was implemented from scratch. We were not, for instance, exposing existing Java classes out over a transport. </div><div><br /></div><div>It was challenging at times to implement these components while maintaining best practices around decoupling the code from Mule and unit-testing it. Below are some approaches and guidelines for implementing components that keep these goals in mind:</div><div><ul><li>Be careful when implementing <a href="http://www.mulesoft.org/docs/site/2.2.1/apidocs/org/mule/api/lifecycle/Callable.html">Callable</a>.
Implementing the Callable interface gives you access to the <a href="http://www.mulesoft.org/docs/site/2.2.1/apidocs/org/mule/api/MuleEventContext.html">MuleEventContext</a> when the "onCall" method is invoked. This allows you to invoke the endpoint's transformers, access the MuleMessage directly, stop message processing, etc. It also tightly couples your component code to the Mule infrastructure, making the component more difficult to unit-test in isolation as well as refactor if the Mule API changes. See if your component's reliance on the MuleEventContext can be refactored out to transformers or exception-strategies.</li><li>Favor using a <a href="http://www.eaipatterns.com/CanonicalDataModel.html">canonical data model</a> over interacting with MuleMessage directly. You can use a transformer to move the external data into a common format, like a domain model class, an XML document or a hash-map. This again decouples your code from the Mule API, making unit-testing, refactoring, etc. easier. </li><li>Hide <a href="http://www.mulesoft.org/docs/site/2.2.1/apidocs/org/mule/module/client/MuleClient.html">MuleClient</a> usage behind a service interface. MuleClient is an extremely useful facility to send and receive messages over arbitrary transports. Its usage introduces the same difficulties as the items above. This can be mitigated by abstracting the MuleClient invocations into a service class that can be injected into the component. A mock implementation of this class can then be used in unit tests. </li><li>Consider using jBPM to orchestrate activity and maintain state. Component code, along with MuleClient, can be a tempting place to orchestrate, compose and maintain state across endpoint invocations. But be careful when considering this approach. You'll need to consider what happens if the component logic is interrupted during execution, if the state needs to survive Mule restarts, if the state needs to be shared between Mule nodes, etc. 
Many, if not most, of these issues can be addressed by using jBPM in conjunction with Mule's BPM transport.</li><li>Avoid using hardcoded strings in endpoint names with MuleClient. This is obvious but worth mentioning. Try to centralize all the endpoint names in one place (in an enum, for instance) and inject this into your service facades. This helps avoid issues with typos in endpoint addresses.</li></ul><div>Hopefully some of the above will give you component implementations and configurations that are easy to unit-test and refactor. Any other tips or guidelines are welcomed in the comments :)</div><div><br /></div></div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com2tag:blogger.com,1999:blog-3199640639108211389.post-27053338296217242332010-01-25T17:50:00.006-05:002010-01-25T19:48:12.933-05:00Mule and JMS as Decoupling MiddlewareI just finished reading <a href="http://www.michaelnygard.com/">Michael Nygard</a>'s excellent <a href="http://www.pragprog.com/titles/mnee/release-it">"Release It!"</a>. His war stories echo a lot of my own experience, particularly when I was doing ops work full-time. It's a definite must-read for developers and system administrators alike.<br /><br />In Chapter 5 Michael describes the "Decoupling Middleware" pattern. He suggests using a messaging broker to decouple remote service invocation. This allows you to leverage the features of the messaging broker, like durability and delayed re-delivery, to improve the resiliency of communication with a remote service. <br /><br />The following demonstrates how to use Mule and JMS to decouple interaction with Twitter's API. 
"Tweet Service"<br />transactionally accepts messages from a JMS queue and submits them to Twitter's REST API:<br /><br /><pre name="code" class="xml"><br /><model name="Twitter Services"><br /><br /> <service name="Tweet Service"><br /> <inbound> <br /> <jms:inbound-endpoint queue="tweets"><br /> <jms:transaction action="BEGIN_OR_JOIN"/><br /> </jms:inbound-endpoint><br /> </inbound><br /> <http:rest-service-component<br /> httpMethod="POST"<br /> serviceUrl="http://user:password@twitter.com/statuses/update.json"><br /> <http:requiredParameter key="status" value="#[payload:]"/><br /> </http:rest-service-component><br /> </service><br /></model><br /></pre><br /><br />Let's consider some of the benefits of this indirection:<br /><br /><ul><br /><li>The service can be taken down / brought up without fear of losing messages - they will queue on the broker and be delivered when the service is brought back up.</li><br /><li> The service can be scaled horizontally simply by adding additional instances (competing consumption off the queue)</li><br /><li>Messages can be re-ordered based on priority and sent to the remote service</li><br /><li>Requests can be re-submitted in the event of a failure.</li><br /></ul><br /><br />The last point is particularly important. Its common to encounter transient errors when integrating with remote services, particularly web-based ones. These errors are usually recoverable after a certain amount of time. If your JMS broker supports it, you can use delayed redelivery to periodically re-attempt the request. HornetQ supports this by configuring address-settings on the queue. 
The following address-settings for the "tweets" queue specify 12 redelivery attempts with a 5-minute interval between each attempt (the redelivery-delay is in milliseconds: 300000 ms = 5 minutes, for up to an hour of retries):<br /><br /><pre name="code" class="xml"><br /><address-setting match="jms.queue.tweets"><br /> <max-delivery-attempts>12</max-delivery-attempts><br /> <redelivery-delay>300000</redelivery-delay><br /></address-setting><br /></pre><br /><br />It's incidentally easy to add a "circuit breaker", another one of Michael's patterns, to the above using an exception-strategy. I'll demonstrate that in another post.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com3tag:blogger.com,1999:blog-3199640639108211389.post-51813980906116790042009-09-24T14:17:00.003-04:002009-09-24T14:28:47.697-04:00HornetQ and MuleThe lack of <a href="https://mq.dev.java.net/">OpenMQ</a>'s delivery retry options has led me down the road of evaluating JMS brokers again. I luckily didn't have to look very far. JBoss' recently released <a href="http://jboss.org/hornetq">HornetQ</a> broker is very impressive. Getting it going with Mule is trivial. 
Here's the config I'm using, which connects to a local HornetQ instance using Netty (instead of JNDI):<br /><br /><pre name="code" class="xml"><br /> <spring:bean name="transportConfiguration"<br /> class="org.hornetq.core.config.TransportConfiguration"><br /> <spring:constructor-arg<br /> value="org.hornetq.integration.transports.netty.NettyConnectorFactory"/><br /> </spring:bean><br /><br /> <spring:bean name="connectionFactory"<br /> class="org.hornetq.jms.client.HornetQConnectionFactory"><br /> <spring:constructor-arg ref="transportConfiguration"/><br /> <spring:property name="minLargeMessageSize" value="250000"/><br /> <spring:property name="cacheLargeMessagesClient" value="false"/><br /> </spring:bean><br /><br /> <jms:connector name="jmsConnector"<br /> connectionFactory-ref="connectionFactory"<br /> createMultipleTransactedReceivers="false"<br /> numberOfConcurrentTransactedReceivers="1"<br /> specification="1.1"><br /> </jms:connector><br /></pre>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com11tag:blogger.com,1999:blog-3199640639108211389.post-36542055169231888042009-09-01T22:52:00.008-04:002009-09-03T14:27:16.617-04:00Mule Endpoint QoS with Esper<a href="http://esper.codehaus.org/">Esper</a> is a really slick, open-source <a href="http://en.wikipedia.org/wiki/Complex_Event_Processing">CEP</a> engine I've been playing with to monitor traffic on <a href="http://www.mulesource.org/">Mule</a> endpoints. Monitoring endpoints with port checks, JMX and log monitoring gives a lot of insight into the health of individual Mule instances, but offers little insight when external services fail. An external producer of JMS messages to a queue may fail, a database may have a slow-query situation where rows take longer than expected to return, or an SMTP outage may stop messages from being delivered to an IMAP server. 
Any of these situations would cause fewer messages than expected to be delivered to a JMS, JDBC or IMAP endpoint.<div><br /></div><div>By using a wiretap-router or envelope-interceptor on an inbound-endpoint, data about incoming messages can be sent to a CEP engine to construct an event stream. A query can then be written that produces an event when fewer messages than expected are seen on the stream. </div><div><br /></div><div>A quick demonstration of this follows. Here are a couple of Groovy scripts that will be wired up with Spring and used to monitor a CXF inbound-endpoint on a Mule instance with Esper.</div><br /><pre name="code" class="java"><br />import org.mule.api.lifecycle.Callable<br />import org.mule.api.MuleEventContext<br /><br />class EventInjector implements Callable {<br /><br /> def esperService<br /><br /> public Object onCall(MuleEventContext context) {<br /> esperService.getEPRuntime().sendEvent(context.getMessage())<br /> }<br /><br />}<br /></pre><br /><br /><div><br />This component will be used to receive messages off the wiretap and inject them into the event stream. 
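Before looking at the listener, it may help to see the semantics the Esper query will implement: count events in a time window and flag when the count falls below a threshold. Here is a rough plain-Java sketch of that idea — an illustration only, not Esper's implementation, and the class and method names are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustration of a sliding-window "fewer than N events" check, similar in
// spirit to the EPL time-window query used with Esper. Hypothetical class;
// Esper implements this internally.
public class WindowCountCheck {

    private final long windowMillis;
    private final int threshold;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public WindowCountCheck(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    // Record one observed message at the given time (epoch millis).
    public void onEvent(long now) {
        timestamps.addLast(now);
    }

    // True when fewer than 'threshold' events fall in the window ending at 'now'.
    public boolean belowThreshold(long now) {
        while (!timestamps.isEmpty() && timestamps.peekFirst() < now - windowMillis) {
            timestamps.removeFirst();
        }
        return timestamps.size() < threshold;
    }

    public static void main(String[] args) {
        // 10 second window, alert when fewer than 5 messages are seen.
        WindowCountCheck check = new WindowCountCheck(10_000, 5);
        check.onEvent(0);
        check.onEvent(1_000);
        check.onEvent(2_000);
        System.out.println(check.belowThreshold(5_000)); // prints "true": 3 < 5
    }
}
```

In the real setup, Esper maintains this window for us and we only write the declarative query.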
The next component will be used to register listeners on the stream.<br /><br /><pre name="code" class="java"><br />import com.espertech.esper.client.UpdateListener<br />import com.espertech.esper.client.EventBean<br /><br />import org.mule.module.client.MuleClient<br /><br />class MuleEventListener implements UpdateListener {<br /><br /> def expression<br /> def payloadExpression<br /> def esperService<br /> def endpoint<br /><br /> def initialize() {<br /> def statement = esperService.getEPAdministrator().createEPL(expression);<br /> statement.addListener(this)<br /> }<br /><br /> public void update(EventBean[] newEvents, EventBean[] oldEvents) {<br /> def client = new MuleClient()<br /> def event = newEvents[0];<br /> client.dispatch(endpoint, event.get(payloadExpression), null)<br /> }<br /><br />}<br /></pre><br />This component code takes two Esper expressions. <code>expression</code> queries the event stream for events. <code>payloadExpression</code> populates the message payload of the new message. <code>endpoint</code> is where this message will be published to. 
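The update() callback follows Esper's observer-style listener contract: a statement pushes each new result row to every registered listener. Stripped of the Esper and Mule APIs, the pattern looks roughly like this (hypothetical names; the real EPStatement/UpdateListener interfaces differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Bare-bones observer sketch of the statement/listener relationship.
// A real Esper statement pushes EventBean updates; here a plain long
// (the window's message count) stands in for the event data.
public class CountStatement {

    private final List<Consumer<Long>> listeners = new ArrayList<>();

    public void addListener(Consumer<Long> listener) {
        listeners.add(listener);
    }

    // Invoked by the "engine" when the query emits a new result row.
    public void emit(long count) {
        for (Consumer<Long> listener : listeners) {
            listener.accept(count);
        }
    }

    public static void main(String[] args) {
        CountStatement statement = new CountStatement();
        // Stands in for MuleEventListener.update() dispatching to an endpoint.
        statement.addListener(count -> System.out.println("alert payload: " + count));
        statement.emit(3); // prints "alert payload: 3"
    }
}
```

The Groovy class plays the listener role here, with MuleClient.dispatch() as the side effect.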
Here is the Spring beans config that wires the two component scripts with Esper.<br /><br /><pre name="code" class="xml"><br /><beans xmlns="http://www.springframework.org/schema/beans"<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br /> xmlns:lang="http://www.springframework.org/schema/lang"<br /> xsi:schemaLocation=<br /> "http://www.springframework.org/schema/beans<br /> http://www.springframework.org/schema/beans/spring-beans-2.5.xsd<br /> http://www.springframework.org/schema/lang<br /> http://www.springframework.org/schema/lang/spring-lang-2.5.xsd<br /> "><br /><br /> <bean id="esperService" scope="singleton"<br /> class="com.espertech.esper.client.EPServiceProviderManager"<br /> factory-method="getDefaultProvider"/><br /><br /> <lang:groovy id="eventInjector"<br /> script-source="classpath:/EventInjector.groovy"><br /> <lang:property name="esperService" ref="esperService"/><br /> </lang:groovy><br /><br /> <lang:groovy id="mininumMessageListener"<br /> script-source="classpath:/MuleEventListener.groovy"<br /> init-method="initialize"><br /> <lang:property name="esperService" ref="esperService"/><br /> <lang:property name="endpoint" value="jms://topic:alerts"/><br /> <lang:property name="expression" <br /> value="select count(*) from org.mule.api.MuleMessage.win:time_batch(10, 'FORCE_UPDATE, START_EAGER') having count(*) < 5"/><br /> <lang:property name="payloadExpression" value="count(*)"/><br /> </lang:groovy><br /></beans><br /></pre><br /><br /><div><br />"mininumMessageListener" will send a JMS message to the "alerts" topic when less then 5 messages appear on the stream in a 10 second window. 
The following Mule config pulls all the above together.</div><br /><br /><pre name="code" class="xml"><br /><mule xmlns="http://www.mulesource.org/schema/mule/core/2.2"<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br /> xmlns:spring="http://www.springframework.org/schema/beans"<br /> xmlns:test="http://www.mulesource.org/schema/mule/test/2.2"<br /> xmlns:jms="http://www.mulesource.org/schema/mule/jms/2.2"<br /> xmlns:vm="http://www.mulesource.org/schema/mule/vm/2.2"<br /> xmlns:cxf="http://www.mulesource.org/schema/mule/cxf/2.2"<br /> xmlns:mule-xml="http://www.mulesource.org/schema/mule/xml/2.2"<br /> xsi:schemaLocation="<br /> http://www.mulesource.org/schema/mule/core/2.2 http://www.mulesource.org/schema/mule/core/2.2/mule.xsd<br /> http://www.mulesource.org/schema/mule/cxf/2.2 http://www.mulesource.org/schema/mule/cxf/2.2/mule-cxf.xsd<br /> http://www.mulesource.org/schema/mule/test/2.2 http://www.mulesource.org/schema/mule/test/2.2/mule-test.xsd<br /> http://www.mulesource.org/schema/mule/jms/2.2 http://www.mulesource.org/schema/mule/jms/2.2/mule-jms.xsd<br /> http://www.mulesource.org/schema/mule/vm/2.2 http://www.mulesource.org/schema/mule/vm/2.2/mule-vm.xsd<br /> http://www.mulesource.org/schema/mule/xml/2.2 http://www.mulesource.org/schema/mule/xml/2.2/mule-xml.xsd<br /> http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd"><br /><br /><br /><spring:beans><br /><spring:import resource="classpath:spring-config.xml"/><br /></spring:beans><br /><br /><vm:connector name="vmConnector" queueEvents="true"/><br /><br /><cxf:connector name="cxfConnector"/><br /><br /><model name="main"><br /><br /><service name="soapService"><br /> <inbound><br /> <cxf:inbound-endpoint address="http://localhost:9756/people" connector-ref="cxfConnector"<br /> frontend="simple"<br /> serviceClass="org.mule.tck.testmodels.services.PeopleService"/><br /> <wire-tap-router><br /> <vm:outbound-endpoint 
path="cep.in"/><br /> </wire-tap-router><br /> </inbound><br /> <test:web-service-component/><br /></service><br /><br /><service name="Complex Event Processing Service"><br /> <inbound><br /> <vm:inbound-endpoint path="cep.in"/><br /> </inbound><br /> <component><br /> <spring-object bean="eventInjector"/><br /> </component><br /></service><br /><br /></model><br /><br /></mule><br /></pre><br /></div><br /><div><br /></div><div>This example is simplistic, but hopefully the usefulness of this sort of approach is obvious. One particular improvement would be to use JMS instead of VM as the target of the wiretap. In this scenario, "Complex Event Processing Service" could be hosted in a separate Mule instance dedicated to event analysis. This would also allow horizontally load-balanced "soapService" instances to contribute to the same event stream.</div><div><br /></div><div>I'm additionally using the MuleMessage as the event type, which offers only a limited view into the messages. A more useful implementation would operate on the payload of the messages, via Maps, POJO's or XML. The online Esper documentation is extremely well-written and offers examples to get that going. </div><div><br /></div><div><br /></div><div><br /><div><br /></div><div><br /></div></div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com0tag:blogger.com,1999:blog-3199640639108211389.post-90396134210327880752009-06-17T00:36:00.005-04:002009-06-17T11:10:18.130-04:00SAAJ Transport -> SAAJ ModuleWith the time I had off work, editing the <a href="http://www.manning.com/dossot/">book</a> and (taking breaks from) feeding the baby a few weeks ago, I naturally decided to add receiver functionality to the SAAJ transport. In a sleep-deprived state I considered extending the HTTP and JMS transports to use SAAJ to receive and dispatch messages. After chatting with <a href="http://ddossot.blogspot.com/">David</a>, however, a cleaner approach seemed to be in order. 
I started by refactoring the code from the SAAJ MessageAdapter, which moved message payloads back and forth between SOAP messages and plain XML, into dedicated transformer implementations: soap-message-to-document-transformer and document-to-soap-message-transformer. <div><br /></div><div><div>The refactored implementation allows you to use the SAAJ transformers to transform message payloads over arbitrary transports, like VM, XMPP or file, in addition to HTTP and JMS. The transformers are available from the (renamed) <a href="http://www.mulesource.org/display/SAAJ/Home">SAAJ Module</a>. Cursory documentation and a few examples are available. The distribution link isn't working yet, but you can download the jars and get access to the pom from <a href="http://repository.muleforge.org/org/mule/modules/mule-module-saaj/2.2.1.0/">here</a>.</div><div><br /></div></div>
It would also be nice if this endpoint dynamically set the SOAP message headers, extracted the SOAP body from the response and applied the response transformers (and perhaps got me a beer.)<br /><br />I didn't see an obvious way to do this with the CXF transport, so I took a stab at implementing such a transport myself. I had used <a href="https://saaj.dev.java.net/">SAAJ</a> in a web-services proxy project I worked on last year and it seemed like a good fit. As such, I present the <a href="http://www.mulesource.org/display/SAAJ/Home">SAAJ-Transport</a>. You can currently use it to pass arbitrary XML that is used as the SOAP body in messages sent to a SOAP endpoint. The endpoint handles constructing the SOAP message for you, adding the headers and extracting the SOAP body from the response (it won't get you a beer...yet.) Here's an example using a chaining-router to send a SOAP message and forward the response to a VM endpoint.<br /><br /><pre name="code" class="xml"><br /><outbound><br /> <chaining-router><br /> <saaj:outbound-endpoint address="${service.url}" synchronous="true"><br /> <transformers><br /> <transformer ref="templateToRequest"/><br /> <mulexml:xml-to-dom-transformer returnClass="org.w3c.dom.Document"/><br /> <saaj:document-to-soap-message-transformer/><br /> <saaj:mime-header-transformer key="Cookie" value="#[header:SERVICE_SOAP_SESSION_ID]"/><br /> </transformers><br /> </saaj:outbound-endpoint><br /> <vm:outbound-endpoint path="service.dispatcher"><br /> <transformers><br /> <saaj:soapbody-to-document-transformer/><br /> <mulexml:dom-to-xml-transformer returnClass="java.lang.String"/><br /> </transformers><br /> </vm:outbound-endpoint><br /> </chaining-router><br /></outbound> <br /></pre><br /><br />The "document-to-soap-message-transformer" takes an org.w3c.dom.Document, transforms it to a SAAJ SOAPMessage and uses SAAJ to invoke the web-service. The mime-header-transformer adds a MIME header to the message (in this case a Cookie). 
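For the curious, the wrapping and unwrapping these transformers perform map fairly directly onto the standard javax.xml.soap (SAAJ) API. Here's a rough sketch — the payload element and cookie value are made up, and this only approximates what the transport does internally:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import org.w3c.dom.Document;

public class SoapWrapSketch {

    // Roughly what document-to-soap-message-transformer does: wrap a DOM
    // Document in a SOAP envelope, then add a MIME header to the message.
    public static SOAPMessage wrap(Document payload, String cookie) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        message.getSOAPBody().addDocument(payload);
        message.getMimeHeaders().addHeader("Cookie", cookie);
        message.saveChanges();
        return message;
    }

    // Roughly what soapbody-to-document-transformer does: pull the body
    // content back out as a standalone Document.
    public static Document unwrap(SOAPMessage message) throws Exception {
        return message.getSOAPBody().extractContentAsDocument();
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream("<getPerson/>".getBytes("UTF-8")));
        SOAPMessage soap = wrap(doc, "SERVICE_SOAP_SESSION_ID=abc123");
        System.out.println(unwrap(soap).getDocumentElement().getTagName());
    }
}
```

The actual wire call then goes through a SOAPConnection against the endpoint address, which the transport manages for you.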
Existing properties on the MuleMessage will be added to the SOAP header. When the response is received, the transport will extract the SOAPBody and return it as the synchronous response, as well as set any SOAP headers as properties on the MuleMessage. In this case, the response is transformed back to a Document, then to a String, then finally passed out on the VM endpoint.<br /><br />I'm hoping next week to get full documentation and examples up on the MuleForge page. I'm also planning to work on receiver functionality. This would allow you to receive SOAP messages on an inbound-endpoint and have their bodies extracted, headers set as Mule properties, etc. I'm still working on getting the distribution together. For now you'll need to check out the source and use "mvn clean package" to build the jar or "mvn clean install" to get it into your local repository.johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com2tag:blogger.com,1999:blog-3199640639108211389.post-79701215737727384212009-03-03T14:11:00.006-05:002009-03-03T17:34:07.124-05:00Mule, Smooks and NagiosI've been working on upgrading our integration infrastructure on and off since the new year. This began with the OpenMQ migration I previously blogged about and was followed by upgrading our Mule 1.4.3 services to Mule 2.x. In addition to the technology changes, I wanted to use the upgrade as an excuse to clean up some messy stuff we had in place. An example of this was the amount of custom transformation we were doing in Java code.<div><br /></div><div>Our integration implementation makes heavy use of the <a href="http://www.enterpriseintegrationpatterns.com/CanonicalDataModel.html">Canonical Data Model</a> pattern. To sum it up briefly, we accept data in a variety of formats (XML, CSV or proprietary) and map it to an XML schema and/or Java object model. 
Beyond the standard transport transformations supplied by Mule, we needed to implement a zoo of custom transformers to move to the canonical format. I was looking for some sort of framework to mitigate this complexity.<br /><div><br /></div><div>I had read <a href="http://www.infoq.com/articles/event-streaming-with-smooks">this</a> article on InfoQ about <a href="http://www.smooks.org/">Smooks</a> around the time I was thinking about the above, and it seemed like a good fit, especially since there is a <a href="http://www.mulesource.org/display/SMOOKS/Home">Mule module</a> for it. To make a long story short, we were able to upgrade to Mule 2.x and, using Smooks, not have to implement any model-specific Mule transformers. </div><div><br /></div><div>Smooks works by streaming data in, transforming it and streaming it out. "Cartridges" supply various transformation capabilities and exist for common data formats like XML, JSON and CSV. The streaming model means the transformations don't require entire documents to be loaded into memory, so large documents can be transformed without the associated memory footprint.</div><div><br /></div><div>The transformations can be accomplished via XML configuration, assuming the data formats being used have an associated cartridge. This is also the case if the data can easily be converted to a format that has one. For instance, we have Nagios 2.x instances that use a semi-colon delimited status.log to write alert data. A simple Groovy script allowed me to replace the semi-colons with commas. I was then able to use the CSV cartridge to convert the data to XML.</div><div><br /></div><div>The above Nagios instances are being upgraded to Nagios 3.x. In Nagios 3.x, the status.log format is different. Instead of being semi-colon delimited, it is in a proprietary format that sort of looks like JSON. 
Here's an example:</div><br /><pre name="code" class="Cpp"><br />servicestatus {<br /> host_name=liro_url_laces0<br /> service_description=liro_https://acmesoft.com/VI/Pages/General/TestConn.aspx<br /> modified_attributes=0<br /> check_command=check_https!/VI/<br /> check_period=24x7<br /> notification_period=24x7<br /> check_interval=15.000000<br /> retry_interval=2.000000<br /> event_handler=<br /> has_been_checked=1<br />..<br />}<br /></pre>There obviously isn't a Smooks cartridge that supports this format. One solution might be to try to convert the above format to JSON. This would probably work but would likely be error-prone (and annoying to implement.) An alternative is to implement an XMLReader to parse the above file and spit out an XML Document. </div><div><br /></div><div>Smooks uses implementations of XMLReader to parse arbitrary file formats as XML. It then operates on the SAX stream or DOM as dictated by a configuration file. The following illustrates an implementation of the parse method of XMLReader that will parse the status.log format above:</div><br /><br /><pre name="code" class="Java"><br />public void parse(InputSource inputSource) throws IOException, SAXException {<br /> if (contentHandler == null) {<br /> throw new IllegalStateException("'contentHandler' not set. 
Cannot parse Nagios status.log stream.");<br /> }<br /><br /> String currentBlock = null;<br /><br /> contentHandler.startDocument();<br /> contentHandler.startElement(XMLConstants.NULL_NS_URI, "statusLog", "", EMPTY_ATTRIBS);<br /><br /> for (String line : getString(inputSource).split("\n")) {<br /><br /> if (line.startsWith("#"))<br /> continue;<br /><br /> if (line.contains("servicestatus")) {<br /> String block = StringUtils.deleteWhitespace(line.split("\\{")[0]);<br /> contentHandler.startElement(XMLConstants.NULL_NS_URI, block, "", EMPTY_ATTRIBS);<br /> currentBlock = block;<br /> }<br /><br /> if (currentBlock != null) {<br /> if (line.contains("=")) {<br /> String[] fields = line.split("=", 2);<br /> String fieldName = StringEscapeUtils.escapeXml(StringUtils.deleteWhitespace(fields[0].replace("=", "")));<br /><br /> contentHandler.startElement(XMLConstants.NULL_NS_URI, fieldName, "", EMPTY_ATTRIBS);<br /> if (fields.length > 1) {<br /> String content = StringEscapeUtils.escapeXml(fields[1]);<br /><br /> contentHandler.characters(content.toCharArray(), 0, content.length());<br /> } else {<br /> contentHandler.characters(" ".toCharArray(), 0, 1);<br /> }<br /> contentHandler.endElement(XMLConstants.NULL_NS_URI, fieldName, "");<br /> }<br /><br /> if (line.contains("}")) {<br /> contentHandler.endElement(XMLConstants.NULL_NS_URI, currentBlock, "");<br /> currentBlock = null;<br /> }<br /> }<br /><br /> }<br /><br /> contentHandler.endElement(XMLConstants.NULL_NS_URI, "statusLog", "");<br /> contentHandler.endDocument();<br />}</pre><br /><div>We can plug the reader into the Smooks XML config:</div><br /><pre name="code" class="xml"><br /><smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"<br /> xmlns:csv="http://www.milyn.org/xsd/smooks/csv-1.1.xsd"<br /> xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd"<br /> ><br /><br /> <params><br /> <param name="stream.filter.type">SAX</param><br /> <param 
name="default.serialization.on">false</param><br /> </params><br /><br /> <reader class="net.opsource.osb.reader.NagiosReader"/><br /><br /> <resource-config selector="servicestatus"><br /> <resource>org.milyn.delivery.DomModelCreator</resource><br /> </resource-config><br /><br /> <ftl:freemarker applyOnElement="statusLog"><br /> <ftl:template><!--<br /> <ApplicationResponseTimes><br /> <?TEMPLATE-SPLIT-PI?><br /> </ApplicationResponseTimes><br /> --><br /> </ftl:template><br /> </ftl:freemarker><br /><br /> <ftl:freemarker applyOnElement="servicestatus"><br /> <ftl:template>smooks/monitoring/application_response_time/metric.ftl</ftl:template><br /> </ftl:freemarker><br /><br /></smooks-resource-list><br /><br /></pre><br /><div><br />Now we plug it into Mule using the Smooks module and we're ready to go.<br /></div><br /><pre name="code" class="xml"><br /><smooks:transformer name="nagiosStatusLineToXML"<br /> configFile="smooks/monitoring/application_response_time/smooks-config.xml"<br /> resultType="STRING"/></pre><br /><br /><div>I'm pretty excited about this because I'm no longer writing a dedicated transformer for each domain model I'm mapping data to. I just need to implement XMLReaders when I come across a data format not already supported by a Smooks cartridge. </div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com2tag:blogger.com,1999:blog-3199640639108211389.post-85754316647530917532009-01-15T10:23:00.003-05:002009-01-15T11:02:19.651-05:00OpenMQ, Second Thoughts OpenMQ was fairly painless to get going with Mule. I opted to set it up in our staging environment as a conventional cluster with 2 nodes. In this scenario, clients can maintain a list of brokers to connect to in the event one of them fails. Load balancing between brokers might also be supported, but I haven't gone too far down that rabbit hole yet. 
<div><br /></div><div>Nor have I gone down the rabbit hole of HA clusters, which support failover of message data between brokers but require a shared database. Amongst other things, we're using JMS to distribute monitoring alerts. To do HA in production in a sensible manner, we'd need to back OpenMQ against our production MySQL cluster. Since we're sending monitoring data about the same production MySQL cluster over JMS, if the MySQL cluster failed we'd never hear about it. We're not (currently) using JMS for any sort of financial or mission-critical data, so losing a few messages in the event of a failover isn't too big of a deal for us as long as it's a rare occurrence. </div><div><br /></div><div><div>Getting clients to connect to OpenMQ was a little more painful. It manages all of its "objects" (connection factories, queues, topics, etc) in JNDI. We don't really use any sort of distributed JNDI infrastructure, so I started off using the filesystem JNDI context supplied with OpenMQ. In this scenario, all your OpenMQ objects are stored in a hidden file in a directory. This works fine in development or testing situations when your clients and broker all have access to the directory. It's obviously not an option for production, unless you do something painful like make tarballs of the JNDI filesystem directory and scp them around or export it over NFS. </div><div><br /></div><div>According to the documentation, the "right" way seems to be using an LDAP directory context to store the JNDI data (someone please correct me if I'm wrong about this.) In this case, you store your OpenMQ objects in LDAP. Each client then loads the appropriate connection factory, queues, etc from LDAP. This is nice in the sense that configuration data for the connections (broker lists, etc) is maintained outside of the clients. Presumably this allows you to add brokers to a cluster, etc without having to restart your JMS clients. 
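To make that LDAP-backed lookup concrete, a client-side sketch using the standard JNDI API might look like the following. The host, base DN and object name are entirely hypothetical, and the actual lookup (shown in comments) needs a live directory plus the OpenMQ client jar on the classpath:

```java
import java.util.Properties;
import javax.naming.Context;

public class OpenMqJndiSketch {

    // Build the JNDI environment a client would use to read OpenMQ
    // administered objects (connection factories, queues, topics) from LDAP.
    public static Properties ldapEnv() {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Hypothetical directory host and base DN:
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389/ou=mq,dc=example,dc=com");
        return env;
    }

    // Against a live directory, a client would then do something like:
    //   Context ctx = new InitialContext(ldapEnv());
    //   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("cn=myConnectionFactory");
    // Broker-list changes stored in LDAP get picked up on the next lookup
    // without touching client code.
}
```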
</div><div><br /></div><div>Despite the bit of complexity, this again was pretty straightforward. I just needed an LDAP directory to store the JNDI data in. It (briefly) occurred to me to use our Active Directory deployment. My assumption was, however, that this would involve modifying Active Directory's schema, which I've never done before and have heard nightmare stories about (it would also involve making changes to the production AD deployment - which is treated akin to a live hand grenade in the company.)</div><div><br /></div><div>I ultimately opted to use OpenLDAP. This was painless. The only thing I had to do was include the supplied java.schema in the slapd.conf and restart the service. A short while later I was able to get Mule and JMeter sending messages through it. The OpenMQ command line stuff worked great while doing some preliminary load testing. The queue metrics in particular were really nice - they let you watch queue statistics the same way you'd watch memory statistics with vmstat or disk statistics with iostat. I am pretty impressed so far...</div><div><br /></div></div>johndemichttp://www.blogger.com/profile/12041010690064212663noreply@blogger.com10