Friday, October 23, 2009

Mule POC Part III

Moving on to my next and last interface for my Mule POC, I chose our refund check processing system. Currently this system uses IBM Message Broker: a flow accepts a message from a queue, transforms the XML message, and inserts it into an Oracle database. Records accumulate until a timed event occurs, then another message broker flow reads the records from the database, generates an Excel-compatible CSV file, and emails it to our finance department.

I chose to start with the database-reading part of this process, as I was interested in seeing how easy or hard it would be to access a database and send an email attachment through Mule.

My ever so helpful Solutions Architect from MuleSoft, Puneet, was able to come by and give me a head start. Before he arrived I made sure I at least got the new Mule project started by myself, as I wanted to test my knowledge of what I had learned from my first POC interface with Mule.

The first 'mistake' I made in my solo effort was using New/Other/Mule Project to get started. Although this is the simplest and most intuitive way to begin, as Puneet pointed out, it is better to use Maven and a POM file to seed the project structure and then import the project into Eclipse.

I'm not that familiar with Maven but am learning to like it. We are more of an Ant and Ivy shop here, but so far I like working with Maven. I especially like that it combines build, dependency management, and repository functions in one package.

JDBC Connector and Data Source

The first order of business for me was to hook up a JDBC connector so I could read our refund check database table. My first try at this did not work: I used org.springframework.jdbc.datasource.DriverManagerDataSource, the data source referenced in the "Mule in Action" book. When Puneet arrived to assist me, he suggested using org.enhydra.jdbc.standard.StandardDataSource instead.

Of course you need to include schema references and locations in the Mule configuration and update your Maven POM file with dependencies. Below are the data source and JDBC connector:

<spring:bean id="dataSource"
    class="org.enhydra.jdbc.standard.StandardDataSource">
  <spring:property name="driverName" value="oracle.jdbc.driver.OracleDriver" />
  <spring:property name="url" value="jdbc:oracle:thin:@localhost:1521:TOOLSDB" />
  <spring:property name="user" value="user" />
  <spring:property name="password" value="****" />
</spring:bean>

<jdbc:connector name="jdbcConnector" dataSource-ref="dataSource">
  <jdbc:query key="refundQuery"
      value="SELECT lst_nm, fst_nm, addr_ln1_txt, addr_ln2_txt, addr_ln3_txt, addr_ln4_txt, city_nm, st_prov_cd, zip_cd, phn_nbr, rfnd_amt, rfnd_dt, cust_nbr, inv_nbr, rfnd_chk_nbr, orig_ord_nbr FROM YANTRA_GIDFINANCE1_TMP_T WHERE crt_dt < sysdate" />
</jdbc:connector>

It is pretty straightforward; I am using the Oracle thin JDBC driver jar (ojdbc14). I added the dependency to the Oracle driver in the project Maven POM file and installed the jar into the Maven repository:

mvn install:install-file -DgroupId=com.oracle -DartifactId=oracle.driver.OracleDriver -Dversion=14.0 -Dpackaging=jar -Dfile="D:\\workspace\\GIDFinancePOC\\lib\\ojdbc14-10.2.0.1.0.jar"

Within the JDBC connector is the query to be run. It simply returns the listed columns for all rows whose CRT_DT is less than sysdate. I tested this by first sending the output through a pass-through router to an outbound stdio endpoint. It worked fairly easily after I fixed my botched SQL syntax errors. I am such a SQL klutz.

CSV File Output

Encouraged by the relative ease of getting the refund check data rows, I jumped right into using a file outbound endpoint. Realizing we would need a transformer to get our database results into a CSV file, Puneet once again suggested the FlatPack package to help. He pointed me to the MapsToCSVTransformer and we plugged it into the outbound file connector.

After building and running this configuration, we quickly discovered this didn't fly. Our JDBC inbound endpoint was generating a Map for each row when the MapsToCSVTransformer was expecting a List. Strange, given that the name of the transformer is MapsToCSVTransformer.

The MapsToCSVTransformer accepts a mapping file that configures the columns to generate in the CSV, and a delimiter:

<custom-transformer name="MapsToCSVTransformer"
    class="com.mulesource.mule.transport.jdbc.transformers.MapsToCSVTransformer">
  <spring:property name="mappingFile" value="map.refundcheck.xml"/>
  <spring:property name="delimiter" value=","/>
</custom-transformer>

The map file details are here:

<?xml version="1.0"?>
<!-- DTD can be pulled from the Jar or over the web-->
<!DOCTYPE PZMAP SYSTEM "flatpack.dtd" >
<!--<!DOCTYPE PZMAP SYSTEM "http://flatpack.sourceforge.net/flatpack.dtd" >-->
<PZMAP>
<COLUMN name="lst_nm" />
<COLUMN name="fst_nm" />
<COLUMN name="addr_ln1_txt" />
<COLUMN name="addr_ln2_txt" />
<COLUMN name="addr_ln3_txt" />
<COLUMN name="addr_ln4_txt" />
<COLUMN name="city_nm" />
<COLUMN name="st_prov_cd" />
<COLUMN name="zip_cd" />
<COLUMN name="phn_nbr" />
<COLUMN name="rfnd_amt" />
<COLUMN name="rfnd_dt" />
<COLUMN name="cust_nbr" />
<COLUMN name="inv_nbr" />
<COLUMN name="rfnd_chk_nbr" />
<COLUMN name="orig_ord_nbr" />
</PZMAP>
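To make the mapping concrete, here is a minimal standalone sketch (plain Java, not FlatPack's actual implementation) of how one row Map could be flattened into a CSV line using the column order from a map file like the one above. The class and method names are my own, and the column list is abbreviated for illustration:

```java
import java.util.Map;
import java.util.StringJoiner;

public class CsvLineSketch {
    // Column order as it would be read from the PZMAP mapping file
    // (abbreviated subset of the columns above).
    static final String[] COLUMNS = {
        "lst_nm", "fst_nm", "city_nm", "st_prov_cd", "zip_cd", "rfnd_amt"
    };

    // Join the row's values in mapping-file order using the configured
    // delimiter; columns missing from the row become empty fields.
    static String toCsvLine(Map<String, Object> row, String delimiter) {
        StringJoiner line = new StringJoiner(delimiter);
        for (String column : COLUMNS) {
            Object value = row.get(column);
            line.add(value == null ? "" : value.toString());
        }
        return line.toString();
    }
}
```

This is just the idea: the mapping file supplies the column order, the delimiter property supplies the separator, and the transformer walks each row Map in that order.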

To fix the Map-to-List issue, Puneet quickly created a MapToList custom transformer and tried it out. Well, it worked, kind of, but now we were getting a CSV file for each row! As there were well over 500 rows, I quickly killed the running Mule server. I made an attempt to add an outputAppend option to the file connector, but this didn't work either. Back to the drawing board.
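For reference, the core of such a Map-to-List shim is tiny. Here is a framework-free sketch of the wrapping logic only (the real transformer would extend Mule's transformer base class; the class and method names here are hypothetical, my own for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MapToListSketch {
    // Wrap a single JDBC row (a Map) in a one-element List, which is the
    // payload shape the MapsToCSVTransformer expects. If the payload is
    // already a List, pass it through unchanged.
    @SuppressWarnings("unchecked")
    static List<Map<String, Object>> transform(Object payload) {
        if (payload instanceof List) {
            return (List<Map<String, Object>>) payload;
        }
        List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
        rows.add((Map<String, Object>) payload);
        return rows;
    }
}
```

Note that a shim like this fixes the payload type for one message, but since the JDBC endpoint emits one message per row, each message still flows to the file endpoint independently, which is exactly the one-file-per-row behavior we saw.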

My next post will hopefully resolve this issue so that we get one CSV file containing all the rows returned from the JDBC connector. Stay tuned.

Here are the pass-through router and file connector used in this project:

<file:connector name="fileConnector" outputAppend="true" />

<pass-through-router>
  <file:outbound-endpoint connector-ref="fileConnector" path="data"
      outputPattern="refundcheck-#[UUID].csv"
      transformer-refs="MapToList MapsToCSVTransformer" synchronous="true"/>
</pass-through-router>

Tuesday, October 13, 2009

How to Disable Proxy Lookup with Subversion 1.6.5 in Windows

Recently our development Subversion server was updated, and my happy svn existence was interrupted by a mysterious lack of access and a smug, vague response to svn commands:

svn: OPTIONS of 'http://svn.ourcompany.com/svn/integration/trunk': 200 OK (http://svn.ourcompany.com)

By adding the following lines to the C:\\Documents and Settings\\{your-name}\\Application Data\\Subversion\\servers file, I was able to fix the problem:

[global]
http-proxy-exceptions = svn.ourcompany.com
http-library=neon
#neon-debug-mask=138


My network connection exceptions (Control Panel/Internet Options/Connections/LAN Settings/Advanced/Exceptions) were ignored by SVN.

By the way, that last line, neon-debug-mask=138, was very helpful in getting to the root cause: once uncommented, it displays a trace of what SVN's neon HTTP library is doing while servicing a request. What also helped was Wireshark, which lets you capture your network traffic and filter message content based on source and destination addresses and protocol types.