Tuesday, December 28, 2010

Changing servers for business rules deployment

In our migration to webMethods 8, we changed server names, since we have 7 and 8 running concurrently in dev.

In order to deploy our business rules to the Integration Server, I loaded them into Blaze 6.8, but I didn't see a way to change the server.  Searching the files in the repository while Blaze was closed revealed two places where the hostname and connection info are stored in plain text:
<Project Name>

  <instance ref='IS Host'>
   <instance ref='value'>
    <value>hostname</value>
   </instance>


<Project Name>.innovator_attbs
 managementProperty.IS\ Host=hostname
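Since the hostname sits in plain text in both files, the swap can be scripted rather than done by hand. A minimal sketch in Python (the file names and hostnames below are placeholders for your own project; as above, run it only while Blaze is closed):

```python
from pathlib import Path

def swap_host(path: Path, old_host: str, new_host: str) -> bool:
    """Replace old_host with new_host in a repository file, in place.

    Returns True if the file contained the old hostname and was updated.
    """
    text = path.read_text()
    if old_host not in text:
        return False
    path.write_text(text.replace(old_host, new_host))
    return True

# Placeholder file names; substitute your actual project name.
for name in ["MyProject", "MyProject.innovator_attbs"]:
    p = Path(name)
    if p.exists():
        changed = swap_host(p, "olddevserver", "newdevserver")
        print(f"{name}: {'updated' if changed else 'no match'}")
```

A plain search-and-replace is enough here because both files store the value verbatim; just be sure the old hostname string doesn't appear anywhere you don't want changed.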


That worked.  Then I had to regenerate my business object model from IS.  The object hadn't changed but would not sync up.  After regenerating, it deployed fine.

Migration to webMethods 8 - more info

Found this guide while searching around:

http://www.scribd.com/doc/35935457/8-x-Web-Methods-Upgrade-Guide

Handling CSV files

We were interfacing with an external vendor who had a RESTful CSV interface.  I had built a system for creating and parsing the XML files.  Then my co-worker reminded me about the built-in flat file handling routines.  Duh!

  1. Create a flat file dictionary with all the different CSV formats you will need.
  2. Create a flat file schema for each transaction (referencing the object in the dictionary).
  3. Create a document definition from the schema.
  4. Send the CSV data to pub.flatFile:convertToValues, passing the appropriate schema in the ffSchema entry on the service input.  (You can copy the schema from the navigator and paste it into the value for ffSchema.)
  5. Map ffValues to a document reference list of the document type you created in step 3.
It was easy and straightforward, and it handles all the parsing for you.
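For readers who haven't used the flat file services: the end result of convertToValues is essentially a list of records keyed by the field names you defined in the dictionary. A rough stand-in using Python's csv module shows the shape of the transformation (the field names and data are made up; this is not the webMethods API, just an illustration of what the parsing gives you):

```python
import csv
import io

def parse_csv(data: str, field_names: list) -> list:
    """Parse CSV text into a list of records keyed by field name,
    roughly the shape pub.flatFile:convertToValues returns in ffValues."""
    reader = csv.reader(io.StringIO(data))
    return [dict(zip(field_names, row)) for row in reader]

# Made-up vendor transaction format: order id, item, quantity.
data = "1001,WIDGET,5\n1002,GADGET,2\n"
records = parse_csv(data, ["orderId", "item", "qty"])
print(records[0])  # {'orderId': '1001', 'item': 'WIDGET', 'qty': '5'}
```

The flat file schema plays the role of the field-name list here, and the document reference list you map to in step 5 corresponds to the list of records.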

Monday, December 27, 2010

Migration to webMethods 8

Based on errors we were having in 7.1.3 (we think our install was corrupt), we finally decided to migrate to 8.

Here's what we've done so far.
  1. Installed and configured IS8 and MWS8.
  2. Installed JDBC drivers.
  3. Exported all the IS packages we wanted to keep from 7, placed them in replicate/inbound on IS 8, and installed them from there.
  4. Copied properties files from 7 to 8.
  5. Set up all existing scheduled tasks.
  6. Synced all the publishable documents with the WM8 Broker.
  7. Set the permissions for all our web service connectors to Anonymous (they seemed to revert to the default when installed from the inbound directory).
  8. Created all the web service consumer aliases.

This got IS up and running and it seems fine.  We still need to regression test it, but all the packages passed basic tests.


Then I moved on to deploying our process models from WM7 into WM8.  When I opened the projects, I had to change the JRE from 1.5.0 to 1.6.0, and I also had to fix a couple of library issues.  (We didn't want to try to migrate the process models directly into 8, as we have had numerous problems with them, so we wanted to start from scratch on them.)

I also created a new process.  This process is working fine in MWS8.  The ported processes are getting a null pointer exception on the task interface.  I'm still investigating that. 

But so far, the IS upgrade was pretty smooth.  The MWS upgrade is still taking some time.  More on that later.

Wednesday, December 22, 2010

Sending high priority emails

Use PSUtilities.email.smtp (instead of pub.client.smtp) and set priority = 1.

Tuesday, December 21, 2010

Hashtable/Hashmap

Wow, you would think this would score higher on Google searches, but after several pages of results I finally found PSUtilities.hashtable.  I was going to write something up, or try to find a clever way to build a document so it looked like a hashmap.  Glad I didn't waste that time, but it sure wasn't easy to find!

Friday, December 17, 2010

Changing connections for an existing JDBC adapter

So you have your adapter set up with all 40 fields mapped and the two-table join with a nice complex where clause.  Now you find out you need to use a different connection for whatever reason.  At first glance, it looks like you have to delete and recreate your adapter: there isn't anywhere in the adapter to change the connection.

Enter the WmART package.
For each adapter service, call pub.art.service:setAdapterServiceNodeConnection, passing in the serviceName (copy and paste it from the navigator) and the new connection alias name.  The adapter will then point to the new connection alias!

Transactions from within a Process Flow

Using JDBC Adapter 6.5, patch 23, on IS 7.1.3, we were having an issue where a flow service that inserts data into three tables (each table insert via a JDBC adapter service) does not roll back properly on failure when called from within a process flow.

When run from within Developer, it would work fine and would roll back as you would expect.  As soon as we called it on the "Save to Database" step in our process, it would no longer roll back.

Trying XA_TRANSACTION and LOCAL_TRANSACTION as the transaction type on the JDBC connection made no difference.
Basically, we were getting an auto-commit on each insert when the flow service was invoked from a process model.

In reviewing the logs, we saw that:

When the process starts, "Beginning transaction" is logged.
When the process finishes, "Committing transaction" is logged.
The process always commits, even when my IS flow service exits $flow with FAILURE.


We were not able to do explicit transaction management because the JDBC adapter services were using a JDBC adapter connection that was already in the parent transaction context.  I needed to create a new JDBC adapter connection (in IS) and change my adapter services to use the new connection.  Then I could explicitly call startTransaction and commitTransaction/rollbackTransaction in my flow service.
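The pattern we ended up with is the classic one: begin a transaction, do all three inserts, commit on success, and roll everything back on any failure. A stand-alone sketch of that behavior using Python's sqlite3 module (the table names and schema here are invented for illustration; in IS this is the startTransaction/commitTransaction/rollbackTransaction pair around the three adapter services, not this code):

```python
import sqlite3

def save_order(conn, order_id, item, qty):
    """Insert into three tables in a single transaction; roll back
    all three inserts if any one fails. This is the all-or-nothing
    behavior we lost when the flow was invoked from the process model."""
    try:
        conn.execute("INSERT INTO order_items (order_id, item, qty) VALUES (?, ?, ?)",
                     (order_id, item, qty))
        conn.execute("INSERT INTO audit (order_id) VALUES (?)", (order_id,))
        # Primary key on orders.id makes a duplicate id fail here,
        # after the first two inserts have already run.
        conn.execute("INSERT INTO orders (id) VALUES (?)", (order_id,))
        conn.commit()
        return True
    except sqlite3.Error:
        conn.rollback()
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE order_items (order_id INTEGER, item TEXT, qty INTEGER)")
conn.execute("CREATE TABLE audit (order_id INTEGER)")

print(save_order(conn, 1, "WIDGET", 5))  # True: all three rows committed
print(save_order(conn, 1, "GADGET", 2))  # False: duplicate id, first two inserts rolled back
```

The key point, and the one that bit us, is that rollback only works if the inserts share a transaction context you actually control; when the parent process owns the transaction, your rollback never happens.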

Hope this helps others.  Thanks to Brian for figuring this out!

Wednesday, December 8, 2010

Process step delay

Why oh why doesn't webMethods have a delay step?  Seems a common thing to want.

So far the best solution I've seen on the forums is this:
  1. For a delay between step 1 and step 2, set up a dummy receive step R.
  2. Set up a join between step 1 and R, with the join timeout set to the desired delay.
  3. Set the transition from the join to step 2 with a condition of timeout.
It puts a technical workaround inside your process (which is not pretty), but you don't want to call Thread.sleep, as that would tie up a thread while waiting.


Creating a sub-process can at least hide the complexity and technical details and make the process flow more readable.

Monday, December 6, 2010

KPIs not working? Things to check:

  1. Make sure the fields are logged.  In Developer, on the Properties tab for the process step, select the "Logged Fields" tab and choose which fields will be used for the KPI data and dimensions.
  2. Ensure the facts are being collected.  In your process database, you should have FACT_ tables with data being added.
  3. If the new KPI is going against existing data, it may take a while to calculate and display it.
  4. Check the event map for the process/KPI.  Go to Administration->Analytics->KPIs->BusinessData.  Below that, there should be an event map that matches your process, and it should be marked "deployed".  If you expand the (+) sign, you should see your fact listed (which should match what you set up in Designer) and, below that, any dimensions you have configured.


  5. Check if Monitoring->System-Wide->System Overview shows your event map. (You can search to narrow down to the name).
  6. Check if Monitoring->System-Wide->Problems has any entries which might explain your KPI problems.

Sorry I don't have the answers to every problem you might encounter here, but at least this might help you track down the root cause.  I've had to call support to get KPIs working both times I've tried.  The first time, I wasn't logging the fields (duh).  The second time, my event map was not correct: the fact name didn't match the fact being collected, so it wasn't resolving.