Wednesday, November 6, 2013

Passing XML To An XML Datatype Stored Procedure Parameter Using WCF-SQL Adapter

Sometimes a collection or complex data structure needs to be delivered to a SQL database for storage.  Strongly typed stored procedure schemas do not make it apparent what is required to pass XML data type parameter values (they are presented as xs:string types).

What you need to do is pass the XML parameter content as a CDATA section.  The CDATA section is stripped by the adapter/binding and inserted as required.

Your BizTalk map needs to compose the request document using some XSL-T like this (where ContentParam is the parameter name and //SrcContent is a parent element containing a well-formed XML structure):

<ContentParam xmlns="">
    <xsl:variable name="CDATABegin" select="string('&lt;![CDATA[')" />
    <xsl:variable name="CDATAEnd" select="string(']]&gt;')" />
    <xsl:value-of select="$CDATABegin" disable-output-escaping="yes"/>
    <xsl:copy-of select="//SrcContent/@*" />
    <xsl:copy-of select="//SrcContent/*" />
    <xsl:value-of select="$CDATAEnd" disable-output-escaping="yes"/>
</ContentParam>
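For example, given a source message whose //SrcContent wraps an Item element (names illustrative), the composed parameter carries the XML as character data:

```xml
<ContentParam xmlns=""><![CDATA[<Item id="1"><Name>Widget</Name></Item>]]></ContentParam>
```

The adapter strips the CDATA wrapper and passes the inner markup through to the xml-typed parameter.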

Tuesday, February 12, 2013

"Error saving map. Stored procedure returned non-zero result." error message when you deploy the BizTalk Server 2009 applications in BizTalk Server 2009 Administration Console

A KB article from Microsoft describes this issue in BizTalk 2010.

However, somewhere between BizTalk 2009's CU3 and CU6 releases this same issue was introduced to BizTalk 2009.  At the time of writing there is no known fix from Microsoft, but you can temporarily roll back the BizTalkMgmt.dbo.dpl_SaveMap sproc so you can continue importing applications.

To roll back the sproc to its CU3 state, remove the document-spec existence check below (the 'if not (exists...)' block) that was introduced by a CU:

ALTER PROCEDURE [dbo].[dpl_SaveMap]
    @ArtifactId int,
    @AssemblyId int,
    @IndocDocSpecName nvarchar (256) ,
    @OutdocDocSpecName nvarchar (256) ,
    @ArtifactXml ntext
AS
    -- Remove this block (introduced by the CU) to restore the CU3 behaviour:
    if not (exists(select * from bt_DocumentSpec where docspec_name = @IndocDocSpecName) and
        exists(select * from bt_DocumentSpec where docspec_name = @OutdocDocSpecName))
        return -1 --Fail if in and out schemas of a map are not present

    DECLARE @shareid  uniqueidentifier
    SELECT @shareid = newid()
    INSERT INTO bt_XMLShare( id, target_namespace, active, content )
        VALUES( @shareid, N'', 1, @ArtifactXml )
    INSERT INTO bt_MapSpec( -- column list elided in the original post
        VALUES( @ArtifactId, @AssemblyId, @shareid, @IndocDocSpecName, @OutdocDocSpecName )
    RETURN 0

Monday, November 22, 2010

WCF SQL Adapter Composite Operation Timeout

Microsoft.ServiceModel.Channels.Common.InvalidUriException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. ---> System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.

This error occurs when using a WCF SQL adapter composite operation with more operations than maxConnectionPoolSize where each operation returns a record set. This is independent of whether useAmbientTransaction is enabled or not. The result set could be as simple as @@ROWCOUNT.

It seems there may be an IDataReader created for each result set. Each result stream retains an open connection until all the requests have been executed. If there are more requests than there are connections available in the connection pool the process will exhaust all available connections in the pool and subsequently fall over.

To fix the issue, redesign the solution or remove the result set from the equation.

[Update] A colleague of mine reworked his sproc to use output parameters to allow a simple response to be returned and avoided this timeout issue.
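As a sketch of that rework (the procedure, table and parameter names are hypothetical), an output parameter replaces the trailing result set:

```sql
CREATE PROCEDURE dbo.usp_UpdateStatus
    @MessageId nvarchar(36),
    @RowsAffected int OUTPUT
AS
BEGIN
    UPDATE dbo.MessageStatus SET Status = 'Sent' WHERE MessageId = @MessageId;
    -- Return the count via the output parameter instead of SELECT @@ROWCOUNT,
    -- so no result-set reader (and its pooled connection) is held open per operation
    SET @RowsAffected = @@ROWCOUNT;
END
```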

Tuesday, November 9, 2010

DNS Aliases And DisableLoopbackCheck

It makes sense to employ CNAME records when configuring a BizTalk infrastructure. That is, SQL Server and Enterprise Single Sign-On Master Secret Server (ENTSSO MSS) DNS aliases. In the typical High Availability (HA) model SQL Server and the ENTSSO MSS are clustered, and a Network Name and IP Address cluster resource exist for both of these entities. Some smaller implementations, transitioning or lower-tier environments may not be HA but still take advantage of DNS aliases. For these non-HA environments the DisableLoopbackCheck registry setting will need to be enabled to allow the ENTSSO MSS service firstly to start, and secondly to access and return the master secret to client services.

A few ways to determine if this setting applies to an implementation are:
  1. SQL Server and ENTSSO MSS are not clustered and hosted on the same Windows 2003 SP1 (or greater) server;

  2. The SQL environment has not been tuned as per the BizTalk database optimisation guidelines for Analysis Services;

  3. You cannot open a session to the SQL instance through SQL Management Studio using the server's FQDN or DNS alias from a SQL server RDP or console session due to a 'login with user [blank]' error;

  4. When executing ssoconfig -setdb SSODB on the master secret server you get an error that reads something like 'SQL Server instance not found';
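Where the checks above indicate the setting applies, DisableLoopbackCheck is a DWORD value under the Lsa registry key (a restart of the ENTSSO service, or the server, may be required afterwards):

```
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
```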

Tuesday, November 2, 2010

Enlist In A Send Pipeline Transaction From A Custom Pipeline Component

A recent messaging project I worked on required executing a database update when a send port succeeded but not when it failed. It has been shown that one can leverage a receive pipeline's underlying transaction but not a send pipeline's. Enter IPersistQueryable.

By implementing IPersistQueryable your unit of work, for example a transaction status update, can be added as a transactional custom BAM event. The EventStream obtained from the pipeline context (via GetEventStream) exposes the StoreCustomEvent method that takes your IPersistQueryable implementation.

Essentially, this method of transaction enlistment assures your application's 'intention' to update an integration status, for example, when the send port succeeds. If the send port fails your IPersistQueryable instance will be rolled back out of the TDDS queue.

This solution may not suffice depending on your requirements but it means you can execute a unit of work (out-of-process too) only when the send port succeeds. Plus, in the case of the work unit failing you have a record of the exception in the BAMPrimaryImport.dbo.TDDS_FailedTrackingData table.

Should your IPersistQueryable implementation fault when executed by the TDDS a retry will be attempted. I think there are 3 retries. Either way, the piece of work being performed should be idempotent because you cannot enlist in a further transaction within your IPersistQueryable implementation. That is, the unit of work must be able to be executed multiple times resulting in the same final state.

Here's an example of a pipeline component loading the TDDS event and the event itself.

public class StatusUpdater : IComponent, IComponentUI, IBaseComponent
{
    #region IComponent Members

    IBaseMessage IComponent.Execute(IPipelineContext context, IBaseMessage message)
    {
        // BT_SYS_PROPS_NS is the BizTalk system-properties namespace constant
        string messageId = (string)message.Context.Read("InterchangeID", BT_SYS_PROPS_NS);
        IPersistQueryable updateIntention = new MarkMessageAsSent(messageId);
        EventStream tdds = context.GetEventStream();
        tdds.StoreCustomEvent(updateIntention);
        return message;
    }

    #endregion

    // IComponentUI and IBaseComponent members elided
}




public class MarkMessageAsSent : IPersistQueryable
{
    private readonly string _messageId;

    public MarkMessageAsSent(string messageId)
    {
        _messageId = messageId;
    }

    void IPersistQueryable.AddToBatch(SqlConnection connection, IBatch b)
    {
    }

    Type IPersistQueryable.BatchType
    {
        get { return typeof(MarkMessageAsSent); }
    }

    private BAMEventsRecord _parentRecord;
    BAMEventsRecord IPersistQueryable.ParentRecord
    {
        get { return _parentRecord; }
        set { _parentRecord = value; }
    }

    void IPersistQueryable.PersistQueryable(SqlConnection conn, SqlTransaction xact, int timeout)
    {
        using (SqlConnection nonDtaConnection = new SqlConnection(NON_DTA_CONNECTION_STRING))
        using (SqlCommand nonDtaCommand = nonDtaConnection.CreateCommand())
        {
            nonDtaCommand.CommandText = "..."; // sproc name elided in the original post
            nonDtaCommand.Parameters.AddWithValue("@messageId", _messageId);
            nonDtaCommand.CommandType = CommandType.StoredProcedure;
            nonDtaConnection.Open();
            nonDtaCommand.ExecuteNonQuery();
        }
    }
}

Wednesday, May 19, 2010

AppDomains, ESB Faults and Context Properties

Last week threw up an interesting problem in the fault message creation module of the BizTalk ESB. As a result, our fault handler orchestration could not perform callbacks, which in turn resulted in orphaned subscriptions. For more detail on the fault handler and subscription model refer to this previous post about the Scatter Gather pattern.

Here's some background information to provide you with some context (pun intended) before explaining the underlying problem and the final solution.

BizTalk Application Domain Unload Thresholds

BizTalk host instance AppDomain creation parameters can be specified in BTSNTSvc.exe.config as described here. For the purposes of this post the important things to note in that MSDN article are the assembly unload thresholds named SecondsIdleBeforeShutdown and SecondsEmptyBeforeShutdown. Setting each to -1 will force the AppDomain to never unload assemblies during idle periods (where there are only dehydrated or suspended instances hosted) or periods of inactivity (no instances hosted or interchanges processed at all). However, the defaults for those settings are 20 minutes and 30 minutes, respectively. So, if you have an orchestration dehydrated for 21 minutes and no other interchange activity since the dehydration the AppDomain assemblies will be unloaded. This shutdown behaviour undermines some solution architecture decisions like localised caching, for example.
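The thresholds live in the xlangs section of the host instance config; roughly like this (attribute placement per the MSDN article — verify the exact schema against your BizTalk version, and note -1 disables the shutdown):

```xml
<xlangs>
  <Configuration>
    <AppDomains AssembliesPerDomain="10">
      <!-- -1 = never unload; defaults are 1200s idle / 1800s empty -->
      <DefaultSpec SecondsIdleBeforeShutdown="-1" SecondsEmptyBeforeShutdown="-1" />
    </AppDomains>
  </Configuration>
</xlangs>
```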

ESB Fault Message Creation

The ESB Guidance framework provides a set of exception handling helpers that your ESB services use to create fault messages. Messages can be attached to the fault message too. When doing so the ESB will serialise the message and the associated context properties into the fault's property bag. Unfortunately, the BizTalk message object model lets the ESB down at this point as explained below.

Context Properties

Message context properties cannot be iterated over but must be 'fished for' through the API. To fish for context properties you need a .Net type as bait and the most accessible context property type source is the assembly collection already loaded into the current AppDomain...

Note that by iterating over the XLANG segment collection you can actually get access to the context property collection of an XMessage. However, the context properties are in an XmlQName collection. This is insufficient access considering BizTalk only affords an API that takes .Net context property types to reconstitute message context and the XmlQName type doesn't contain fully qualified type names.
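Fishing, then, looks like this: you must already hold a reference to the concrete property type to read its value (BTS.InterchangeID shown; any property schema type works the same way):

```csharp
// Reading a context property from an XLANGMessage requires the .NET
// property type as 'bait' - there is no way to enumerate name/value pairs
object interchangeId = msg.GetPropertyValue(typeof(BTS.InterchangeID));
```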

The Problem

When the following criteria are met some context properties will not be serialised with the faulted message and a resubmission of the message may not be routeable:
  1. Orchestration instance X appends a message to a fault message and it does not have a reference to the property schema assembly of a context property associated with the faulted message; and

  2. Orchestration instance X was:
    • Dehydrated for a period longer than SecondsIdleBeforeShutdown and there was no further activity after the dehydration before it faulted (in a single server BizTalk group); or

    • Rehydrated to a load balanced BizTalk server host instance in the group that had recently unloaded its AppDomain, had recently been restarted and/or has never loaded the required context property schema;
The criteria above may seem rare but can happen if, for example, a timeout occurs waiting for a long-lived transaction to complete.

The Solution

The solution that was decided upon was to simply reload the AppDomain with all the registered property schema types just before the AppDomain was queried by the ESB. This ensures that all possible context property types could be fished for by the fault handler.

See the code snippet below for the LoadPropertySchemaAssembliesIntoAppDomain method call that was added to the fault helper.

namespace Microsoft.Practices.ESB.ExceptionHandling
{
    /// <summary>
    /// Primary helper class for the exception management system
    /// </summary>
    public sealed class ExceptionMgmt
    {
        private static List<Type> _schemaTypes;

        private static void CopyMsgToPartProperties(XLANGMessage msg, XLANGPart destMsg)
        {
            // Load all the property schema types in case the host instance has since restarted
            // or we have rehydrated to an AppDomain that hasn't any reference to one or more
            // context property types required
            if (_schemaTypes == null || _schemaTypes.Count == 0)
            {
                LoadPropertySchemaAssembliesIntoAppDomain();
            }

            // Context Properties: Get the assemblies for property schema types and query the
            // message for those context properties to serialise
            // (remainder elided in the original post)
        }

        /// <summary>
        /// Caches a collection of property schemas obtained using BizTalk Explorer OM.  The idea
        /// behind doing this is to persist all context property schema assemblies in the AppDomain
        /// so they can be queried and matched to message properties of faulted message instances
        /// in <see cref="CopyMsgToPartProperties"/>.
        /// </summary>
        private static void LoadPropertySchemaAssembliesIntoAppDomain()
        {
            if (_schemaTypes == null || _schemaTypes.Count == 0)
            {
                try
                {
                    List<Type> schemaTypes = new List<Type>();
                    using (BizTalkQuery operationsQuery = new BizTalkQuery())
                    {
                        bool propertySchemasOnly = true;
                        Collection<BTSchema> propertySchemas = operationsQuery.Schemas(propertySchemasOnly);
                        foreach (BTSchema propertySchema in propertySchemas)
                        {
                            Trace.TraceInformation("{0}: {1}", MethodInfo.GetCurrentMethod(), propertySchema.AssemblyQualifiedName);
                            Type propertySchemaType = Type.GetType(propertySchema.AssemblyQualifiedName, false, true);
                            schemaTypes.Add(propertySchemaType);
                        }
                    }
                    _schemaTypes = new List<Type>(schemaTypes);
                }
                catch (System.Exception ex)
                {
                    EventLogger.Write(MethodInfo.GetCurrentMethod(), ex);
                }
            }
        }
    }
}

Note that the BizTalkOperations service was referenced to query the group's property schema collection. The BizTalkOperations.BizTalkQuery constructor actually instantiates Microsoft.BizTalk.Operations.BizTalkOperations, which in turn calls a set of BizTalkMgmt database stored procedures! BizTalkOperations.BizTalkQuery had to be changed to lazy load the BizTalkOperations helper instead, otherwise the host instance needed BizTalkOperator rights, which didn't seem kosher.

If you have come across this issue yourself and have solved it another way I'd be interested in hearing your solution.

Tuesday, April 20, 2010

WCF LOB Adapter Ambient Transaction Suppression

The WCF LOB Adapter SDK literature explains some considerations when developing your own LOB adapters. One of these considerations is performance when invoked by BizTalk, because loading the channel and the subsequent calls to LOB adapters are done within a TransactionScope.

Like the other DAL LOB adapters, for example, the WCF SQL Adapter, it is likely you will want to execute some operations outside of the ambient transaction. This will improve performance and allow you to access target systems that do not support distributed transactions.

To suppress the ambient transaction you will need to:
  1. Implement a UseAmbientTransaction binding property

  2. Wrap calls to target systems within a new transaction scope
However, the trick to instantiating the right transaction scope is in using one of the overloaded constructors that takes a TransactionScopeOption. The option you will need to suppress the ambient transaction is, you guessed it, 'Suppress'. The option to enlist in the ambient transaction is simply the 'Required' enum.

bool useAmbientXact = this.Connection.ConnectionFactory.Adapter.UseAmbientTransaction;
TransactionScopeOption xactOption = (useAmbientXact ? TransactionScopeOption.Required : TransactionScopeOption.Suppress);
using (TransactionScope xactCoOrdinator = new TransactionScope(xactOption))
{
    // Execute target system commands here...
    xactCoOrdinator.Complete();
}

Thursday, March 11, 2010

BizTalk ESB Scatter Gather Framework

I have recently been working on a BizTalk ESB project which uses the scatter gather pattern heavily. One of the key requirements was to execute itineraries in parallel. This thread is an overview of the framework written to fit the requirement.

The scatter gather model chosen was taken from the ESB sample but extended to allow advanced correlation, abstraction of the broker and dispatcher components and fault management.

Walking the model from left to right, the following components are invoked:

Abstract Broker

This component is abstract in that more than one implementation can fill the broker role. The broker role being the primary distribution and subsequent aggregation of messages. Examples of brokers that are useful are:
  1. Debatching - Use a pipeline to debatch individual messages within an envelope and distribute to a dispatcher

  2. Multi-endpoint - Loop through a resolver collection distributing one message to multiple different dispatchers

  3. Sequential - A debatching broker that waits for a gather response before scattering the next message
Each broker adds its own features to the common steps/shapes, or re-organises them, to implement the behaviour required.

Dispatch Correlator

The correlator is the glue between the broker and dispatcher. It does the following:
  1. Rewrap the dispatch correlation request payload in a dispatch request

  2. Write the following into the message context of the dispatch request: Thread ID, Dispatch Type and an IsInScatterGather boolean

  3. Relay the message to the message box after initialising a correlation set on the thread ID and dispatch type

  4. Relay the response received from the dispatcher, by following the correlation set, back to the broker
The message context values written in step 2 are used by the Fault Manager, explained later.

Abstract Dispatcher

Dispatchers are tightly coupled to an endpoint but loosely coupled to the broker. An implementation must always respond to the dispatch correlator to prevent orphaned processes. Examples of dispatchers that are useful:
  1. Itinerary - Uses the dispatch address to locate the itinerary to stamp to the message. An Itinerary Callback orchestration is paired with this dispatcher to callback into the dispatch correlator and must be configured as the final step in the itinerary that is spawned

  2. Loopback - Simply returns the message received. Used to allow the broker to relay the request in the aggregated response
Dispatchers must ensure that exceptions are handled by either responding to the dispatch correlator directly or indirectly by publishing a fault message with a copy of the context properties from the inbound message.

Fault Manager

The fault manager subscribes to routed failed messages (FMR) and checks the message context for the IsInScatterGather boolean set by the dispatch correlator. If the context property is available and it's true then a dispatch correlation response is constructed to indicate a fault has occurred before being written to the message box.
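The fault manager's activation subscription can be sketched as a filter like this (the custom property schema namespace is illustrative; ErrorReport.ErrorType is the standard failed-message routing property):

```
ErrorReport.ErrorType == "FailedMessage"
And MyCompany.Esb.PropertySchemas.IsInScatterGather == true
```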


There are a few important points to note about the model above:
  1. No delays are implemented in the framework to 'protect' it from orphaning threads

  2. Exception handling through the stack must be bullet-proof, period

  3. Contrary to typical ESB service development, strongly typed messaging between the components in the framework is key
The framework is extremely flexible. When using the itinerary dispatcher the ESB is transformed from a serial to a parallel bus. Also being explored at the moment is using the framework as a platform for the resequencer pattern.

Tuesday, August 18, 2009

Upgrading A Clustered ENTSSO Master Secret Server

It is best practice to cluster the Enterprise Single Sign-On Master Secret Server (ENTSSO MSS) on the SQL Server tier to provide high availability.

There is plenty of information available to guide us in setting up a clustered ENTSSO MSS. However, there is no documentation out there on upgrading a clustered ENTSSO MSS, for example, when moving to BT2K6 R2 an upgrade of SSO is recommended. So, how do you upgrade a clustered ENTSSO MSS?

Quite simply, to realise a successful upgrade of the ENTSSO MSS service you must uncluster the ENTSSO MSS first. The following steps provide a little more guidance:

  1. Prepare the environment

    1. Silence the BizTalk environment by stopping

      • Host instances

      • Rule engine services

      • ENTSSO slaves of the ENTSSO MSS

      • and disabling BizTalk SQL Agent maintenance jobs

    2. Backup the Master Secret

    3. Backup the SSO database

      • To retain a consistent state across all BizTalk databases it is advisable to use the Backup BizTalk Server (BizTalkMgmtDb) SQL Agent job provided. You can force a full backup of all BizTalk databases by simply updating the record in the BizTalkMgmtDb.dbo.adm_ForceFullBackup table to 'True'. Then, execute the Backup BizTalk Server (BizTalkMgmtDb) SQL Agent job manually

  2. Uncluster ENTSSO MSS

    1. Offline the ENTSSO MSS clustered resource

    2. Make the primary cluster node the MSS instead of the cluster by moving the master secret server

    3. Restart ENTSSO on the primary node through the SCM not the cluster

    4. Point the secondary cluster node ENTSSO services at the primary node for the master secret (ie) demote the secondary cluster nodes to slaves, and restart the services using the SCM

  3. Upgrade the primary and secondary cluster node ENTSSO services individually

  4. Recluster the ENTSSO MSS

    1. Make the primary cluster node reference the MSS cluster name by moving the master secret server

    2. Bring the ENTSSO cluster resource online on the primary cluster node

    3. Promote the other cluster node ENTSSO services to participate as master secret servers using ssomanage -serverall clustername

    4. Test the ENTSSO upgrade by failing over to each node and browsing the BizTalk adapter properties via the BizTalk server Administration Console (for example)
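Two of the steps above map to ENTSSO command lines; roughly (the backup file path and cluster network name are placeholders):

```
ssoconfig -backupSecret C:\Backups\SSOMasterSecret.bak
ssomanage -serverall <ENTSSO MSS cluster network name>
```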
That's all there is to it ;o)

Wednesday, June 24, 2009

Large Message Persistence In BAM

BAM is all about interchange transparency. Whether you're looking for operational instrumentation, business process monitoring, exception logging, messaging trend analysis, data aggregation or a non-repudiation database, BAM, in my opinion, should be your weapon of choice.

The EventStream API is the most powerful of all the interfaces into BAM. The buffered, direct, orchestration and pipeline (messaging) variants offer differing trade-offs between throughput, race-condition safety and transactional integrity.

Accommodating large message persistence to a BAM data store is possible, but with restrictions.

To explain, EventStream.AddReference has an overload which facilitates large message persistence. However, it accepts a System.String parameter for 'long reference data'. BAM documentation reads that this method has a reference data limit of 512KB but the underlying SQL data type is national text (ntext). One would assume that the 512KB limitation is simply advisory because larger messages can be written through this interface.

It is appropriate now to quantify 'large messages'. A governed SOA environment will limit or eliminate interchange of messages > 50MB through BizTalk by either choosing another technology or provisioning BizTalk endpoint contracts to optimise efficiency. 95% of large messages can be transacted through EventStream.AddReference but the other 5% of large messages can often cause system resource issues. This thread focuses on that 5%.

A common response by solution teams to high system resource consumption by their product is to add more processors or RAM to the BizTalk server(s). This is justified in some cases, however, frequently this is not appreciated by Operations or the business and may actually require more than just simply adding hardware, for example, BizTalk licensing extensions or an OS upgrade to accommodate more than 4GB RAM.

.Net 2.0 to the rescue, the System.IO.Compression namespace in particular. The GZipStream class to be more specific.

BizTalk is inherently stream-based. Seeing that AddReference only accepts strings it is important to also remember the 'acquire late, release early' programming tenet to reduce the footprint of the message persistence process. XLangPart.RetrieveAs(typeof(Stream)) is a powerful method call and leveraging this to apply message compression is the crux of being able to successfully write large messages through the BAM API.

When retrieving the message from BAM the DocumentUrl reference type can be used to include a compression state indicator so your custom message viewer can easily identify how to render the message.

BAMManagementService.GetReferences can be called to retrieve the stored message from the BAMPrimaryImport database. Unfortunately, to leverage this webmethod you will need to encode the compressed stream before writing it into BAM otherwise the SOAP request will fail with a hexadecimal character fault when the response is constructed. Base64 encoding and decoding is fast but expensive in that it adds 33% expansion to the compressed byte stream. A message size compression threshold might be useful here to save unnecessary expansion and processing overhead for smaller messages. To reduce strain on the BAM portal when transmitting large messages back to the client one can use a file download threshold to decide whether to stream the decompressed message.

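As a minimal sketch of the approach (the activity name, reference names and AddReference overload shape are assumptions, not the original solution's code):

```csharp
// Acquire the part stream late, compress it, then Base64-encode so the
// BAMManagementService SOAP response survives the round trip
using (Stream source = (Stream)part.RetrieveAs(typeof(Stream)))
using (MemoryStream compressed = new MemoryStream())
{
    using (GZipStream zip = new GZipStream(compressed, CompressionMode.Compress, true))
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            zip.Write(buffer, 0, read);
    }

    // 'GZip+Base64' doubles as the compression state indicator for the viewer
    eventStream.AddReference("LargeMessageActivity", activityId, "DocumentUrl",
        "Message", "GZip+Base64", Convert.ToBase64String(compressed.ToArray()));
}
```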


Thursday, April 30, 2009

Debugging BizTalk IL With Reflector

I've been using Reflector quite a bit lately to analyse BizTalk artifacts. Have also been talking to a colleague recently about IL debugging to work out how BizTalk works under the covers.

Recently I found Deblector which allows IL debugging in Reflector. This is a really cool combo. Jon Flanders wrote about debugging orchestration IL back in '04 but Deblector reveals the IL of the framework and dependencies too without any prep.

BizTalk's Orchestration Debugger is a nice tool and great 95% of the time. If you need something more substantial I suggest you have a look at the Deblector add-in.

Below is some instruction on setting up Deblector and an example of finding some message content between a transform shape and an assign shape (something that cannot be done in the BizTalk Orchestration Debugger).

Setup Deblector

  1. Get the latest version of Reflector

  2. Download and unzip DeblectorAddin-1.01-Alpha

  3. In Reflector go to View->Options->Browser

  4. Select 'Automatically Resolve Assemblies' and Visibility = 'All Items'

  5. Then go to View->Options->Add-Ins

  6. Click Add and choose DeblectorAddIn.dll from the unzipped Deblector folder

  7. Choose Tools->Deblector to start the add-in

Start Debugging

  1. Load the assembly to debug into Reflector

  2. Choose the 'Attach to process' toolbar button and choose the right BTSNTSvc.exe process

  3. Select the orchestration segment (aka scope) to debug and the line of IL to break on. Not all the IL can be traversed using the debugger so you may have to set a breakpoint block until you get used to the UX. I have found that the value set operation (stfld) and target instruction transfers (brfalse, brfalse_s, brtrue, brtrue_s) seem to be good breakpoints. Note that a breakpoint can only be set while the process is paused

  4. Click Run and send your message to start the orchestration

  5. When the breakpoint is hit Reflector will highlight the breakpoint and you can then query the locals, use the Shell commands and step through the code base. You'll also see that the orchestration instance is marked as Active in the BizTalk admin console

  6. To end the debug session click the Deblector Stop button but note that the host instance you attached to will be stopped too

Transformation Debug Example

In this example Message1 is transformed to Message2 and then a distinguished property in Message2 is assigned a value.

When the breakpoint is hit the Auto window in the debugger can be used to view each of the variables in scope.

If the property assignment was failing (for example) then the Message2 read buffer could be queried to determine Message2's content before the assignment. The read buffer is held as a byte array so this needs to be printed through the debugger Shell window and then converted to a string to view the Xml. The print statement in the case above follows:

print local_3.__Message_2._parts._items[0].__valueToken._value._rewriter._state._readCache._source._streamFactory._sourceStream._buffer

The printed text can then be converted using some code similar to that listed here:

// debuggerText holds the byte array dump copied from the Shell window
List<byte> bytes = new List<byte>();
List<string> strings = new List<string>();
strings.AddRange(debuggerText.Split(new string[] {"\r\n"}, StringSplitOptions.RemoveEmptyEntries));
foreach (string item in strings)
{
    // each line holds space-separated byte values
    foreach (string value in item.Split(new char[] {' '}, StringSplitOptions.RemoveEmptyEntries))
        bytes.Add(byte.Parse(value));
}
string bytesAsString = Encoding.ASCII.GetString(bytes.ToArray());
textBox2.Text = bytesAsString;

Saturday, April 18, 2009

Mocking & Stubbing BizTalk Map Extension Objects

If you're a red/green TDD developer, BizTalk can be frustrating. There are some cool frameworks available but nothing beats the ability to test a unit of work in isolation without having to plumb-up peripheral services. BT2K9 goes some way to appeasing the masses but testing maps in isolation is not afforded out of the box.

Why would you want to isolate BizTalk maps from extension objects? Answer: because the time spent in setting up and tearing down external resources (eg) databases or config stores, to ensure your tests pass every time can be prohibitive and test results may vary between environments. This can be read as 'a waste of time' or 'burning cash' for comparatively little reward.

So, we have to work within the framework of the BizTalk object model. No problem, let's do it...

The trick in creating unit testable maps with mock extension objects is in leveraging the TransformBase object model, in particular the XsltArgumentListContent property. Parameters and objects can be passed into transformations from .Net, and mocking extension objects stems from this affordance.

To explain further, the construct reference below will be familiar:

// Execute the map
mapInstance.Transform.Transform(xpath, mapInstance.TransformArgs, mapResultWriter);

What the test (or framework) needs to do is new up an XsltArgumentList object that hosts your mocks and stubs and use that instance in place of the instance from TransformBase.TransformArgs.

Apart from the fact that you need to build an XsltArgumentList object you need a facade to host your mock object because the dynamic mock cannot, obviously, be late bound by the transform at runtime. Note that a concrete implementation (ie) the facade, is required because the transform tries to resolve the class name of the extension object at runtime. As such, the facade must relay the request to this mock verbatim. Easy. An example follows:

public class FacadeDateTimeHelper : IDateTimeHelper
{
    private IDateTimeHelper _mock;

    public FacadeDateTimeHelper(IDateTimeHelper mock)
    {
        _mock = mock;
    }

    /// <summary>
    /// Delegate the call to the mock verbatim
    /// </summary>
    public string Format(string target, string inputFormat, string outputFormat)
    {
        return _mock.Format(target, inputFormat, outputFormat);
    }
}

Something to note is that the facade in the example above implements an interface. You may think this implies that the real extension object is not your traditional static BizTalk helper. You'd be right! But this doesn't mean that static method call expectations can't be tested in exactly the same manner. Cool.

Now that the facade is in place we need to consider whether we'll ALWAYS want to mock ALL the extension objects in the argument list. If not, there needs to be some smarts built into the arg list builder to preserve the other extension objects but replace the required objects with our own mock facade(s). The extension objects are referenced using name-value pairs where the name is the namespace in the map's raw Xslt and the value is the related extension object instance. This namespace commonly takes the format ScriptNSn, where n is the zero-based index of the extension object.

In the code example below the user of the Replace method must construct a Dictionary specifying the namespace of the extension object to replace in the existing argument list and the mock facade instance to replace it with. The realXtensions parameter below is constructed by deserialising the TransformBase.XsltArgumentListContent property. Note that you can use xsd.exe to build an ExtensionObjects class from the raw Xml in the XsltArgumentListContent property. The ExtensionObject Xml can also be obtained by 'validating' the BizTalk map at design time.
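A minimal sketch of that deserialisation step follows. The shape of the xsd.exe-generated classes is an assumption here (inferred from the Namespace, AssemblyName and ClassName values used later), so verify it against your own generated code:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Assumed shape of the xsd.exe-generated classes
[XmlRoot("ExtensionObjects")]
public class ExtensionObjects
{
    [XmlElement("ExtensionObject")]
    public ExtensionObjectsExtensionObject[] Items;
}

public class ExtensionObjectsExtensionObject
{
    [XmlAttribute] public string Namespace;
    [XmlAttribute] public string AssemblyName;
    [XmlAttribute] public string ClassName;
}

public static class XmlHelper
{
    // Generic deserialisation helper
    public static T Deserialize<T>(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (StringReader reader = new StringReader(xml))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}
```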

/// <summary>
/// Substitute existing arguments with the object references provided, using the namespace as a matching key
/// </summary>
/// <param name="realXtensions">Complete extension object collection to be fully or partially replaced</param>
/// <param name="replacementArgList">Substitution objects</param>
/// <returns>A recompiled extension object collection to be used when mapping</returns>
public static XsltArgumentList Replace(ExtensionObjects realXtensions, Dictionary<string, object> replacementArgList)
{
    XsltArgumentList newArgList = new XsltArgumentList();

    foreach (ExtensionObjectsExtensionObject xtension in realXtensions.Items)
    {
        string ns = xtension.Namespace;

        if (replacementArgList.ContainsKey(ns))
        {
            // Substitute the mock facade for the real extension object
            newArgList.AddExtensionObject(ns, replacementArgList[ns]);
        }
        else
        {
            // Preserve the real extension object by instantiating it from its assembly
            string assemblyName = xtension.AssemblyName;
            string className = xtension.ClassName;

            ObjectHandle handle = Activator.CreateInstance(assemblyName, className);
            object xtensionObject = handle.Unwrap();
            newArgList.AddExtensionObject(ns, xtensionObject);
        }
    }

    return newArgList;
}

Your BizTalk map test framework will then leverage the new XsltArgumentList object like this:

/// <summary>
/// Execute the mapping procedure and determine if the output is as expected
/// </summary>
/// <remarks>
/// If the tester has provided an Xslt extension object collection then replace the map's
/// default instances using the script namespace to resolve
/// </remarks>
/// <typeparam name="T">BizTalk map type</typeparam>
/// <param name="input">Map source</param>
/// <param name="expectedOutput">Expected map output</param>
/// <returns>Success result with actual output on failure</returns>
public MapResult Map<T>(Stream input, Stream expectedOutput) where T : TransformBase
{
    XPathDocument xpath = new XPathDocument(input);
    TransformBase mapInstance = Activator.CreateInstance<T>();
    XsltArgumentList transformArgs;

    // Substitute the external assemblies?
    if (_substituteXtensions.Count == 0)
    {
        transformArgs = mapInstance.TransformArgs;
    }
    else
    {
        string args = mapInstance.XsltArgumentListContent;
        ExtensionObjects xtensions = XmlHelper.Deserialize<ExtensionObjects>(args);
        transformArgs = XsltArgListBuilder.Replace(xtensions, _substituteXtensions);
    }

    using (MemoryStream mapResult = new MemoryStream())
    using (XmlWriter mapResultWriter = XmlHelper.FormattedXmlWriter(mapResult))
    {
        // Execute the map
        mapInstance.Transform.Transform(xpath, transformArgs, mapResultWriter);

        // Ensure all buffered output reaches the stream before reading it back
        mapResultWriter.Flush();
        mapResult.Position = 0;
        return _comparer.Execute(mapResult, expectedOutput);
    }
}

Your unit test method will use the test framework similar to this:

/// <summary>
/// Mock the helper class to test expectations
/// </summary>
public void CanMapInput_To_Output_WithExtensionObjects()
{
    // Set up the test resources
    string sourceFile = "Input_To_Output_WithExtensionObjects.Source.xml";
    string expectedFile = "Input_To_Output_WithExtensionObjects.Expected.xml";
    Stream source = ResourceHelper.GetEmbeddedResource(_assembly, sourceFile);
    Stream expected = ResourceHelper.GetEmbeddedResource(_assembly, expectedFile);

    // Set the expectations
    Mockery mockFx = new Mockery();

    IDateTimeHelper dtMock = mockFx.NewMock<IDateTimeHelper>();
    FacadeDateTimeHelper dtFacade = new FacadeDateTimeHelper(dtMock);

    // The stubbed return values must agree with the expected output document
    Expect.Once.On(dtMock).Method("Format")
        .With("31/03/2009 2:1:9", "dd/MM/yyyy H:m:s", "yyyy-MM-dd")
        .Will(Return.Value("2009-03-31"));
    Expect.Once.On(dtMock).Method("Format")
        .With("31/03/2009 2:1:9", "dd/MM/yyyy H:m:s", "yyyy-MM-ddTHH:mm:ss.fff")
        .Will(Return.Value("2009-03-31T02:01:09.000"));

    // Inject the mocks
    Dictionary<string, object> mockHelpers = new Dictionary<string, object>();
    mockHelpers.Add(String.Format(MapTester.SCRIPT_OBJ_NS_FORMAT, 0), dtFacade);
    MapTester tester = new MapTester(mockHelpers);

    // Execute the test
    MapResult result = tester.Map<Input_To_Output_WithExtensionObjects>(source, expected);

    // Verify expectations
    mockFx.VerifyAllExpectationsHaveBeenMet();
    MapTester.AssertSuccess(result, sourceFile, expectedFile);
}

If we walk the test method above we can see that the sample input and expected output documents are retrieved from the embedded resource cache in the test fixture assembly. The input is well-known and the test reflects its profile. The mocking framework (NMock 2.0 in this example) is then loaded and expectations are set against the dynamic mock of the IDateTimeHelper interface, which has been injected into the facade. The test framework is then called to substitute the extension objects and execute the map. Finally, the mocking framework is queried to ensure all expectations were met and the actual and expected output equality assertion is made.
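That final equality assertion relies on a structural comparison of the actual and expected documents rather than a byte-for-byte match. A minimal sketch of such a comparer, offered as a stand-in for the _comparer.Execute call and assuming XNode.DeepEquals semantics are acceptable:

```csharp
using System;
using System.IO;
using System.Xml.Linq;

public static class XmlStreamComparer
{
    // Structural comparison: formatting differences are ignored,
    // but element order and content still matter
    public static bool AreEquivalent(Stream actual, Stream expected)
    {
        XDocument left = XDocument.Load(actual);
        XDocument right = XDocument.Load(expected);
        return XNode.DeepEquals(left, right);
    }
}
```

Note that XNode.DeepEquals is order-sensitive; with the default LoadOptions insignificant whitespace is dropped on load, which is usually what you want when comparing map output against a hand-authored expected document.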