Nov 27 2009

I’ve been talking about Geneva for a long time. I got the basics down earlier in the year. I tried to come up with my own set of sample apps, but failed to get anywhere. With the official release, and renaming to Windows Identity Foundation (WIF), I have renewed inspiration.

I read Michele Leroux Bustamante’s MSDN magazine article, Claims-Based Authorization with WIF, last night. After reading the article, I was confident that I could get a claims-aware WCF service stood up with a custom STS in a matter of hours. Today I downloaded and installed WIF. I also installed the WIF SDK and all of the prerequisite hotfixes. I perused the readme files and looked through some of the sample code. Everything is laid out sensibly, the samples are commented sufficiently, and the samples include setup and cleanup batch scripts when necessary.

The samples include:

Quick Start

  1. Simple Claims Aware Web Application
  2. Simple Claims Aware Web Service
  3. Simple Web Application With Information Card SignIn
  4. Simple Web Application With Managed STS
  5. Claims Aware Web Application in a Web Farm
  6. Using Claims In IsInRole

End-to-end Scenario

  1. Authentication Assurance
  2. Federation For Web Services
  3. Federation For Web Applications
  4. Identity Delegation
  5. Web Application With Multiple SignIn Methods
  6. Federation Metadata


  1. Claims Aware AJAX Application
  2. Convert Claims To NT Token
  3. Customizing Request Security Token
  4. Customizing Token
  5. WSTrustChannel
  6. Claims-based Authorization

All of the samples I’ve run through so far are great. The only thing that I’m not in love with is all the XML required to wire this stuff up. Maybe some Juval-style extensions would make it less painful.
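The payoff for all that XML, at least, is how simple the service code itself becomes once WIF populates the principal. Here is a minimal sketch using the Microsoft.IdentityModel.Claims types; the helper class is mine, not something from the SDK samples:

using System;
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;   // WIF ("Geneva") claims types

public static class ClaimsInspector
{
    // Dumps the caller's claims; assumes WIF has already set up the claims principal.
    public static string DescribeCaller()
    {
        IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return "Caller is not claims-aware.";
        }

        var lines = identity.Claims.Select(c =>
            string.Format("{0} = {1} (issued by {2})", c.ClaimType, c.Value, c.Issuer));
        return string.Join(Environment.NewLine, lines.ToArray());
    }
}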

One more thing – it looks like all of the XP users will finally have to upgrade. WIF only works with Vista, Win7, and Win2008. I heard that Win2003 compatibility will arrive in December.

Download Windows Identity Foundation

Download Windows Identity Foundation SDK

Oct 29 2009

Using the NetTcpBinding on a WCF service is secure by default. Unless you override the default settings, you will enjoy Transport Security using Windows authentication and the EncryptAndSign protection level. When you create a new WCF service library, Visual Studio creates a config file with the following identity block:

<identity>
  <dns value="localhost" />
</identity>


If, like me, you wipe this config file clean to write a much cleaner and shorter config file, this identity block is the first thing to go. Sadly, most people also add a binding configuration with <security mode="None"/>. I have done this too in an intranet environment. The samples and book examples out there don’t show how to write an actual production service that deals with different machines in the same domain. While the default settings work when testing on your local machine, they don’t work in a simple intranet environment.

Most of the difficulty I experienced when starting to work with WCF was getting security to work with the TCP binding. Everything worked so easily during development, but everything broke down once deployed to the development server. It didn’t help that the only errors I saw were timeout exceptions. If I had known about the Service Trace Viewer, I could have easily determined the cause and Googled (Bing wasn’t around then) for a solution. Instead, I chose the easier (and much less secure) way out: relying on my firewall and turning security off.
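In hindsight, turning on tracing is just a config change. Here is a minimal sketch of the diagnostics section I now drop into the client (and host) config; the listener name and log path are placeholders:

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="xmlTraceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\Logs\Client.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>

Open the resulting .svclog file in the Service Trace Viewer and the underlying security exception is right there instead of a bare timeout.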

As mentioned before, the NetTcpBinding is secure by default with transport security using Windows authentication. The problem most people experience when moving the service to a different machine is caused by NT authentication failing. If you use svcutil to generate your client config file and your host doesn’t have the identity block mentioned above, svcutil will not add a key piece of information to the client config file. The missing element is, you guessed it, the identity block. Without it, you will likely get an exception and see a stack trace similar to this:

[System.ServiceModel.Security.SecurityNegotiationException: A call to SSPI failed, see inner exception.]

[System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception.]

[System.ComponentModel.Win32Exception: The target principal name is incorrect.]

If you add tracing to your client, you will see that, without specifying an identity block, WCF makes the call with a DNS identity set to the name of the host.


You can see that the EndpointReference does not have an <Identity> block. Without that identity block, WCF cannot create a valid ServicePrincipalName. You can find this in Reflector, following this path:

  • System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider+WindowsStreamSecurityUpgradeInitiator.OnInitiateUpgrade() – This is where the SecurityNegotiationException is being thrown.
  • System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider+WindowsStreamSecurityUpgradeInitiator.InitiateUpgradePrepare() – This method populates an EndpointIdentity and ServicePrincipalName to be used immediately after for NT authentication.


When the identity is not specified, it falls back to trying to create an SPN from the host address. I have seen this work on a machine that has two DNS names, using the DNS name that does not match the NETBIOS or AD name for the machine. I’m not exactly sure why that works.

Having any of the following identity blocks in your client config file will cause WCF to take the first path that successfully creates an SPN needed to perform NT authentication in the AuthenticateAsClient method called from OnInitiateUpgrade():

  • <dns value="serviceHostName"/>
  • <dns/>
  • <servicePrincipalName value="domain\host\ServiceUserAccount"/>
  • <servicePrincipalName/>

Having these <Identity> settings in your client config file adds the appropriate <Identity> settings in the <EndpointReference> used when opening the channel.
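For reference, here is a minimal sketch of a client endpoint with the identity block in place. The address, contract, and endpoint name are placeholders for your own service:

<client>
  <endpoint name="NetTcpBinding_IMyService"
            address="net.tcp://serviceHostName:808/MyService"
            binding="netTcpBinding"
            contract="MyCompany.IMyService">
    <identity>
      <!-- tells WCF which identity/SPN to negotiate for Windows authentication -->
      <dns value="serviceHostName" />
    </identity>
  </endpoint>
</client>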


Security seems more mysterious when going rogue and writing your own config files. If you go rogue, make sure you use the appropriate <Identity> blocks. With this mystery solved, <security mode="None"/> is a thing of the past. Now we can keep our services secure in an Intranet environment.

Oct 14 2009

Web services are just the tip of the iceberg in WCF

I was privileged to attend the IDesign WCF Master Class last week. It only comes to the USA one time each year, and is presented by the one and only Juval Lowy. The class is held at the training center on the Microsoft Silicon Valley campus in Mountain View, CA. Five very intense days covering all aspects of WCF, from essentials like the ABCs to the most intricate details of advanced topics like concurrency, security, transactions, and the service bus.

What we’ve been told about WCF by Microsoft is truly just the tip of the iceberg. Juval presents countless examples that prove WCF is not just about web services. WCF is the evolution of .NET, providing world-class features that no class should ever be without.

Demos, samples, and labs are presented using .NET 3.5 and 4.0 with an emphasis on the new features and functionality in 4.0. Discovery and announcements are the most underrated and unknown new features of WCF 4.0. After seeing Juval’s demos on discovery and announcement, I can’t imagine creating services without them.
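For anyone who hasn’t looked at 4.0 yet, making a service discoverable (and having it announce itself) is mostly configuration. This is a rough sketch from memory, not one of Juval’s demos; the service name, contract, and address are placeholders:

<system.serviceModel>
  <services>
    <service name="MyServices.CalculatorService">
      <endpoint address="net.tcp://localhost:9000/Calculator"
                binding="netTcpBinding"
                contract="MyServices.ICalculator" />
      <!-- standard endpoint that answers discovery probes over UDP multicast -->
      <endpoint kind="udpDiscoveryEndpoint" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceDiscovery>
          <announcementEndpoints>
            <!-- announce online/offline to any listening clients -->
            <endpoint kind="udpAnnouncementEndpoint" />
          </announcementEndpoints>
        </serviceDiscovery>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>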

More than all of the WCF content, the class gives you a lot to think about regarding architecture, the framework, and engineering principles. Juval’s mastery of .NET is evident in his ServiceModelEx library that extends almost all aspects of WCF and the service bus. His “one line of code” motto makes it possible for all of us to configure our WCF services with ease. The ServiceModelEx library is a good example for all developers to know and understand how to “do .NET” the right way. It exemplifies the best of what .NET and WCF have to offer.

Check out the IDesign website to get the WCF Resource CD (containing many of the examples and demos from the class). Also note the next class dates and sign up for the IDesign newsletter.

Sep 03 2009

The 2009 Jacksonville Code Camp was a great success. Many thanks to Bayer, Brandy, and everyone else that made it happen. The bar has been set really high for future Jacksonville code camps, and for the rest of Florida too.

My session on Transactional WCF Services went well. Many great questions and compliments after the session. If you attended and have any unanswered questions, please email me.

You can download the session files below. The archive contains staged versions of all of the transaction modes we discussed, a tracing solution with result files so you can view the client and host traces in Client/Service mode (see also my previous post on using the Service Trace Viewer), and a few demo projects that we didn’t get to in the one-hour session.

Files/Solutions included in Session Archive:

  • PowerPoint slides
  • Transaction Promotion Code Snippet
  • Testing database backup
  • Testing SQL script (query and cleanup between tests)
  • IDesign ServiceModelEx Project (used by all included Solutions)
  • Code Demo Solutions

Code Demos include:

1. TransactionScope – Shows how single/multiple resource managers affect which Transaction Manager is chosen to handle the scoped transaction. Also gives first look at transaction promotion detection.
2a. Mode None – WCF transaction mode in which no transactions are created or flowed from the calling client.
2b. Mode Service – WCF transaction mode in which no transaction is flowed from the calling client, but a transaction is created for your service operation.
2c. Mode Client – WCF transaction mode in which a transaction is required to be flowed, and the service will only use the client’s transaction.
2d. Mode Client/Service – WCF transaction mode in which a client transaction is flowed and used by the service, if available. If no client transaction is flowed, a transaction is provided automatically for the service operation (the attribute sketch after this list shows how this mode is declared).
3. Explicit Voting – Shows how explicit voting with a session-mode service is performed using OperationContext.Current.SetTransactionComplete().
4a. Testing Various Resource Managers – Shows how a client can use a single TransactionScope to call several services (some transactional, some non-transactional), a database stored procedure, and an IDesign volatile resource manager Transactional<int>.
4b. Testing Services – Provides a host project for a transactional service and a non-transactional service used in 4a.
5a. Tracing – Same as 2d. modified with the additional app.config settings in the client and host projects to allow for service tracing to .svclog files.
5b. Tracing Results – Stored results from executing 5a. in case you don’t want to load the database and actually run the projects. The .stvproj file can be opened directly in the Service Trace Viewer. On the “Activity” table, click on the activity “Process action ‘http://services/'” then click on the “Graph” tab. You will see that the client and host activities where the arrow moves from client to host (send and receive message, respectively) show the OleTxTransaction in “Headers.” The next activity in the host reads “The transaction ‘5bd25b08-848c-409d-9163-6303b9138382:1’ was flowed to operation ‘SubmitTrack’.”
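If you couldn’t make the session, the Client/Service mode from demo 2d boils down to a couple of attributes on the contract and the implementation. A minimal sketch (the contract and operation names are placeholders, and the binding must also have transactionFlow enabled):

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Allowed = use the client's transaction when one is flowed; the binding must
    // also have transactionFlow="true" for the transaction to propagate.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SubmitOrder(int orderId);
}

public class OrderService : IOrderService
{
    // TransactionScopeRequired = true gives the operation an ambient transaction:
    // the client's if one was flowed, otherwise a new one (Client/Service mode).
    [OperationBehavior(TransactionScopeRequired = true)]
    public void SubmitOrder(int orderId)
    {
        // transactional work goes here; it commits when all parties vote complete
    }
}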


Download the session files: (854 KB)

Dec 07 2008

It’s so easy! Start downloading Enterprise Library 4.1 now while you read this. The data application block syntax has not changed much since the first version. The most notable change was allowing us to use System.Data.Common.DbCommand when version 3.0 was released. I understand the uneasy feeling some developers have using Enterprise Library. My team at my previous employer decided not to use it, thinking it would add increased complexity and would not give us the flexibility we needed if we had to change something. This is typical of groups that do not have an established Data Access Library.

Your Data Access Library should be one of the most highly tested libraries in your application. If there is a problem there, you will have issues everywhere. Enterprise Library not only comes with the source code, but also includes the full suite of unit tests for each of the application blocks. You should feel at ease when you decide to migrate to Enterprise Library. Run it through your full battery of tests before you commit the team to it. If you find any problems, check the forums, request changes/enhancements from the MS Patterns & Practices team, or fix it yourself.

The steps to achieve EntLib goodness:

  1. Download Enterprise Library
  2. Add reference to “Enterprise Library Data Access Application Block” and “Enterprise Library Shared Library”
  3. Change your app.config or web.config
  4. Write some much more readable data access code

I’ll start at step 3 as steps 1 and 2 are self-explanatory. Your connection string needs to be in your app’s config file, the machine.config file, or in a connectionStrings.config file referenced in those config files. You can start using it just by adding the <configSections> block and the <dataConfiguration> node. This will allow you to have one default database for all commands you will execute.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <configSections>
        <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </configSections>
    <dataConfiguration defaultDatabase="Testing" />
    <connectionStrings>
        <add name="Testing" connectionString="server=Server_Name;database=DB_Name;Integrated Security=true;"
             providerName="System.Data.SqlClient" />
    </connectionStrings>
</configuration>



By the time you get to step 4, you have all of the infrastructure in place. Painless so far; let’s see how steep the learning curve is.

With ADO.NET, you would write:

string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
using (SqlConnection con = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
{
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("Message", "Testing 1");
    cmd.Parameters.AddWithValue("UserID", 5150);
    try
    {
        con.Open();
        cmd.ExecuteNonQuery();
    }
    finally
    {
        con.Close();
    }
}

With Enterprise Library, you write:

170 Database db = DatabaseFactory.CreateDatabase();
171 DbCommand cmd = db.GetStoredProcCommand("usp_ErrorLog_Insert");
172 db.AddInParameter(cmd, "Message", System.Data.DbType.String, "Testing 1");
173 db.AddInParameter(cmd, "UserID", System.Data.DbType.Int32, 5150);
174 db.ExecuteNonQuery(cmd);


Line 170 creates the database object. This is the hardest thing to get used to: everything is done through the Database object. In ADO.NET, we are used to creating a connection, adding the connection to a command, and using the command in an adapter. Here you’ll always use the Database object to create a command, add parameters to the command, execute the command, fill a DataSet, etc. It is definitely less code to write, and it is also more readable and elegant.
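Reads follow the same Database-centric pattern. A quick sketch (the stored procedure and column names here are made up for illustration):

using System;
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

// ...

Database db = DatabaseFactory.CreateDatabase();
DbCommand cmd = db.GetStoredProcCommand("usp_ErrorLog_GetByUser");   // hypothetical proc
db.AddInParameter(cmd, "UserID", DbType.Int32, 5150);

// Fill a DataSet...
DataSet results = db.ExecuteDataSet(cmd);

// ...or stream through a reader; the Database object opens and closes the connection for you.
using (IDataReader reader = db.ExecuteReader(cmd))
{
    while (reader.Read())
    {
        Console.WriteLine(reader["Message"]);   // hypothetical column
    }
}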

If you have a database to execute commands against other than the defaultDatabase specified in the config file, then the first line changes to:

170 Database db = DatabaseFactory.CreateDatabase("OtherConnectionStringKey");


That’s it. The patterns & practices team has really done a nice job making it painless to use Enterprise Library. Take the time to try it out again if you reviewed a previous version. I reviewed 2.0, and chose not to use it. When 3.0 came out, I was hooked.

Dec 05 2008

Unless you are working on an extremely simple or read-only application, transactions are a must. Using the System.Transactions namespace is the easiest and most efficient way to maintain system consistency when dealing with multiple calls or multiple resources. Although System.Transactions arrived with .NET 2.0 in the 2005 product, it is still a relatively unknown part of the framework. System.Transactions was designed by the Indigo team in preparation for WCF. It is not compatible with Win98 or WinME, but most people are incompatible with Win98 and WinME so it works out just fine.

Before System.Transactions, we only had access to System.Data.SqlClient.SqlTransaction or a true SQL transaction using BEGIN/ROLLBACK/COMMIT TRAN. With SQL transactions, you are stuck with only being able to update DB records as part of your transaction. If you wanted to change a cached value in your app in addition to the SQL updates in the same transaction, you were out of luck. This approach also required a lot of transaction code in your stored procedures; writing stored procedures that could be called independently or as part of a transaction made for very messy stored procedures and often led to multiple stored procedures that served the same purpose.

Using the SqlTransaction class was also messy. The most important restriction is that you need to have all database calls inside the same SqlConnection. This does not work well for a well-designed N-tier application. The other problem is that you need to handle your own non-DB transaction logic inside the same SqlTransaction and conditionally commit/rollback as necessary. This all tends to lead to several try-catch statements within one SqlTransaction. Handling the plumbing to manually add each SqlCommand to the transaction gets old quickly too.

Using SqlTransaction

string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
using (SqlConnection con = new SqlConnection(connectionString))
{
    SqlTransaction tran = null;
    try
    {
        con.Open();
        tran = con.BeginTransaction();
        using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
        {
            cmd.Transaction = tran;
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("Message", "Testing 1");
            cmd.Parameters.AddWithValue("UserID", 5150);
            cmd.ExecuteNonQuery();
        }

        using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
        {
            cmd.Transaction = tran;
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("Message", "Testing 2");
            cmd.Parameters.AddWithValue("UserID", 5150);
            cmd.ExecuteNonQuery();
        }

        tran.Commit();
    }
    catch
    {
        if (tran != null) tran.Rollback();
    }
    finally
    {
        con.Close();
    }
}

System.Transactions liberated us from the mundane SqlClient code and repetitive try-catches. Simply wrapping your old ADO.NET code in a using (TransactionScope) { } block is all you need to do, typically with a scope.Complete() call as the last line inside the block. Any exception thrown before that point will break out of the scope, implicitly aborting the transaction. Much better.

System.Transactions uses the LTM (Lightweight Transaction Manager) when dealing with single resources or machines. The transaction is automatically promoted to MSDTC (Microsoft Distributed Transaction Coordinator) when another resource is enlisted in the transaction. A lot of people struggle with MSDTC because it is difficult to set up, requires special firewall considerations, and doesn’t really work well for smart client applications since you have to configure MSDTC on every client machine.

I’ll show one transaction performed three different ways and show what happens with the LTM and MSDTC for each of them. I will also demonstrate an excellent reason to migrate to Enterprise Library.

1) Executing two ADO.NET SqlCommands in different SqlConnections

using (TransactionScope scope = new TransactionScope())
{
    string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 1");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 2");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:2
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:2
Distributed Transaction ID: 1fad8108-ddae-496a-a7da-ce92df175e40

You’ll notice that the first command creates a transaction using LTM as indicated by the Local Transaction ID. After the second command is executed, the transaction is promoted to DTC as indicated by the Distributed Transaction ID. This is expected because there are two distinct SqlConnections. Even though the connection string is the same, TransactionScope treats these ADO.NET objects as unique resources.

This has additional implications when connection pooling comes into play. After I close the first connection, it is returned to the pool and is available for use. If this connection is requested for use, it will no longer be available to commit or abort this transaction, and you will see the dreaded MSDTC error “Communication with the underlying transaction manager has failed.”

2) Executing two ADO.NET SqlCommands in the same SqlConnection

string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
using (TransactionScope scope = new TransactionScope())
using (SqlConnection con = new SqlConnection(connectionString))
{
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 1");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 2");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:1
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:1
Distributed Transaction ID: becac9c9-e15f-4370-9f73-7f369665bed7

This is not expected because both commands are part of the same connection. Of course, I am closing the connection to simulate an N-tier app where the data access layer maintains its own SQL access, opening and closing its connection as it should. If I did not close the connection, you would not see a Distributed Transaction ID after the second command.

3) Executing two Enterprise Library commands

using (TransactionScope scope = new TransactionScope())
{
    Database db = DatabaseFactory.CreateDatabase("Testing");
    DbCommand cmd = db.GetStoredProcCommand("usp_ErrorLog_Insert");
    db.AddInParameter(cmd, "Message", System.Data.DbType.String, "Testing 1");
    db.AddInParameter(cmd, "UserID", System.Data.DbType.Int32, 5150);
    db.ExecuteNonQuery(cmd);

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    Database db1 = DatabaseFactory.CreateDatabase("Testing1");
    DbCommand cmd1 = db1.GetStoredProcCommand("usp_ErrorLog_Insert");
    db1.AddInParameter(cmd1, "Message", System.Data.DbType.String, "Testing 2");
    db1.AddInParameter(cmd1, "UserID", System.Data.DbType.Int32, 5150);
    db1.ExecuteNonQuery(cmd1);

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: 6737b756-2d5b-4eff-902d-15f9ccd5c26f:3
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: 6737b756-2d5b-4eff-902d-15f9ccd5c26f:3
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000

Whoa! How cool is that? No DTC promotion. Enterprise Library is intelligently deciding to keep the connection open when it is part of the transaction. This will save a lot of wasted time as the promotion to DTC adds a noticeable delay. If I wasn’t using Enterprise Library already, I’d switch now.

Useful links:

Oct 30 2008

Don’t be so quick to blame the service or MSDTC when you see the error “Communication with the underlying transaction manager has failed.”


An error message that reads something like:

System.Transactions.TransactionManagerCommunicationException: Communication with the underlying transaction manager has failed. —> System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.



“Check your firewall settings” is what you will find in almost all forum posts and on MSDN. You need port 135 open bi-directionally for RPC’s end point mapper (EPM). You also need ports 1024-5000 open bi-directionally if you have not specified your own port settings for RPC in the registry. If you have your own ports specified in the registry, then those need to be open bi-directionally.
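If you would rather not leave 1024-5000 open, the registry values that constrain RPC to a fixed range live under HKLM\SOFTWARE\Microsoft\Rpc\Internet. A sketch of the layout (the 5000-5100 range is just an example; pick whatever your network folks approve):

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\Internet
    Ports                   (REG_MULTI_SZ)  5000-5100
    PortsInternetAvailable  (REG_SZ)        Y
    UseInternetPorts        (REG_SZ)        Y

A restart is required before RPC picks up the new range.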


WHAT ?!? It may also be your code causing the issue. If you are using TransactionScope, you have to be mindful of every method called within the using braces. Looking at the code below, you will see two service calls and a seemingly innocuous ShouldContinue() method checking to see if the second operation should be called.

using (TransactionScope scope = new TransactionScope())
using (MyServiceClient proxy = new MyServiceClient())
{
    proxy.DoOperationOne();

    if (ShouldContinue()) // uh, oh! What if this has an ADO.NET connection that is opened and closed inside it?
    {
        proxy.DoOperationTwo();
    }

    scope.Complete();
}

If ShouldContinue() opens and closes an SqlConnection, the TransactionScope object has no means by which to commit or rollback this part of the transaction. This will cause the error “Communication with the underlying transaction manager has failed.”


1. If you do not need the results of DoOperationOne() to feed ShouldContinue(), then do that logic before the TransactionScope using block.

2. If you do need the result of DoOperationOne() to feed ShouldContinue(), then you can wrap the internals of ShouldContinue() with a TransactionScope using block specifying TransactionScopeOption.Suppress. This keeps the resource access contained within the block out of the ambient transaction (see the sketch after this list).

3. Use an intelligent data access library like Enterprise Library that manages your connections for you. It won’t close the connection if enlisted in a transaction.
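For option 2, the suppressed scope looks something like this. The stored procedure and the internals of ShouldContinue() are hypothetical; the point is the Suppress scope around the connection:

private bool ShouldContinue()
{
    // Suppress the ambient transaction so this connection never enlists in it.
    using (TransactionScope suppress = new TransactionScope(TransactionScopeOption.Suppress))
    using (SqlConnection con = new SqlConnection(connectionString))      // connectionString defined elsewhere
    using (SqlCommand cmd = new SqlCommand("usp_ShouldContinue", con))   // hypothetical stored procedure
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        con.Open();
        bool result = (bool)cmd.ExecuteScalar();   // a bit column comes back as bool
        suppress.Complete();                       // nothing enlisted, but complete the scope cleanly
        return result;
    }
}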


Look at your code before you involve your network dudes. This is more common when integrating legacy code with new service calls.

Sep 21 2008

WCF never ceases to amaze me. Around every corner is another fascinating use for WCF, and much forethought on Microsoft’s part to make it look and behave great. I wanted to expose my services to my AJAX functions on my web site. I did not want to change my class library because it is used by other clients. I could just add the service classes to this web site, but why re-do when you can re-use.

If you have an existing WCF Service Library, you will need to decorate the service class with the AspNetCompatibilityRequirements attribute (RequirementsMode = Allowed) to make it visible to ASP.NET clients. To avoid changing your service library in any way, the easiest thing to do is to add a new class to your web site that inherits from your service class. In this example, my existing service library uses the JeepServices namespace. Notice there is no implementation in this class; it is simply a placeholder for the real service implementation with the compatibility attribute attached.

using System.ServiceModel;
using System.ServiceModel.Activation;

[ServiceBehavior]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WebHttpService : JeepServices.Service
{
}

Now that I have an ASP.NET-compatible service, I need to expose it to the web site clients. Create a service file (.svc) and set its Service and CodeBehind attributes to point to the new class. The last thing you need is the Factory attribute. This notifies WCF of the service, eliminating the need for a configuration file entry for the service endpoint. In fact, you don’t even need the <system.serviceModel> section in your configuration file at all. This is because the service is only hosted as a web script and cannot be called from outside the web site.

<%@ ServiceHost Language="C#" Debug="true" Service="WebHttpService" CodeBehind="~/App_Code/WebHttpService.cs"
    Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>


In your web page you will need a few things. First you will need a ScriptManager with a ServiceReference to the .svc file. You will then need the JavaScript functions to make the call (DoJeepWork), handle the success message (OnJeepWorkSucceeded), and handle the failure message (OnJeepWorkFailed). Notice in DoJeepWork that you don’t call the service by its service name, WebHttpService; you call it by the ServiceContract namespace and name. For this example, my interface has ServiceContract attributes Namespace = "JeepServices" and Name = "JeepServiceContract". Now you just wire up an ASP.NET control’s OnClientClick or an input or anchor tag’s onclick to DoJeepWork() and you are good to go.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Test page</title>

    <script type="text/javascript">
        function DoJeepWork() {
            JeepServices.JeepServiceContract.DoWork(OnJeepWorkSucceeded, OnJeepWorkFailed);
        }
        function OnJeepWorkSucceeded(res) {
            document.getElementById("<%= this.lblMessage.ClientID %>").innerText = res;
        }
        function OnJeepWorkFailed(error) {
            // Alert user to the error.
            alert(error.get_message());
        }
    </script>

</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:ScriptManager runat="server">
            <Services>
                <asp:ServiceReference Path="~/Services/WebHttpService.svc" InlineScript="false" />
            </Services>
        </asp:ScriptManager>
        <asp:Label ID="lblMessage" runat="server" Text="No work has been done" />
        <a href="javascript:void(0); DoJeepWork()">Do Work</a>
    </div>
    </form>
</body>
</html>


Mission accomplished! Here you’ve seen how to expose an existing WCF service library without changing any code in the library itself. Adding two files allowed the service to be exposed to your AJAX clients. Best of all, there are no configuration file changes to make.

Useful Links:

Sep 17 2008

Hosting an MSMQ service is a little different from hosting with the other bindings. Since WCF is using MSMQ as a transport mechanism, you must set up the queues, permissions, and binding configurations to allow this to happen. Surprisingly, MSDN has a good sample article that goes into sufficient detail on how to set this up for the 3.5 WF-WCF-CardSpace samples.

I have read in other articles that the AppPool must have an interactive identity and that the queue names needed to match the name of the .svc file. I did not find this to be the case. I was able to use the NetworkService account for my AppPool after adding receive and peek permissions for NetworkService on my queue. Communication between client and WAS worked fine with my service file named WasServices.svc and my queue address as net.msmq://localhost/private/QueuedService1.
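For reference, creating the queue and granting those permissions can also be scripted with System.Messaging. This is a sketch of my own, not code from the MSDN sample; the queue path matches the address I used above:

using System.Messaging;

public static class QueueSetup
{
    public static void EnsureQueue()
    {
        const string path = @".\private$\QueuedService1";

        // Transactional queue, as expected by the NetMsmqBinding defaults.
        MessageQueue queue = MessageQueue.Exists(path)
            ? new MessageQueue(path)
            : MessageQueue.Create(path, true);

        // Let the AppPool identity (NetworkService here) receive and peek.
        queue.SetPermissions(@"NT AUTHORITY\NETWORK SERVICE",
            MessageQueueAccessRights.ReceiveMessage | MessageQueueAccessRights.PeekMessage,
            AccessControlEntryType.Allow);
    }
}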

You can download my solution with the following link: (78K)

Additional Info:

Sep 13 2008

It has taken me weeks to get WAS (Windows Process Activation Service) working. Finally, tonight, my long hours of research have paid off. After everything I tried, it turned out to be a general IIS7 issue caused by a stray http reservation that I probably entered months ago during some testing. As I primarily use the built-in development server for web development, I rarely crank up an IIS site on my development machine.

This post by Phil Haack helped me fix my IIS install:

I have been cursing IIS7, Vista, and WAS for weeks. I should have been cursing my own lack of IIS7 knowledge all along. Now that it’s working, I am a big fan of WAS. From the tone of recent forum responses and blog posts, very few people are using WAS. Maybe it is because Windows Server 2008 is still so new; not many people have a Vista workstation for development and Windows Server 2008 servers to deploy to. Knowing how many problems I had, I can only assume others are experiencing the same thing. The only real info available right now is pre-release articles and MVP posts about the new features, with a sneak peek example of how to get it to work. Even MSDN doesn’t show how to use an existing WCF Service Library with WAS; they just walk through a WsHttpBinding example as a new WCF web site served up by WAS.

I’m posting the details so others will maybe see that it’s really not that hard. For this example I want to expose this service with the NetTcpBinding to prove that it is not IIS hosting the service. I used the WCF Service Library project template for my WCF service, and named the project WasServices. So the lame Service1 service is all I have in the library. I made no changes to the project and built it in release mode to get the DLL. Some posts and articles out there say that the only way to get WAS to work is to have an HTTP-based WCF web site. This is simply not true. You just need to have an application set up in IIS.

Here are the steps to success:

1. Enable the required Windows Features to wake up IIS7 and WAS. You will find these in the helpful links below.

2. Configuration file C:\Windows\System32\inetsrv\config\applicationHost.config must be modified to enable the required protocols on your web site and application. You can modify the file yourself, or use command-line utilities.

To enable net.tcp on the web site, if it is not already:

%windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.tcp',bindingInformation='808:*']

To enable net.tcp on your application (my app is named WasServices) within that web site, if it is not already:

%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/WasServices" /enabledProtocols:http,net.tcp

Here is an excerpt from the applicationHost.config file showing the site and application settings:

<site name="Default Web Site" id="1" serverAutoStart="true">
    <application path="/">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
    </application>
    <application path="/WasServices" applicationPool="WasHosting" enabledProtocols="http,net.tcp">
        <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot\WasServices" />
    </application>
    <bindings>
        <binding protocol="net.tcp" bindingInformation="808:*" />
        <binding protocol="net.pipe" bindingInformation="*" />
        <binding protocol="net.msmq" bindingInformation="localhost" />
        <binding protocol="msmq.formatname" bindingInformation="localhost" />
        <binding protocol="http" bindingInformation="*:80:" />
    </bindings>
</site>

3. Prepare the application in your application folder (C:\inetpub\wwwroot\WasServices)

Create a service file (WasServices.svc) that points to your existing WCF service library:

<%@ ServiceHost Service="WasServices.Service1" %>


Create a web.config file that specifies the service’s endpoints:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.serviceModel>
        <services>
            <service name="WasServices.Service1"
                     behaviorConfiguration="MEX">
                <endpoint address="wsHttp"
                          binding="wsHttpBinding"
                          contract="WasServices.IService1" />
                <endpoint address="netTcp"
                          binding="netTcpBinding"
                          bindingConfiguration="NetTcpBinding_Common"
                          contract="WasServices.IService1" />
                <endpoint address="mex"
                          binding="mexHttpBinding"
                          contract="IMetadataExchange" />
            </service>
        </services>
        <behaviors>
            <serviceBehaviors>
                <behavior name="MEX">
                    <serviceMetadata httpGetEnabled="true" />
                </behavior>
            </serviceBehaviors>
        </behaviors>
        <bindings>
            <netTcpBinding>
                <binding name="NetTcpBinding_Common">
                    <reliableSession enabled="true" />
                    <security mode="None" />
                </binding>
            </netTcpBinding>
        </bindings>
    </system.serviceModel>
</configuration>


Place the release-compiled DLL created from the WCF Service Library in a new folder named Bin.

4. At this point, you can browse and see the familiar “You have created a service.” page for Service1.

5. Write your proxy file and config file.

WAS and IIS7 decide the address for your service, and it is not intuitive.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.serviceModel>
        <client>
            <endpoint address="net.tcp://localhost/WasServices/WasServices.svc/netTcp"
                      binding="netTcpBinding"
                      bindingConfiguration="NetTcpBinding_Common"
                      contract="WasServices.IService1"
                      name="NetTcpBinding_Common" />
        </client>
        <bindings>
            <netTcpBinding>
                <binding name="NetTcpBinding_Common">
                    <reliableSession enabled="true" />
                    <security mode="None" />
                </binding>
            </netTcpBinding>
        </bindings>
    </system.serviceModel>
</configuration>


The /netTcp at the end of the address is due to the address specified in the service’s web.config file. The address was given there as simply netTcp. This is because IIS7 and WAS decide your address based on the available bindings and ports you specified in the applicationHost.config file using appcmd.exe. Since my enabled protocols are http and net.tcp and the only open TCP port is 808, you will not see a port number in the address. The same goes for my wsHttpBinding, since the only allowable port is 80.
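Once the address is known, calling the WAS-hosted service is plain WCF client code. A minimal sketch using a ChannelFactory against the endpoint above (it assumes you reference the WasServices library so IService1 is available, and that the template’s GetData operation is still there):

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // "NetTcpBinding_Common" is the endpoint name from the client config above.
        var factory = new ChannelFactory<WasServices.IService1>("NetTcpBinding_Common");
        WasServices.IService1 proxy = factory.CreateChannel();

        Console.WriteLine(proxy.GetData(42));   // template operation from the WCF Service Library project

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}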

I’m proud to be the fourth, and maybe final, member of the “Got WAS to work” club. If anyone wants to join, and needs help to get in… please let me know.

Here are some helpful links for those of you having problems: