Mar 31, 2014

I had a great time at Global Windows Azure Bootcamp (GWAB) in Jacksonville, FL. I got to meet a bunch of cool people and discuss Azure topics all day. Free food, Bold Bean coffee, and beer helped to create the perfect geekfest atmosphere. I can’t wait for the next Azure event!

I talked about Windows Azure Data Services, and hit on topics such as Tables, Blobs, Queues, Windows Azure SQL Database, and some Cloud Design Patterns. The links below are the slides and code demos from my talk.

GWAB Azure Storage Presentation – Slides

GWAB Azure Storage Presentation – Demo Code

Feb 03, 2012

In Part 1: Out-of-the-box Features, I went through some of the great new features in Enterprise Library 5 Data Access, including accessors and mappers. Before version 5, most of my EntLib extension code existed to provide these same features (not as elegantly, of course). I have become attached to a few of the extensions I put in place over the years, and I will keep them around now for only a few reasons: 1) customized database exceptions, 2) IDataReader usability enhancements, and 3) a reduced mapping footprint.

Extensions

I typically have a Utils.dll that I import in every project. For data/resource access projects, I also include my Utils.Data.dll. Utils.Data started its career as a data access application block similar to SqlHelper from the pre-EntLib days. Today, Utils.Data is a set of extensions that merely makes Enterprise Library more fun to be with.

IDataReaderExtensions

Out of the box, System.Data.IDataRecord only gives you the ability to access fields by their integer index value. As an architect who does not have supervisory control over the database or the objects within, this scares me. Any additions or re-ordering of the output fields will surely cause your index-based mapping to blow up. You could solve this with a call to .GetOrdinal(fieldName) first to get the index, but that is twice the code (not to mention boring plumbing code). My extensions do nothing novel. They simply provide string-based extensions like .GetInt32(string name) that do the retrieval and casting for you. I also added a few frequently-used extensions like .GetNullableInt(string name) to keep my result mapping as clean and concise as possible.

Reader use with built-in features:

jeep = new Jeep()
{
	ID = row.GetInt32(0),
	Name = row.GetString(1),
	Description = row.GetString(2),
	Status = row.GetBoolean(3)
};

Reader use with extensions:

jeep = new Jeep()
{
	ID = reader.GetInt32("JeepID"),
	Name = reader.GetString("Name"),
	Description = reader.GetString("Description"),
	Status = reader.GetBoolean("Status"),
};
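For reference, the extensions behind the second snippet are nothing more than string-based wrappers around GetOrdinal. Here is a minimal sketch of what they might look like (illustrative only; the attached solution contains the real ones):

using System.Data;

public static class DataRecordExtensions
{
	// String-based equivalents of the index-based IDataRecord getters.
	public static int GetInt32(this IDataRecord record, string name)
	{
		return record.GetInt32(record.GetOrdinal(name));
	}

	public static string GetString(this IDataRecord record, string name)
	{
		return record.GetString(record.GetOrdinal(name));
	}

	public static bool GetBoolean(this IDataRecord record, string name)
	{
		return record.GetBoolean(record.GetOrdinal(name));
	}

	// Returns null instead of throwing when the column contains DBNull.
	public static int? GetNullableInt(this IDataRecord record, string name)
	{
		int ordinal = record.GetOrdinal(name);
		return record.IsDBNull(ordinal) ? (int?)null : record.GetInt32(ordinal);
	}
}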

I advise that you never use string literals in data access code. Data access code is hit hard, so take your performance improvements when you can. I prefer having const strings locally in my data access class or having an internal static class with const strings to share with all classes in my data access project. The attached solution has examples.
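For illustration, such a constants class might look like the following sketch (the names mirror the demo code above and below; your own project would define whatever it needs):

internal static class StoredProcedures
{
	internal const string GetJeepByID = "GetJeepByID";
}

internal static class Fields
{
	internal const string JeepID = "JeepID";
	internal const string JeepName = "Name";
	internal const string JeepDescription = "Description";
	internal const string JeepStatus = "Status";
}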

Parameter and Result/Row Mapping

The now built-in ParameterMapper, RowMapper, and ResultSetMapper are beautiful. Sometimes you need a little sumpin’ special to make your code easier to read and work consistently when getting one or ten entities in a database call. Similar to how ExecuteSprocAccessor works with row and result set mappers, CreateObject and CreateCollection support generics and build an object or collection of the specified type. Instead of deriving a new class from a base mapper class, I chose to have one delegate method that generates a single object from a reader. This delegate is used by both CreateObject and CreateCollection. Let’s look at the differences with code.

Creating an object with EntLib5 features:

public Jeep GetJeepByID(int id)
{
	Database db = DatabaseFactory.CreateDatabase();
	IParameterMapper jeepParameterMapper = new JeepParameterMapper();
	IRowMapper<Jeep> jeepRowMapper = new JeepRowMapper();
	IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>(StoredProcedures.GetJeepByID, jeepParameterMapper, jeepRowMapper, id);
	return jeeps.First();
}

internal class JeepRowMapper : IRowMapper<Jeep>
{
	public Jeep MapRow(System.Data.IDataRecord row)
	{
		return new Jeep()
		{
			ID = row.GetInt32(0),
			Name = row.GetString(1),
			Description = row.GetString(2),
			Status = row.GetBoolean(3)
		};
	}
}

Creating an object with extensions:

public Jeep GetJeepByID(int id)
{
	Database db = DatabaseFactory.CreateDatabase();
	DbCommand cmd = db.GetStoredProcCommand(StoredProcedures.GetJeepByID, id);
	Jeep jeep = db.CreateObject(cmd, GenerateJeepFromReader);
	return jeep;
}

private Jeep GenerateJeepFromReader(IDataReader reader)
{
	Jeep jeep = null;
	if (reader.Read())
	{
		jeep = new Jeep()
		{
			ID = reader.GetInt32(Fields.JeepID),
			Name = reader.GetString(Fields.JeepName),
			Description = reader.GetString(Fields.JeepDescription),
			Status = reader.GetBoolean(Fields.JeepStatus),
		};
	}
	return jeep;
}

One more thing to note is that my CreateObject, CreateCollection, and their GetAccessor equivalents have my customized exception handling logic that makes use of the StoredProcedureException. We’ll go through that now.
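To give a sense of how the pieces fit together, here is a rough sketch of how such a CreateObject extension and its delegate could be put together (hypothetical, not the exact code from the attached solution; the StoredProcedureException it throws is covered in the next section):

using System;
using System.Data;
using System.Data.Common;
using System.Text;
using Microsoft.Practices.EnterpriseLibrary.Data;

public delegate T GenerateObjectFromReader<T>(IDataReader reader);

public static class DatabaseExtensions
{
	public static T CreateObject<T>(this Database db, DbCommand cmd, GenerateObjectFromReader<T> gofr)
	{
		try
		{
			using (IDataReader reader = db.ExecuteReader(cmd))
			{
				// The delegate owns the Read() call and the field mapping.
				return gofr(reader);
			}
		}
		catch (Exception ex)
		{
			// Capture the command and its parameters so the log tells the whole story.
			throw new StoredProcedureException(DescribeCommand(cmd), ex);
		}
	}

	private static string DescribeCommand(DbCommand cmd)
	{
		StringBuilder sb = new StringBuilder(cmd.CommandText);
		foreach (DbParameter p in cmd.Parameters)
		{
			sb.AppendFormat(" {0}={1}", p.ParameterName, p.Value);
		}
		return sb.ToString();
	}
}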

Customized and Standardized Exceptions

Logging exceptions is only valuable if your entire system logs exceptions and other messages in a consistent and meaningful manner. If error messages are logged as "ERROR!" or "All bets are off!!!" then you shouldn't bother logging. In the real world, few developers, architects, or support staff have access to production databases, so meaningful and detailed error messages are key to troubleshooting an issue and meeting your SLAs. I created a simple StoredProcedureException that provides the executed (or attempted) command as part of the stack trace.

WARNING: You should never, ever, ever show the stack trace in your application or let your users see the real error messages.
Log the real message and stack trace, then show “Data access exception” to your users. Please!
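To give you an idea of the shape of that exception, here is a minimal sketch (illustrative; the real class ships in the attached solution and does a bit more):

using System;

[Serializable]
public class StoredProcedureException : Exception
{
	private readonly string commandDescription;

	public StoredProcedureException(string commandDescription, Exception innerException)
		: base("An error occurred while executing a stored procedure.", innerException)
	{
		this.commandDescription = commandDescription;
	}

	// Prepend the executed command to the stack trace so it ends up in the log table.
	public override string StackTrace
	{
		get
		{
			return string.Format("[Stored procedure executed: {0}]{1}{2}",
				commandDescription, Environment.NewLine, base.StackTrace);
		}
	}
}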

 

In the attached code samples, you'll see two data access methods that call "ExceptionStoredProcedure", which does nothing other than RAISERROR('This is an exception', 16, 1). With the built-in features, you can expect a SqlException and a stack trace that looks like this:

at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) 
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) 
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() 
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) 
   at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() 
   at System.Data.SqlClient.SqlDataReader.get_MetaData() 
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) 
   at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) 
   at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) 
   at System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) 
   at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteReader(DbCommand command, CommandBehavior cmdBehavior) 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataDatabase.cs:line 460 
   at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteReader(DbCommand command) 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataDatabase.cs:line 846 
   at Microsoft.Practices.EnterpriseLibrary.Data.CommandAccessor`1.d__0.MoveNext() 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataCommandAccessor.cs:line 68 
   at System.Linq.Enumerable.First[TSource](IEnumerable`1 source) 
   at DataAccess.JeepDataAccess.GetJeepByIDShowingException(Int32 id) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsDataAccessJeepDataAccess.cs:line 58 
   at Client.Program.TestExceptionGetWithEntLib5Only() 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 58 
   at Client.Program.Main(String[] args) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 22

With my extensions, you can expect a StoredProcedureException that includes the text of the full stored procedure being executed at the time. This has saved me countless times as my log table stores the full stack trace and I can reproduce exactly what happened without guessing. The InnerException of the StoredProcedureException will be the same SqlException seen above. The customized stack trace will look like this:

[Stored procedure executed: ExceptionStoredProcedure @RETURN_VALUE=-6, @JeepID=1]
   at Soalutions.Utilities.Data.DatabaseExtensions.CreateObject[T](Database db, DbCommand cmd, GenerateObjectFromReader`1 gofr) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsEntLibExtensionsDatabaseExtensions.cs:line 49
   at DataAccess.JeepDataAccess.GetJeepByIDShowingExceptionWithExtensions(Int32 id) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsDataAccessJeepDataAccess.cs:line 65
   at Client.Program.TestExceptionGetWithExtensions() 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 66
   at Client.Program.Main(String[] args) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 23

So that’s really it. There is some other hidden goodness in there, but it’s not really worth talking about in this post.

Download sample solution: EntLibExtensions.zip – 140 KB (143,360 bytes)

Dec 07, 2010

In a previous post about my extensions for Enterprise Library (pre-version 5), there was quite a bit of customized logic to create custom entities from a result set. Enterprise Library 5 now takes care of almost all of my customizations with the advent of accessors, row mappers, result set mappers, and parameter mappers. In this post I'll show a few different ways to use out-of-the-box Enterprise Library 5 features to access data. In Part 2, I'll also show a few of my own extensions that simply extend Enterprise Library and reduce repetitive code in my data access layer.

Out-of-the-box Features

The simplest scenario exists when your database queries bring back results with column names exactly matching the property names. This is by far the easiest code to write with Enterprise Library, and it requires far less code than all previous versions. Here is a sample showing the default mapping of input parameters and result set columns/values using the new Database extension method ExecuteSprocAccessor. You simply pass in the stored procedure name and the parameters, and it returns an IEnumerable of your custom entity (in this case, a Jeep object).

public Jeep GetJeepByID(int id)
{
    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>("GetJeepByID", id);
    return jeeps.First();
}

You can only use this method if all public properties of the custom entity can be mapped to a result set column/value. If any public property values cannot be mapped, you will receive a System.InvalidOperationException stating that the column was not found on the IDataRecord being evaluated. If your parameter or result set mapping becomes more complicated, you can specify a parameter mapper, row mapper, result set mapper, or a combination thereof to customize how your procedure is called, and how the results are interpreted. Here is an example of a custom parameter mapper and row mapper used to replicate the default mapping performed in the first example:

internal class JeepParameterMapper : IParameterMapper
{
    public void AssignParameters(DbCommand command, object[] parameterValues)
    {
        DbParameter parameter = command.CreateParameter();
        parameter.ParameterName = "@JeepID";
        parameter.Value = parameterValues[0];
        command.Parameters.Add(parameter);
    }
}

internal class JeepRowMapper : IRowMapper<Jeep>
{
    public Jeep MapRow(System.Data.IDataRecord row)
    {
        return new Jeep()
        {
            ID = row.GetInt32(0),
            Name = row.GetString(1),
            Description = row.GetString(2),
            Status = row.GetBoolean(3)
        };
    }
}

Below you will see the same task as in the first example, but this time performed with our custom mappers.

public Jeep GetJeepByIDWithMappers(int id)
{
    IParameterMapper jeepParameterMapper = new JeepParameterMapper();
    IRowMapper<Jeep> jeepRowMapper = new JeepRowMapper();

    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>("GetJeepByID", jeepParameterMapper, jeepRowMapper, id);
    return jeeps.First();
}

ResultSetMappers can be used to map more complex result sets to custom entities with deeper object graphs. Consider a stored procedure that returns multiple result sets similar to that seen in the following image. The first result set contains the custom entity details, and the second result set is some collection of child objects. In this case, we see an article with a child collection of article images.

[Image: article_resultset]

You would have a hard time building up your custom entity without using an IDataReader and iterating through the result sets with .NextResult. ResultSetMappers allow you to code for this scenario. Below we’ll create a custom result set mapper for articles that will map all of the relevant result sets to the Article object.

internal class ArticleResultSetMapper : IResultSetMapper<Article>
{
    public IEnumerable<Article> MapSet(System.Data.IDataReader reader)
    {
        Dictionary<int, Article> articles = new Dictionary<int, Article>();

        Article article;
        ArticleImage articleImage;
        while (reader.Read())
        {
            article = new Article
            {
                ID = reader.GetInt32(0),
                Title = reader.GetString(1),
                Description = reader.GetString(2),
                Images = new Collection<ArticleImage>()
            };
            articles.Add(article.ID, article);
        }
        if (reader.NextResult())
        {
            while (reader.Read())
            {
                int articleID = reader.GetInt32(0);
                if (articles.ContainsKey(articleID))
                {
                    articleImage = new ArticleImage
                    {
                        DisplayOrder = reader.GetInt32(1),
                        Url = reader.GetString(2),
                        Caption = reader.GetString(3)
                    };
                    articles[articleID].Images.Add(articleImage);
                }
            }
        }

        return articles.Select(a => a.Value);
    }
}

Below you will see the code used to create a new IEnumerable<Article> using our ArticleResultSetMapper:

public Article GetArticleByID(int id)
{
    ArticleResultSetMapper articleResultSetMapper = new ArticleResultSetMapper();

    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Article> articles = db.ExecuteSprocAccessor<Article>("GetArticleByID", articleResultSetMapper, id);
    return articles.First();
}

As you can probably tell, Enterprise Library 5 gives you more power and control over the mapping and generation of your custom entities. The previous version of my Enterprise Library extensions focused primarily on performing just the types of mappings that are now built into the product. After seeing just a few examples, you should be ready to jump into Enterprise Library 5 Data Access head first. In the next post, we'll walk through usage scenarios for a few of my Enterprise Library extensions that make these routine tasks easier to read, maintain, and train others on.

Jul 27, 2010

In .NET 1.1, I tried the original MS Data Access Application Block's SqlHelper (you can still download it here). It was great for most of the common uses, but was lacking in some areas. The consuming code looked sloppy and encouraged blind faith that database objects never changed. It also didn't support transactions as I would have liked, and didn't support my obsession with custom entities. I started out writing an extension library that wrapped SqlHelper, but wrapping the ADO.NET wrapper (SqlHelper) felt very wrong. I ended up writing my own version of SqlHelper called SqlHelper (nice name, eh?). You see, at this time I was getting over a bad relationship with a series of ORM products that had a negative effect on my productivity. I decided to revolt with good ol' fashioned data access methods that have never let us down.

The only thing worse than my ORM experience was the disgusting over-use of DataSet and DataTable. For my dollar, DataReader is where it’s at. I agree that using the reader is slightly more dangerous in the hands of an inexperienced or inattentive developer (did you know you have to close the reader when you’re done with it?). Nothing can compare with the speed and flexibility of the reader, which is why DataSet and DataAdapter use it at their core. If you are working with custom entities, instead of DataSets and DataTables, you would be crazy to not use the DataReader.

My SqlHelper worked in conjunction with my DataAccessLayer class, which defined a few delegates that made reader-to-object mapping a simple task. Once the mapping methods were written to be used with the delegates, which returned object or System.Collections.CollectionBase because we did not yet have generics (can you imagine?), you simply called the SqlHelper to do all of the hard work. SqlHelper did not implement all of the craziness that the original version contained. It was a short 450 lines of code that did nothing but access data in a safe and reliable way. In the example below, we have the GenerateDocumentFromReader method that is used by the GenerateObjectFromReader delegate. When SqlHelper.ExecuteReaderCmd is called, the delegate is passed in to map the reader results to my object, in this case a Document.

// Object generation method
private static object GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             (int)returnData["DocumentId"],
             (byte[])returnData["DocumentBinary"],
             returnData["FileName"].ToString(),
             returnData["Description"].ToString(),
             returnData["ContentType"].ToString(),
             (int)returnData["FileSize"],
             returnData["MD5Sum"].ToString(),
             (bool) returnData["EnabledInd"],
             (int)returnData["CreatorEmpId"],
             Convert.ToDateTime(returnData["CreateDt"]),
             (int)returnData["LastUpdateEmpId"],
             Convert.ToDateTime(returnData["LastUpdateDt"]));
     }
     return document;
}
public static Document GetDocumentByDocumentId(int documentId)
{
     SqlCommand sqlCmd = new SqlCommand();
     SqlHelper.SetCommandArguments(sqlCmd, CommandType.StoredProcedure, "usp_Document_GetDocumentByDocumentId");
     SqlHelper.AddParameterToSqlCommand(sqlCmd, "@DocumentId", SqlDbType.Int, 0, ParameterDirection.Input, documentId);
     DataAccessLayer.GenerateObjectFromReader gofr = new DataAccessLayer.GenerateObjectFromReader(GenerateDocumentFromReader);
     Document document = SqlHelper.ExecuteReaderCmd(sqlCmd, gofr) as Document;
     return document;
}

This worked wonderfully for years. After converting, I couldn’t imagine a project that used ORM, DataSets, or DataTables again. I’ve been on many 1.1 projects since writing my SqlHelper in 2004, and I have successfully converted them all. In early 2006, MS graced us with .NET 2.0. Generics, System.Transactions, and partial classes changed my life. In my first few exposures to generics, like Vinay “the Generic Guy” Ahuja’s 2005 Jax Code Camp presentation and Juval “My Hero” Lowy’s MSDN article “An Introduction to Generics”, I listened/read and pondered the millions of uses of generics. I adapted my SqlHelper heavily to use these new technologies and morphed it into something else that closely represented the newest version of the DAAB, Enterprise Library 3.

By this point, I wanted to convert to Enterprise Library. It was far better than the simple SqlHelper. It had better transaction support, though I don't know if that included System.Transactions. I could have put my object generation extensions on top of it, and it would have worked well for years. On home projects I had already converted to EntLib. At work I was not so lucky. The deep stack trace when something went wrong scared everyone, and that is still a fear for those starting out with EntLib today. To ease the fears, I created my replacement for SqlHelper: the Database class.

I used a lot of the same naming conventions as Enterprise Library. In fact, much of the consuming code was nearly identical (except for the fact that it did not implement the provider pattern and worked only with SQL Server). This was in anticipation of a quick adoption of Enterprise Library 3 in the workplace. Kind of a “see? not so bad” move on my part. Just like EntLib, you created a Database class using the DatabaseFactory that used your default connection string key. Commands and parameters were created and added with methods off of the Database class. Aside from the SqlCommand/DbCommand, everything looked and felt the same, but came in a small file with only 490 lines of code instead of 5 or more projects with 490 files. Using it felt the same, too. Only my object/collection generation extensions looked different from the standard reader, scalar, dataset routines. Below is the same code from above using the Database class and related classes to create a Document from a reader.

// Object generation method
private static Document GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             GetIntFromReader(returnData, "DocumentId"),
             GetIntFromReader(returnData, "DocumentTypeId"),
             GetStringFromReader(returnData, "DocumentTypeName"),
             GetByteArrayFromReader(returnData, "DocumentBinary"),
             GetStringFromReader(returnData, "FileName"),
             GetStringFromReader(returnData, "Description"),
             GetStringFromReader(returnData, "ContentType"),
             GetIntFromReader(returnData, "FileSize"),
             GetStringFromReader(returnData, "MD5Sum"),
             GetStringFromReader(returnData, "CreatorEmpID"),
             GetDateTimeFromReader(returnData, "CreateDt"),
             GetStringFromReader(returnData, "LastUpdateEmpID"),
             GetDateTimeFromReader(returnData, "LastUpdateDt"));
     }
     return document;
}
public static Document GetDocumentByDocumentId(int documentId)
{
     Database db = DatabaseFactory.CreateDatabase(AppSettings.ConnectionStringKey);
     SqlCommand sqlCmd = db.GetStoredProcCommand("usp_Document_GetDocumentByDocumentId");
     db.AddInParameter(sqlCmd, "DocumentId", SqlDbType.Int, documentId);
     GenerateObjectFromReader<Document> gofr = new GenerateObjectFromReader<Document>(GenerateDocumentFromReader);
     Document document = CreateObjectFromDatabase<Document>(db, sqlCmd, gofr);
     return document;
}

This, too, worked great for years. Other than a brief period in 2007 when I tried to wrap all of my data access code with WCF services, .NET 3.0 came and went with no changes to my data access methodology. In late 2007, I had lost all love of my SqlHelper and my Database/DataAccessLayer classes. With .NET 3.5 and Enterprise Library 4.0, I no longer felt the need to roll my own. .NET now had extension methods for me to extend Enterprise Library however I pleased. Enterprise Library supported System.Transactions making its use a dream if behind a WCF service that allowed transaction flow. With a succinct 190 lines of extension code, I had it made in the shade with Enterprise Library 4.0. In fact, I haven’t used anything since.

The consuming code was almost exactly the same. You’ll notice the SqlCommand has changed to DbCommand. The SqlDbType has changed to DbType. Other than that, it feels and works the same.

// Object generation method
private static Document GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             returnData.GetInt32("DocumentId"),
             returnData.GetInt32("DocumentTypeId"),
             returnData.GetString("DocumentTypeName"),
             returnData.GetByteArray("DocumentBinary"),
             returnData.GetString("FileName"),
             returnData.GetString("Description"),
             returnData.GetString("ContentType"),
             returnData.GetInt32("FileSize"),
             returnData.GetString("MD5Sum"),
             returnData.GetString("CreatorEmpID"),
             returnData.GetDateTime("CreateDt"),
             returnData.GetString("LastUpdateEmpID"),
             returnData.GetDateTime("LastUpdateDt"));
     }
     return document;
}
public static Document GetDocumentByDocumentID(int documentId)
{
     Database db = DatabaseFactory.CreateDatabase();
     DbCommand cmd = db.GetStoredProcCommand("usp_Document_GetDocumentByDocumentId");
     db.AddInParameter(cmd, "DocumentID", DbType.Int32, documentId);
     GenerateObjectFromReader<Document> gofr = new GenerateObjectFromReader<Document>(GenerateDocumentFromReader);
     Document document = db.CreateObject<Document>(cmd, gofr);
     return document;
}

With a full suite of unit test projects available for download with the Enterprise Library source files, the fear should be abated for the remaining holdouts. Getting started is as easy as including two DLL references, and adding 5 lines of config. You can’t beat that!

I downloaded Enterprise Library 5 last week. I’ve been making use of new features such as result set mapping (eliminating the need for my object generation extensions), parameter mapping, and accessors that bring them all together. There’s a bunch of inversion of control features in place as well. I think I’ll be quite comfortable in my new EntLib5 home.

Mar 16, 2010

In a previous post, I discussed solutions to the dreaded “The flowed transaction could not be unmarshaled” error commonly experienced when using MSDTC transactions with WCF, SQL, TxF, etc. I have once again experienced the un-trusted domain scenario, and can now report with certainty that adding hosts file entries on both machines will correct the problem. Testing this solution with DTCPing.exe between the two machines proves that making only the hosts file change acquaints the client and server and allows distributed transactions to occur.
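For clarity, the entry on each machine simply maps the other machine's computer name to its IP address. The names and addresses below are hypothetical; use your own:

# hosts file on the client machine (C:\Windows\System32\drivers\etc\hosts)
10.0.0.25     DBSERVER01

# hosts file on the database server
10.0.0.50     APPCLIENT01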

You will find many non-solutions in blog and forum posts. Adding the hosts file entries or the equivalent domain redirects are the only solutions when working with two machines in disparate, un-trusted domains. Some of the non-solutions you'll find go so far as to suggest changing your SQL connection string to prevent enlistment in the current (ambient) transaction. That is not quite a complete solution, as your first rollback unit test will fail.

Dec 05, 2008

Unless you are working on an extremely simple or read-only application, transactions are a must. Using the System.Transactions namespace is the easiest and most efficient way to maintain system consistency when dealing with multiple calls or multiple resources. Although System.Transactions arrived with .NET 2.0 in the 2005 product, it is still a relatively unknown part of the framework. System.Transactions was designed by the Indigo team in preparation for WCF. It is not compatible with Win98 or WinME, but most people are incompatible with Win98 and WinME, so it works out just fine.

Before System.Transactions, we only had access to System.Data.SqlClient.SqlTransaction or a true SQL transaction using BEGIN/ROLLBACK/COMMIT TRAN. Using SQL transactions, you are stuck with only being able to update DB records as part of your transaction. If you wanted to change a cached value in your app in addition to the SQL updates in the same transaction, you would be out of luck. This approach also required a lot of transaction code in your stored procedures; writing stored procedures that can be called independently or as part of a transaction made for very messy stored procedures, and it often led to multiple stored procedures that served the same purpose.

Using the SqlTransaction class was also messy. The most important restriction is that you need to have all database calls inside the same SqlConnection. This does not work well for a well-designed N-tier application. The other problem is that you need to handle your own non-DB transaction logic inside the same SqlTransaction and conditionally commit/rollback as necessary. This all tends to lead to several try-catch statements within one SqlTransaction. Handling the plumbing to manually add each SqlCommand to the transaction gets old quickly too.

Using SqlTransaction

string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
using (SqlConnection con = new SqlConnection(connectionString))
{
    SqlTransaction tran = null;
    try
    {
        con.Open();
        tran = con.BeginTransaction();
        using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
        {
            cmd.Transaction = tran;
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("Message", "Testing 1");
            cmd.Parameters.AddWithValue("UserID", 5150);
            cmd.ExecuteNonQuery();
        }

        using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
        {
            cmd.Transaction = tran;
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("Message", "Testing 2");
            cmd.Parameters.AddWithValue("UserID", 5150);
            cmd.ExecuteNonQuery();
        }

        tran.Commit();
    }
    catch
    {
        if (tran != null) tran.Rollback();
    }
    finally
    {
        con.Close();
    }
}

System.Transactions liberated us from the mundane SqlClient code and repetitive try-catches. Simply wrapping your old ADO.NET code in a using (TransactionScope) { } block is all you need to do. You will typically add a scope.Complete() statement as the last line in the TransactionScope using block. Any exception thrown before that point will break out of the scope, implicitly aborting the transaction. Much better.
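In other words, the basic shape is simply this (a sketch, with your existing ADO.NET code in the middle):

using (TransactionScope scope = new TransactionScope())
{
    // ... the same ADO.NET calls you already had ...

    // Last line in the block; any exception thrown before this point
    // leaves the scope without completing, and the transaction aborts.
    scope.Complete();
}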

System.Transactions uses the LTM (Lightweight Transaction Manager) when dealing with a single resource or machine. The transaction is automatically promoted to MSDTC (Microsoft Distributed Transaction Coordinator) when another resource is enlisted in the transaction. A lot of people struggle with MSDTC because it is difficult to set up, requires special firewall considerations, and doesn't really work well for smart client applications since you have to install MSDTC on every client machine.

I’ll show one transaction performed three different ways and show what happens with the LTM and MSDTC for each of them. I will also demonstrate an excellent reason to migrate to Enterprise Library.

1) Executing two ADO.NET SqlCommands in different SqlConnections

using (TransactionScope scope = new TransactionScope())
{
    string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 1");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 2");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:2
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:2
Distributed Transaction ID: 1fad8108-ddae-496a-a7da-ce92df175e40

You’ll notice that the first command creates a transaction using LTM as indicated by the Local Transaction ID. After the second command is executed, the transaction is promoted to DTC as indicated by the Distributed Transaction ID. This is expected because there are two distinct SqlConnections. Even though the connection string is the same, TransactionScope treats these ADO.NET objects as unique resources.

This has additional implications when connection pooling comes into play. After I close the first connection, it is returned to the pool and is available for use. If this connection is requested for use, it will no longer be available to commit or abort this transaction, and you will see the dreaded MSDTC error “Communication with the underlying transaction manager has failed.”

2) Executing two ADO.NET SqlCommands in the same SqlConnection

string connectionString = ConfigurationManager.ConnectionStrings["Testing"].ConnectionString;
using (TransactionScope scope = new TransactionScope())
using (SqlConnection con = new SqlConnection(connectionString))
{
    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 1");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    using (SqlCommand cmd = new SqlCommand("usp_ErrorLog_Insert", con))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("Message", "Testing 2");
        cmd.Parameters.AddWithValue("UserID", 5150);
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }
        finally
        {
            con.Close();
        }
    }

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:1
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: e90f47f4-df80-496b-a9c0-0c45b2f452c4:1
Distributed Transaction ID: becac9c9-e15f-4370-9f73-7f369665bed7

This is not expected because both commands are part of the same connection. Of course, I am closing the connection to simulate an N-tier app where the data access layer is maintaining its own SQL access, opening and closing its connection as it should. If I did not close the connection, you would not see a Distributed Transaction ID after the second command.

3) Executing two Enterprise Library commands

using (TransactionScope scope = new TransactionScope())
{
    Database db = DatabaseFactory.CreateDatabase("Testing");
    DbCommand cmd = db.GetStoredProcCommand("usp_ErrorLog_Insert");
    db.AddInParameter(cmd, "Message", System.Data.DbType.String, "Testing 1");
    db.AddInParameter(cmd, "UserID", System.Data.DbType.Int32, 5150);
    db.ExecuteNonQuery(cmd);

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    Database db1 = DatabaseFactory.CreateDatabase("Testing1");
    DbCommand cmd1 = db.GetStoredProcCommand("usp_ErrorLog_Insert");
    db1.AddInParameter(cmd1, "Message", System.Data.DbType.String, "Testing 2");
    db1.AddInParameter(cmd1, "UserID", System.Data.DbType.Int32, 5150);
    db1.ExecuteNonQuery(cmd1);

    Console.WriteLine("Local Transaction ID: {0}",
        Transaction.Current.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Distributed Transaction ID: {0}",
        Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());

    scope.Complete();
}

This writes the following to the command line:

Local Transaction ID: 6737b756-2d5b-4eff-902d-15f9ccd5c26f:3
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000
Local Transaction ID: 6737b756-2d5b-4eff-902d-15f9ccd5c26f:3
Distributed Transaction ID: 00000000-0000-0000-0000-000000000000

Whoa! How cool is that? No DTC promotion. Enterprise Library is intelligently deciding to keep the connection open when it is part of the transaction. This will save a lot of wasted time as the promotion to DTC adds a noticeable delay. If I wasn’t using Enterprise Library already, I’d switch now.

Useful links:

Sep 07, 2008

I really need to read up on new features when a major release comes out. Just a few weeks ago I learned of a great “new” SQL 2005 function… ROW_NUMBER(). Just in time since SQL 2008 is already out.

For me, this function means a lot fewer temp tables. I would typically create a temp table with an ID INT IDENTITY(1,1) column to create a DisplayOrder, BatchID, etc. used to group or join on later. Books Online describes the function as "Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition." The syntax is simple, and looks like:

ROW_NUMBER() OVER (ORDER BY ID DESC)

For this example, the data I want to bring back with a DisplayOrder column looks like:

[Image: pers_subs_data]

Without ROW_NUMBER(), using a table variable with an identity column:

DECLARE @Subs TABLE (DisplayOrder INT IDENTITY(1,1), [Address] VARCHAR(100), Operation VARCHAR(50), [Contract] VARCHAR(50))

INSERT INTO @Subs ([Address], Operation, [Contract])
SELECT [Address]
    , Operation
    , [Contract]
FROM PersistentSubscribers
WHERE Operation = 'OnEvent2'
ORDER BY ID DESC

SELECT * FROM @Subs

With ROW_NUMBER(), look how beautiful:

SELECT DisplayOrder = ROW_NUMBER() OVER (ORDER BY ID DESC)
    , [Address]
    , Operation
    , [Contract]
FROM PersistentSubscribers
WHERE Operation = 'OnEvent2'

The results from both methods look like:

[Image: row_number_results]

Jun 22, 2008

A few months after SQL 2005 was released and hit the production servers, some people started experiencing odd behavior in their stored procedures. Simple stored procedures that normally return in 0 seconds would take upwards of a minute to return. Even more strange was the fact that the same query, outside of a stored procedure, would still return in 0 seconds.


It never affected me personally… until today. Three years late to the party. It’s funny how much more interested I am in the causes and solutions for this apparent problem when it affects me. “Parameter Sniffing” is the term Microsoft uses to describe the feature that causes this odd behavior. While it appeared as an issue when I encountered it today, I found that the feature is not only well-intentioned but quite useful.


The execution plan is generated and cached the first time your stored procedure is called. When the execution plan is being created, SQL Server reads the input parameters and uses them to optimize the execution plan for those parameters. This is called “parameter sniffing.” If the input parameters used in the first call to the stored procedure are atypical for the overall use of the stored procedure, a less than ideal execution plan will be cached for all subsequent calls.


Simply dropping and recompiling the stored procedure does not seem to affect the cached execution plan. Updating statistics on the tables used in the stored procedure will cause the execution plan to be regenerated on the next call of the stored procedure. However, if the same or similar atypical parameters are used on the first execution of the stored procedure, an equally sub-optimal execution plan will be cached.


You can turn off parameter sniffing. This is accomplished by assigning the input parameter values to local variables inside the stored procedure and then using the local variables within the stored procedure. When the execution plan is created, SQL Server will look at the table statistics to optimize the query for the “average” use. It does this by looking at the tables used in the query and analyzing row counts, etc. to find a reasonable plan that will likely suit a majority of situations.


My stored procedure was bringing back multiple resultsets to be used to create a hierarchical structure in code. It works essentially like the following:


CREATE PROCEDURE [dbo].[usp_Order_GetOrderDetails]
(
   @StartOrderId INT,
   @EndOrderId INT
)
AS
BEGIN

   SELECT *
   FROM [Order]
   WHERE OrderId BETWEEN @StartOrderId AND @EndOrderId
 
   SELECT *
   FROM OrderLineItem
   WHERE OrderId BETWEEN @StartOrderId AND @EndOrderId
END


I was testing the stored procedure for a full day using the same ID for @StartOrderId and @EndOrderId. Since the intended use of this stored procedure is almost always @EndOrderId = @StartOrderId + 1000, this makes a big difference when calculating the estimated number of rows returned. I forced SQL Server to assume that my execution plan should be based on an ID range of 1 instead of 1000. Turning off parameter sniffing lessens these effects.


To turn off parameter sniffing, it would look like this:


CREATE PROCEDURE [dbo].[usp_Order_GetOrderDetails]
(
   @StartOrderId INT,
   @EndOrderId INT
)
AS
BEGIN
   DECLARE @Start INT
   DECLARE @End INT
   SET @Start = @StartOrderId
   SET @End = @EndOrderId
 

   SELECT *
   FROM [Order]
   WHERE OrderId BETWEEN @Start AND @End
 
   SELECT *
   FROM OrderLineItem
   WHERE OrderId BETWEEN @Start AND @End
END


This immediately improved the performance of my stored procedure. The time to complete dropped from ~2 minutes to ~2 seconds for my typical 1000-ID range (I know 2 seconds is a lot, but these tables have millions and millions of rows). But only one piece of code in the application calls this stored procedure, and 99 out of 100 times it will have a range of 1000 IDs. Why would I want SQL Server to guess how many Orders I will typically bring back when I know the exact number?


I should have the optimal execution plan if I update statistics on Order and OrderLineItem, and then call usp_Order_GetOrderDetails 1, 1000 after I compile this stored procedure. This sounds like a lot of work to me, and I did not notice any performance boost by doing this. I chose to leave parameter sniffing off.


The only drawbacks to turning off parameter sniffing are the weird-looking SQL and the inevitable questions during code review about the crazy input-parameter-to-variable mapping. But when you school the doubters on the causes and effects of parameter sniffing, it will put another notch in your guru stick.


From what I have read, this was not a new feature in SQL 2005. However, I can't find any mention of it in the SQL 2000 Books Online, and this feature never showed its face in SQL 2000.