Sep 15, 2014
 

I had a great time at Jax Code Impact over the weekend. Many thanks to Bayer and Brandy for putting together a free, enjoyable, and educational Saturday conference. For a first-year conference, it was impressive to see more than 300 people registered for six tracks of Microsoft-focused presentations. Kevin Wolf was a big hit with his quadcopters, 3-D printer, and Oculus Rift. I personally enjoyed two separate talks on Redis by Henry Lee and Steve Danielson.

I talked about the Windows Azure Service Bus offerings, and hit on topics such as Relays, Queues, Topics, WCF bindings, Pub/Sub, AMQP, and of course some Cloud Design Patterns. 

Here are the slides and code demos from my talk:

Code Impact Service Bus Presentation – Slides

Code Impact Service Bus Presentation – Demo Code

Mar 31, 2014
 

I had a great time at Global Windows Azure Bootcamp (GWAB) in Jacksonville, FL. I got to meet a bunch of cool people and discuss Azure topics all day. Free food, Bold Bean coffee, and beer helped to create the perfect geekfest atmosphere. I can’t wait for the next Azure event!

I talked about Windows Azure Data Services, and hit on topics such as Tables, Blobs, Queues, Windows Azure SQL Database, and some Cloud Design Patterns. The links below are the slides and code demos from my talk.

GWAB Azure Storage Presentation – Slides

GWAB Azure Storage Presentation – Demo Code

Feb 16, 2014
 

Before Windows Azure Storage Client Library (SCL) 2.1, any entity that we wanted to put in Azure Table Storage (ATS) had to derive from the TableServiceEntity class. For me, that meant maintaining an ATS-specific entity just to get the PartitionKey (PK), RowKey (RK), Timestamp, and ETag. I also had to maintain a DTO or POCO for the rest of the application to use, along with the logic to marshal values between the two for common CRUD work.

In the RTM announcement for Windows Azure Storage Client Library 2.1, Microsoft announced that they are now exposing the serialization/deserialization logic for any CLR type. This makes it possible for us to store and retrieve entities without maintaining two entity types: the DTO and another class that derives from TableEntity. It also makes it possible to store entities in ATS for which you do not own or maintain the code. We still have the same data type restrictions (i.e., the property types allowed by the OData protocol specification), which limits how many of those “not owned/maintained” classes can live in ATS.

In the old days of 2013…

Back in my day, we had to use TableServiceEntity. We’d create generic TableServiceDataModel, TableServiceContext, and TableServiceDataSource classes that would get the connection established and serve up table entities as IQueryables. Inserts, updates, and deletes were staged against the context, followed by a call to .SaveChanges(). It had an Entity Framework feel to it, which gave us a warm, fuzzy feeling that we weren’t clueless.

An Azure adapter layer was full of TableServiceDataModel classes and the necessary infrastructure to interact with ATS:

public class ProductCommentModel : TableServiceDataModel
{
	public const string PartitionKeyName = "ProductComment";

	public ProductCommentModel()
		: base(PartitionKeyName, Guid.NewGuid().ToString())
	{ }

	public string ProductId { get; set; }
	public string Commenter { get; set; }
	public string Comment { get; set; }
}

public class TableServiceDataModel : TableServiceEntity
{
	public TableServiceDataModel(string partitionKey, string rowKey)
		: base(partitionKey, rowKey)
	{ }
}

public class TableServiceContext<TModel> : TableServiceContext where TModel : TableServiceEntity
{
	public TableServiceContext(string tableName, string baseAddress, StorageCredentials credentials)
		: base(baseAddress, credentials)
	{
		TableName = tableName;
	}

	public string TableName { get; set; }

	public IQueryable<TModel> Table
	{
		get
		{
			return this.CreateQuery<TModel>(TableName);
		}
	}
}

public class TableServiceDataSource<TModel> where TModel : TableServiceEntity
{
	private string m_TableName;
	private TableServiceContext<TModel> m_ServiceContext;
	private CloudStorageAccount m_StorageAccount;

	protected CloudStorageAccount StorageAccount
	{
		get
		{
			if (m_StorageAccount == null)
			{
				m_StorageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
			}
			return m_StorageAccount;
		}
	}

	protected TableServiceContext<TModel> ServiceContext
	{
		get
		{
			if (m_ServiceContext == null)
			{
				m_ServiceContext = new TableServiceContext<TModel>(m_TableName, StorageAccount.TableEndpoint.ToString(), StorageAccount.Credentials);
			}
			return m_ServiceContext;
		}
	}

	public TableServiceDataSource(string tableName)
	{
		m_TableName = tableName;
		StorageAccount.CreateCloudTableClient().CreateTableIfNotExist(m_TableName);
	}

	public IEnumerable<TModel> Select()
	{
		var results = from c in ServiceContext.Table
						select c;

		var query = results.AsTableServiceQuery<TModel>();
		var queryResults = query.Execute();

		return queryResults;
	}

	public IEnumerable<TModel> Select(Expression<Func<TModel, bool>> predicate)
	{
		CloudTableQuery<TModel> query = ServiceContext
			.CreateQuery<TModel>(ServiceContext.TableName)
			.Where(predicate)
			.AsTableServiceQuery<TModel>();

		var queryResults = query.Execute();
		return queryResults;
	}

	public void Delete(TModel itemToDelete)
	{
		ServiceContext.DeleteObject(itemToDelete);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate, SaveChangesOptions saveOptions)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges(saveOptions);
	}

	public void Insert(TModel newItem)
	{
		ServiceContext.AddObject(m_TableName, newItem);
		ServiceContext.SaveChanges();
	}

	public void InsertToBatch(TModel newitem)
	{
		ServiceContext.AddObject(m_TableName, newitem);
	}

	public void SaveBatch()
	{
		ServiceContext.SaveChangesWithRetries(SaveChangesOptions.Batch);
	}
}

The data access layer ended up looking much cleaner than the Azure documentation samples… something like this:

public void AddComment(ProductCommentModel model)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	dataSource.Insert(model);
}

public IEnumerable<ProductCommentModel> GetComments(string productId)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comments = dataSource.Select()
		.Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.ProductId == productId)
		.OrderByDescending(comment => comment.Timestamp);
	return comments;
}

public void DeleteComment(string commentid)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comment = dataSource.Select().Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.RowKey == commentid);
	if (comment.Count() > 0)
	{
		dataSource.Delete(comment.First());
	}
}

With that adapter layer, we thought we had it made. The data access layer looked cleaner than most SQL implementations. Still, we had too much Azure code and terminology living too far away from the actual Azure calls. It was a small price to pay, I suppose.

Enter the EntityAdapter

The RTM announcement showed an example of what is possible with access to the serialization/deserialization logic. Their sample showed a class named EntityAdapter. Rory Primrose has made some great improvements to EntityAdapter. I took this same class and made just a few modifications to support my use cases. Primarily, the examples had no support for ETags, which are critically important in some scenarios. Here is my current version of EntityAdapter:

internal abstract class EntityAdapter<T> : ITableEntity where T : class, new()
{
    private string m_PartitionKey;

    private string m_RowKey;

    private string m_ETag;

    private T m_Value;

    protected EntityAdapter()
        : this(new T())
    { }

    protected EntityAdapter(T value)
    {
        if (value == null)
        {
            throw new ArgumentNullException("value", "EntityAdapter cannot be constructed from a null value");
        }

        m_Value = value;
    }

    public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        m_Value = new T();

        TableEntity.ReadUserObject(m_Value, properties, operationContext);

        ReadValues(properties, operationContext);
    }

    public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        var properties = TableEntity.WriteUserObject(Value, operationContext);

        WriteValues(properties, operationContext);

        return properties;
    }

    protected abstract string BuildPartitionKey();

    protected abstract string BuildRowKey();

    protected virtual void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void SetETagValue(string eTag)
    { }

    public string ETag
    {
        get
        {
            return this.m_ETag;
        }
        set
        {
            this.m_ETag = value;
            SetETagValue(value);
        }
    }

    public string PartitionKey
    {
        get
        {
            if (m_PartitionKey == null)
            {
                m_PartitionKey = BuildPartitionKey();
            }

            return m_PartitionKey;
        }
        set
        {
            m_PartitionKey = value;
        }
    }

    public string RowKey
    {
        get
        {
            if (m_RowKey == null)
            {
                m_RowKey = BuildRowKey();
            }
            return m_RowKey;
        }
        set
        {
            m_RowKey = value;
        }
    }

    public DateTimeOffset Timestamp { get; set; }

    public T Value
    {
        get
        {
            return m_Value;
        }
    }
}

To use EntityAdapter with a DTO/POCO (e.g. Racer), you write an adapter (e.g. RacerAdapter):

public class Racer
{
    [Display(Name = "Driver")]
    public string Name { get; set; }

    [Display(Name = "Car Number")]
    public string CarNumber { get; set; }

    [Display(Name = "Race")]
    public string RaceName { get; set; }

    public DateTime? DateOfBirth { get; set; }

    [Display(Name = "Last Win")]
    public string LastWin { get; set; }

    public string ETag { get; set; }

    public bool HasWon
    {
        get
        {
            return !String.IsNullOrEmpty(this.LastWin);
        }
    }

    public List<string> Validate()
    {
        List<string> validationErrors = new List<string>();

        //TODO: Write validation logic

        return validationErrors;
    }
}

internal class RacerAdapter : EntityAdapter<Racer>
{
    public RacerAdapter()
    { }

    public RacerAdapter(Racer racer)
        : base(racer)
    {
        this.ETag = racer.ETag;
    }

    protected override string BuildPartitionKey()
    {
        return Value.RaceName;
    }

    protected override string BuildRowKey()
    {
        return Value.CarNumber;
    }

    protected override void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {

        this.Value.RaceName = this.PartitionKey;
        this.Value.CarNumber = this.RowKey;
    }

    protected override void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        properties.Remove("CarNumber");
        properties.Remove("RaceName");
    }

    protected override void SetETagValue(string eTag)
    {
        this.Value.ETag = eTag;
    }
}

Now we have everything we need to make our data access layer simpler and domain-focused instead of table-entity-focused.

// Using TableEntity-derived class requires front-facing layers to deal with partition/row keys instead of domain-specific identifiers
public void AddRacer(RacerEntity racer)
{
    CloudTable table = GetRacerTable();

    TableOperation upsertOperation = TableOperation.InsertOrReplace(racer);
    table.Execute(upsertOperation);
}

// Using a DTO with the EntityAdapter
public void AddRacer(Racer racer)
{
    CloudTable table = GetRacerTable();

    var adapter = new RacerAdapter(racer);
    var upsertOperation = TableOperation.InsertOrReplace(adapter);

    table.Execute(upsertOperation);
}
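
Reading a racer back out stays just as clean. Here is a minimal sketch (assuming the same GetRacerTable() helper used above) that retrieves by the keys the adapter builds, race name and car number, and unwraps the DTO:

public Racer GetRacer(string raceName, string carNumber)
{
    CloudTable table = GetRacerTable();

    // PartitionKey = RaceName, RowKey = CarNumber (see RacerAdapter above)
    TableOperation retrieveOperation = TableOperation.Retrieve<RacerAdapter>(raceName, carNumber);
    TableResult result = table.Execute(retrieveOperation);

    // Result is null when no entity matched the keys
    var adapter = (RacerAdapter)result.Result;
    return adapter == null ? null : adapter.Value;
}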

With or without EntityAdapter, SCL 2.1 gave us TableEntity, TableOperation, etc. that really simplify our code. EntityAdapter is icing on the cake, and really helps to simplify Azure-hosted web APIs.

Dec 27, 2013
 

I learn something every day whether I like it or not. Today’s lesson:

SelectList thinks she’s smarter than you.

Observations

I was working in an MVC4 app, making some forms light up with some custom HtmlHelpers. Everything was dandy until a drop-down didn’t re-populate with the previously selected value after a POST or a fresh GET. That’s funky. The right value was in the database. So I looked at the .cshtml. I had two drop-downs next to each other. I changed the custom HtmlHelpers to PODDLFs (plain old DropDownListFor) and they did the same thing. The one for Suffix “binds” the previously selected value as I’d expect, but the one for Title appears to do nothing.

@Html.DropDownListFor(model => model.Title, Model.SelectLists.PreferredTitles)
@Html.DropDownListFor(model => model.Suffix, Model.SelectLists.Suffixes)

So to be safe, let’s print out the value of Title as a string literal.

Testing: @Model.Title

Yep, works fine. I see “Mr.” just as I’d expect. So I searched for every instance of “.Title” to see if this was happening somewhere else in the app, but there were no other uses in a .cshtml file. What I did find was many instances of @ViewBag.Title being used to set the window and page titles throughout the app. I renamed “Title” to “Prefix” on the model and the fog cleared a little. There was something going on with ViewBag’s Title taking precedence over my model’s Title. To be sure, I undid the renaming operation and changed the impacted view’s ViewBag.Title to “Mr.”, and then “Dr.”. Regardless of the current value of Model.Title, the value of ViewBag.Title was always used to set the selected value.

Analysis

You can build your SelectList and set whatever “selectedValue” you want. DropDownListFor calls SelectInternal (excerpt below) to build the MvcHtmlString. SelectInternal is responsible for binding the appropriate value for the model/property used in the expression of DropDownListFor. When the value is not found with GetModelStateValue, ViewData.Eval is used to get the “selected value”. Deep in the internals of ViewData.Eval, ViewBag takes precedence over your model.

object defaultValue = allowMultiple ? htmlHelper.GetModelStateValue(fullHtmlFieldName, typeof(string[])) : htmlHelper.GetModelStateValue(fullHtmlFieldName, typeof(string));
if ((!flag && (defaultValue == null)) && !string.IsNullOrEmpty(name))
{
    defaultValue = htmlHelper.ViewData.Eval(name);
}
if (defaultValue != null)
{
    selectList = GetSelectListWithDefaultValue(selectList, defaultValue, allowMultiple);
}

So what actually happened was that SelectInternal took my page title and tried to make it the selected value in the drop-down list. Knowing why it does this doesn’t make me any happier. I’d really prefer that DropDownListFor use my model’s value like I told it to. Alas, I didn’t write this code, and it was pretty dumb of me not to recognize the clear naming conflict. So I’ll accept this and move on.

Corrective Action

Clearly the best solution is to use much more descriptive names that don’t clobber each other. Changing ViewBag.Title to ViewBag.PageTitle is the path of least resistance. Simply using “Title” on the model wasn’t very good either; it would be better as “Salutation”, “NamePrefix”, or “PreferredTitle” anyway. These kinds of hidden naming conflicts are sure to stump some people. Remembering this little nugget of the SelectList internals will keep naming conflicts on my mind for some time.
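
To make the fix concrete, here is a hypothetical before/after view snippet (the property and key names are illustrative):

@* Before: ViewData.Eval finds ViewBag.Title (the page title) and uses it as the selected value *@
@Html.DropDownListFor(model => model.Title, Model.SelectLists.PreferredTitles)

@* After: distinct names, so the model value binds as expected *@
@{ ViewBag.PageTitle = "Edit Contact"; }
@Html.DropDownListFor(model => model.Salutation, Model.SelectLists.PreferredTitles)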

Dec 07, 2010
 

In a previous post about my extensions for Enterprise Library pre-version 5, there was quite a bit of customized logic to create custom entities from a result set. Enterprise Library 5 now takes care of almost all of my customizations with the advent of accessors, row mappers, result set mappers, and parameter mappers. In this post, I’ll show a few different ways to use out-of-the-box Enterprise Library 5 features to access data. In Part 2, I’ll show a few of my own extensions that simply extend Enterprise Library and reduce repetitive code in my data access layer.

Out-of-the-box Features

The simplest scenario is when your database queries bring back results with column names exactly matching the property names. This is by far the easiest code to write with Enterprise Library, and it requires far less code than all previous versions. Here is a sample showing the default mapping of input parameters and result set columns/values using the new Database extension method ExecuteSprocAccessor. You simply pass in the stored procedure name and the parameters, and you get back an IEnumerable of your custom entity (in this case, a Jeep object).

public Jeep GetJeepByID(int id)
{
    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>("GetJeepByID", id);
    return jeeps.First();
}

You can only use this method if all public properties of the custom entity can be mapped to a result set column/value. If any public property cannot be mapped, you will receive a System.InvalidOperationException stating that the column was not found on the IDataRecord being evaluated. If your parameter or result set mapping becomes more complicated, you can specify a parameter mapper, row mapper, result set mapper, or a combination thereof to customize how your procedure is called and how the results are interpreted. Here is an example of a custom parameter mapper and row mapper used to replicate the default mapping performed in the first example:

internal class JeepParameterMapper : IParameterMapper
{
    public void AssignParameters(DbCommand command, object[] parameterValues)
    {
        DbParameter parameter = command.CreateParameter();
        parameter.ParameterName = "@JeepID";
        parameter.Value = parameterValues[0];
        command.Parameters.Add(parameter);
    }
}

internal class JeepRowMapper : IRowMapper<Jeep>
{
    public Jeep MapRow(System.Data.IDataRecord row)
    {
        return new Jeep()
        {
            ID = row.GetInt32(0),
            Name = row.GetString(1),
            Description = row.GetString(2),
            Status = row.GetBoolean(3)
        };
    }
}

Below you will see the same task being performed as in the first example, but this time with our custom mappers.

public Jeep GetJeepByIDWithMappers(int id)
{
    IParameterMapper jeepParameterMapper = new JeepParameterMapper();
    IRowMapper<Jeep> jeepRowMapper = new JeepRowMapper();

    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>("GetJeepByID", jeepParameterMapper, jeepRowMapper, id);
    return jeeps.First();
}

ResultSetMappers can be used to map more complex result sets to custom entities with deeper object graphs. Consider a stored procedure that returns multiple result sets, similar to those shown in the following image. The first result set contains the custom entity details, and the second result set is a collection of child objects. In this case, we see an article with a child collection of article images.

[Image: article and article image result sets]

You would have a hard time building up your custom entity without using an IDataReader and iterating through the result sets with .NextResult(). ResultSetMappers allow you to code for this scenario. Below, we’ll create a custom result set mapper for articles that maps all of the relevant result sets to the Article object.

internal class ArticleResultSetMapper : IResultSetMapper<Article>
{
    public IEnumerable<Article> MapSet(System.Data.IDataReader reader)
    {
        Dictionary<int, Article> articles = new Dictionary<int, Article>();

        Article article;
        ArticleImage articleImage;
        while (reader.Read())
        {
            article = new Article
            {
                ID = reader.GetInt32(0),
                Title = reader.GetString(1),
                Description = reader.GetString(2),
                Images = new Collection<ArticleImage>()
            };
            articles.Add(article.ID, article);
        }
        if (reader.NextResult())
        {
            while (reader.Read())
            {
                int articleID = reader.GetInt32(0);
                if (articles.ContainsKey(articleID))
                {
                    articleImage = new ArticleImage
                    {
                        DisplayOrder = reader.GetInt32(1),
                        Url = reader.GetString(2),
                        Caption = reader.GetString(3)
                    };
                    articles[articleID].Images.Add(articleImage);
                }
            }
        }

        return articles.Select(a => a.Value);
    }
}

Below you will see the code used to create a new IEnumerable<Article> using our ArticleResultSetMapper:

public Article GetArticleByID(int id)
{
    ArticleResultSetMapper articleResultSetMapper = new ArticleResultSetMapper();

    Database db = DatabaseFactory.CreateDatabase();
    IEnumerable<Article> articles = db.ExecuteSprocAccessor<Article>("GetArticleByID", articleResultSetMapper, id);
    return articles.First();
}

As you can probably tell, Enterprise Library 5 gives you more power and control over the mapping and generation of your custom entities. The previous version of my Enterprise Library extensions focused primarily on performing just the types of mappings that are now built into the product. After seeing just a few examples, you should be ready to jump into Enterprise Library 5 Data Access head first. In the next post, we’ll walk through usage scenarios for a few of my Enterprise Library extensions that make these routine tasks easier to read, maintain, and teach.

Jul 27, 2010
 

In .NET 1.1, I tried the original MS Data Access Application Block’s SqlHelper (you can still download it here). It was great for most of the common uses, but was lacking in some areas. The consuming code looked sloppy and encouraged blind faith that database objects never changed. It also didn’t support transactions as I would have liked, and didn’t support my obsession with custom entities. I started out writing an extension library that wrapped SqlHelper, but it felt very wrong to wrap the ADO.NET wrapper (SqlHelper). I ended up writing my own version of SqlHelper called SqlHelper (nice name, eh?). You see, at this time I was getting over a bad relationship with a series of ORM products that had a negative effect on my productivity. I decided to revolt with good ol’ fashioned data access methods that have never let us down.

The only thing worse than my ORM experience was the disgusting overuse of DataSet and DataTable. For my dollar, DataReader is where it’s at. I agree that using the reader is slightly more dangerous in the hands of an inexperienced or inattentive developer (did you know you have to close the reader when you’re done with it?). Nothing can compare with the speed and flexibility of the reader, which is why DataSet and DataAdapter use it at their core. If you are working with custom entities instead of DataSets and DataTables, you would be crazy not to use the DataReader.
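
Closing the reader is the classic trap; fencing its lifetime with using blocks takes the danger out of it. A minimal sketch (the connection string, table, and column names are illustrative; requires System.Data and System.Data.SqlClient):

string connectionString = "...";  // illustrative

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT DocumentId, FileName FROM Document", connection))
{
     connection.Open();

     // The using block guarantees the reader is closed even if the mapping
     // code throws; CloseConnection also closes the connection with it.
     using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection))
     {
          while (reader.Read())
          {
               int documentId = (int)reader["DocumentId"];
               string fileName = reader["FileName"].ToString();
               // ... map values onto a custom entity ...
          }
     }
}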

My SqlHelper worked in conjunction with my DataAccessLayer class, which defined a few delegates that made reader-to-object mapping a simple task. The mapping methods returned object or System.Collections.CollectionBase, because we did not yet have generics (can you imagine?). Once the mapping methods were written for use with the delegates, you simply called SqlHelper to do all of the hard work. SqlHelper did not implement all of the craziness that the original version contained. It was a short 450 lines of code that did nothing but access data in a safe and reliable way. In the example below, the GenerateDocumentFromReader method is used by the GenerateObjectFromReader delegate. When SqlHelper.ExecuteReaderCmd is called, the delegate is passed in to map the reader results to my object, in this case a Document.

// Object generation method
private static object GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             (int)returnData["DocumentId"],
             (byte[])returnData["DocumentBinary"],
             returnData["FileName"].ToString(),
             returnData["Description"].ToString(),
             returnData["ContentType"].ToString(),
             (int)returnData["FileSize"],
             returnData["MD5Sum"].ToString(),
             (bool) returnData["EnabledInd"],
             (int)returnData["CreatorEmpId"],
             Convert.ToDateTime(returnData["CreateDt"]),
             (int)returnData["LastUpdateEmpId"],
             Convert.ToDateTime(returnData["LastUpdateDt"]));
     }
     return document;
}
public static Document GetDocumentByDocumentId(int documentId)
{
     SqlCommand sqlCmd = new SqlCommand();
     SqlHelper.SetCommandArguments(sqlCmd, CommandType.StoredProcedure, "usp_Document_GetDocumentByDocumentId");
     SqlHelper.AddParameterToSqlCommand(sqlCmd, "@DocumentId", SqlDbType.Int, 0, ParameterDirection.Input, documentId);
     DataAccessLayer.GenerateObjectFromReader gofr = new DataAccessLayer.GenerateObjectFromReader(GenerateDocumentFromReader);
     Document document = SqlHelper.ExecuteReaderCmd(sqlCmd, gofr) as Document;
     return document;
}

This worked wonderfully for years. After converting, I couldn’t imagine a project that used ORM, DataSets, or DataTables again. I’ve been on many 1.1 projects since writing my SqlHelper in 2004, and I have successfully converted them all. In early 2006, MS graced us with .NET 2.0. Generics, System.Transactions, and partial classes changed my life. In my first few exposures to generics, like Vinay “the Generic Guy” Ahuja’s 2005 Jax Code Camp presentation and Juval “My Hero” Lowy’s MSDN article “An Introduction to Generics”, I listened, read, and pondered the millions of uses of generics. I adapted my SqlHelper heavily to use these new features and morphed it into something that closely resembled the newest version of the DAAB, Enterprise Library 3.

By this point, I wanted to convert to Enterprise Library. It was far better than the simple SqlHelper. It had better transaction support, though I don’t know if that included System.Transactions. I could have put my object generation extensions on top of it, and it would have worked well for years. On home projects I had already converted to EntLib. At work I was not so lucky. The deep stack trace when something went wrong scared everyone, and that is still a fear for those starting out in EntLib today. To ease the fears, I created my replacement for SqlHelper: the Database class.

I used a lot of the same naming conventions as Enterprise Library. In fact, much of the consuming code was nearly identical (except that it did not implement the provider pattern and worked only with SQL Server). This was in anticipation of a quick adoption of Enterprise Library 3 in the workplace. Kind of a “see? not so bad” move on my part. Just like EntLib, you created a Database class using the DatabaseFactory and your default connection string key. Commands and parameters were created and added with methods on the Database class. Aside from SqlCommand versus DbCommand, everything looked and felt the same, but came in a small file with only 490 lines of code instead of 5 or more projects with 490 files. Using it felt the same, too. Only my object/collection generation extensions looked different from the standard reader, scalar, and dataset routines. Below is the same code from above, using the Database class and related classes to create a Document from a reader.

// Object generation method
private static Document GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             GetIntFromReader(returnData, "DocumentId"),
             GetIntFromReader(returnData, "DocumentTypeId"),
             GetStringFromReader(returnData, "DocumentTypeName"),
             GetByteArrayFromReader(returnData, "DocumentBinary"),
             GetStringFromReader(returnData, "FileName"),
             GetStringFromReader(returnData, "Description"),
             GetStringFromReader(returnData, "ContentType"),
             GetIntFromReader(returnData, "FileSize"),
             GetStringFromReader(returnData, "MD5Sum"),
             GetStringFromReader(returnData, "CreatorEmpID"),
             GetDateTimeFromReader(returnData, "CreateDt"),
             GetStringFromReader(returnData, "LastUpdateEmpID"),
             GetDateTimeFromReader(returnData, "LastUpdateDt"));
     }
     return document;
}
public static Document GetDocumentByDocumentId(int documentId)
{
     Database db = DatabaseFactory.CreateDatabase(AppSettings.ConnectionStringKey);
     SqlCommand sqlCmd = db.GetStoredProcCommand("usp_Document_GetDocumentByDocumentId");
     db.AddInParameter(sqlCmd, "DocumentId", SqlDbType.Int, documentId);
     GenerateObjectFromReader<Document> gofr = new GenerateObjectFromReader<Document>(GenerateDocumentFromReader);
     Document document = CreateObjectFromDatabase<Document>(db, sqlCmd, gofr);
     return document;
}

This, too, worked great for years. Other than a brief period in 2007 when I tried to wrap all of my data access code with WCF services, .NET 3.0 came and went with no changes to my data access methodology. By late 2007, I had lost all love for my SqlHelper and my Database/DataAccessLayer classes. With .NET 3.5 and Enterprise Library 4.0, I no longer felt the need to roll my own. .NET now had extension methods, letting me extend Enterprise Library however I pleased. Enterprise Library supported System.Transactions, making it a dream to use behind a WCF service that allowed transaction flow. With a succinct 190 lines of extension code, I had it made in the shade with Enterprise Library 4.0. In fact, I haven’t used anything since.

The consuming code was almost exactly the same. You’ll notice the SqlCommand has changed to DbCommand, and the SqlDbType has changed to DbType. Other than that, it feels and works the same.

// Object generation method
private static Document GenerateDocumentFromReader(IDataReader returnData)
{
     Document document = new Document();
     if (returnData.Read())
     {
         document = new Document(
             returnData.GetInt32("DocumentId"),
             returnData.GetInt32("DocumentTypeId"),
             returnData.GetString("DocumentTypeName"),
             returnData.GetByteArray("DocumentBinary"),
             returnData.GetString("FileName"),
             returnData.GetString("Description"),
             returnData.GetString("ContentType"),
             returnData.GetInt32("FileSize"),
             returnData.GetString("MD5Sum"),
             returnData.GetString("CreatorEmpID"),
             returnData.GetDateTime("CreateDt"),
             returnData.GetString("LastUpdateEmpID"),
             returnData.GetDateTime("LastUpdateDt"));
     }
     return document;
}
public static Document GetDocumentByDocumentID(int documentId)
{
     Database db = DatabaseFactory.CreateDatabase();
     DbCommand cmd = db.GetStoredProcCommand("usp_Document_GetDocumentByDocumentId");
     db.AddInParameter(cmd, "DocumentID", DbType.Int32, documentId);
     GenerateObjectFromReader<Document> gofr = new GenerateObjectFromReader<Document>(GenerateDocumentFromReader);
     Document document = db.CreateObject<Document>(cmd, gofr);
     return document;
}
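
The CreateObject extension used above comes from my 190 lines of extension code, which isn’t shown in this post. A minimal sketch of what such an extension might look like, assuming the GenerateObjectFromReader<T> delegate signature implied by the mapping method above (a hypothetical reconstruction, not the actual extension):

public delegate T GenerateObjectFromReader<T>(IDataReader returnData);

public static class DatabaseExtensions
{
     // Executes the command, hands the open reader to the mapping delegate,
     // and returns the mapped object. The using block closes the reader.
     public static T CreateObject<T>(this Database db, DbCommand cmd, GenerateObjectFromReader<T> gofr)
     {
          using (IDataReader reader = db.ExecuteReader(cmd))
          {
               return gofr(reader);
          }
     }
}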

With a full suite of unit test projects available for download with the Enterprise Library source files, the fear should be abated for the remaining holdouts. Getting started is as easy as including two DLL references and adding five lines of config. You can’t beat that!

I downloaded Enterprise Library 5 last week. I’ve been making use of new features such as result set mapping (eliminating the need for my object generation extensions), parameter mapping, and accessors that bring them all together. There’s a bunch of inversion of control features in place as well. I think I’ll be quite comfortable in my new EntLib5 home.

May 08, 2010
 

Despite its pitiful adoption in the developer community, I am implementing Transactional NTFS (TxF) transactions using the Microsoft.KtmIntegration.TransactedFile class. This allows me to reap the benefits of TransactionScope and distributed transactions for file operations (e.g. creates, updates, deletes). This is the only missing piece for typical transactional business applications. With the “KTM” and “KtmRm for Distributed Transactions” services, available only on Vista, Windows 7, and Windows Server 2008, file operations will roll back if the TransactionScope is not completed.
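
A minimal sketch of the pattern (the path and file contents are illustrative; I’m assuming the TransactedFile.Open(path, mode, access, share) signature from the sample code linked below, and references to System.Transactions and the KTM sample assembly):

using (TransactionScope scope = new TransactionScope())
{
    // File I/O enlists in the ambient transaction through KTM; if
    // Complete() is never called, the file create rolls back along
    // with any other transactional work.
    using (FileStream stream = Microsoft.KtmIntegration.TransactedFile.Open(
        @"C:\FileStore\invoice.txt", FileMode.Create, FileAccess.Write, FileShare.None))
    using (StreamWriter writer = new StreamWriter(stream))
    {
        writer.WriteLine("file contents");
    }

    // ... database calls that should commit or roll back with the file ...

    scope.Complete();
}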

There’s just one problem: Transactional NTFS does not work with file shares. I can’t remember the last time I put a “C:\FileStore” reference in a config file. A friendly share like \\server\FileStore is always preferred, especially since DFS came about. Attempting to use a share results in the following error message:

The remote server or share does not support transacted file operations

Don’t read this as “your remote server” or “your remote share”, but rather “all remote servers and shares”. As mentioned in this MSDN article, TxF is not supported by the CIFS/SMB protocols. The error was probably written with the expectation that one day some remote servers and shares would support TxF. I emailed Microsoft about it and received a response fairly quickly. The response was simply:

“We understand the need and have plans to eventually support TxF over SMB2, but we’re not there yet and are not ready to announce if or when this will be supported. When it is the documentation will be updated.”

I’m not getting my hopes up, but Windows Server 2011 looks to be our only hope before .NET changes beyond recognition and TxF is a distant memory. Until then, I’ve wrapped up all of my TxF code in a WCF service and installed that service on the server with the FileStore folder.

MSDN article – When to Use Transactional NTFS

      http://msdn.microsoft.com/en-us/library/aa365738(v=VS.85).aspx

TxF Sandbox – Sample Projects (including Microsoft.KtmIntegration.TransactedFile)

      TxFSandbox.zip

Apr 22, 2010
 

The sole 2010 offering in the USA of IDesign‘s Architect’s Master Class, conducted by the man himself, Juval Lowy, is only a few weeks away. I checked in at the IDesign web site and found some updates the world needs to see.

If you want to learn something new every day, start at the top of the IDesign Code Library and step through one example each day. Be careful, you might need to re-write every line of code you’ve ever written.

Feb 01, 2010
 

I’m not sure if what I’m doing is actually the right way to create a “user control” in ASP.NET MVC, but it’s worth sharing this tidbit either way. Instead of using an MVC View User Control to create a hidden field, a text box, two anchors, and three JavaScript functions, I chose to put it all in an HtmlHelper in which I write out the HTML and JavaScript myself. Everything worked fine except the almost magical auto-repopulating of the hidden and text fields after a post, which didn’t work as it would in a typical MVC View Page.

The situation: I have a page that needs to be called as a popup from many pages in my MVC application. The page allows single or multiple selection of “items” driven by an XML file. In the event that one day (almost always immediately) I have two or more of these “controls” on one view page, I need the two fields and the three JavaScript functions to have unique names so they don’t cross paths and cause unexpected behavior. I had an ASP.NET User Control to do this in plain old ASP.NET (POAN) since v1.1, and I can’t live without it.

The confusion: If I were to place the hidden field, textbox, anchors, and JavaScript functions directly in the calling page, something magical happens after a post. If the controls had values before the post, they appear to magically retain their values after the post. It wasn’t until a colleague of mine, Sat, and I dug into Reflector for a while that we realized what was happening. Html.TextBox, Html.Hidden, and others all do something similar to auto-magically re-populate their values after the post. Since I’m writing out my fields as <input type="hidden"/> and <input type="text"/>, the magic doesn’t happen.

      NOTE: The magic will also not happen if you just write <input type="text"/> on the page. It only happens if you use Html.TextBox.

The solution: I am still new to MVC and still trying to wrap my head around the “right way” to do things. Reflector showed that the HtmlHelpers all look at the ModelState in the ViewData before rendering their HTML. They look for their value by key (the key being the control/tag name) and, if present, use that as the control/tag’s value. Bing! Maybe I should do the same thing. So just before I go to town with TagBuilder to assemble my controls/tags, I look in the ViewData’s ModelState for my value. If it is there, it must have been posted there by me (my control).

UrlHelper urlHelper = new UrlHelper(helper.ViewContext.RequestContext);
string textValue = null;
ModelState state;

if (helper.ViewData.ModelState.TryGetValue(textFieldName, out state))
{
    textValue = state.Value.AttemptedValue;
}

Works like a charm! Now my hidden field, textbox, two anchors, and three JavaScript functions are bundled nicely inside an HtmlHelper class that looks and feels like a built-in ASP.NET MVC HtmlHelper. Most importantly, I have the pleasure of typing only this on all of my consuming pages.

<%= Html.MySelector("selectedIDs", "selectedNames", "State") %>

Nov 27, 2009
 

I’ve been talking about Geneva for a long time. I got the basics down earlier in the year. I tried to come up with my own set of sample apps, but failed to get anywhere. With the official release and renaming to Windows Identity Foundation (WIF), I have renewed inspiration.

I read Michele Leroux Bustamante’s MSDN Magazine article, Claims-Based Authorization with WIF, last night. After reading the article, I was confident that I could get a claims-aware WCF service stood up with a custom STS in a matter of hours. Today I downloaded and installed WIF. I also installed the WIF SDK and all of the prerequisite hotfixes. I perused the readme files and looked through some of the sample code. Everything is laid out sensibly, the samples are commented sufficiently, and the samples include setup and cleanup batch scripts when necessary.

The samples include:

Quick Start

  1. Simple Claims Aware Web Application
  2. Simple Claims Aware Web Service
  3. Simple Web Application With Information Card SignIn
  4. Simple Web Application With Managed STS
  5. Claims Aware Web Application in a Web Farm
  6. Using Claims In IsInRole

End-to-end Scenario

  1. Authentication Assurance
  2. Federation For Web Services
  3. Federation For Web Applications
  4. Identity Delegation
  5. Web Application With Multiple SignIn Methods
  6. Federation Metadata

Extensibility

  1. Claims Aware AJAX Application
  2. Convert Claims To NT Token
  3. Customizing Request Security Token
  4. Customizing Token
  5. WSTrustChannel
  6. Claims-based Authorization

All of the samples I’ve run through so far are great. The only thing I’m not in love with is all the XML required to wire this stuff up. Maybe some Juval-style extensions would make it less painful.

One more thing: it looks like all of the XP users will finally have to upgrade. WIF only works with Vista, Win7, and Win2008. I heard that Win2003 compatibility will arrive in December.

Download Windows Identity Foundation

Download Windows Identity Foundation SDK
