Sep 15 2014

I had a great time at Jax Code Impact over the weekend. Many thanks to Bayer and Brandy for putting together a free, enjoyable, and educational Saturday conference. For a first-year conference, it was impressive to see more than 300 people registered for six tracks of Microsoft-focused presentations. Kevin Wolf was a big hit with his quadcopters, 3-D printer, and Oculus Rift. I personally enjoyed two separate talks on Redis by Henry Lee and Steve Danielson.

I talked about the Windows Azure Service Bus offerings, and hit on topics such as Relays, Queues, Topics, WCF bindings, Pub/Sub, AMQP, and of course some Cloud Design Patterns. 

Here are the slides and code demos from my talk:

Code Impact Service Bus Presentation – Slides

Code Impact Service Bus Presentation – Demo Code

Jul 08 2014

Man, this was driving me nuts. Using code identical to my successful queue/topic receive operations against the Azure Service Bus, I was perplexed to see “no valid sources” as if my broker address was wrong. I checked it repeatedly and started doubting my broker settings.

In the end, it was the SAS policy key. Azure generates a nice, long key, and this was the first one I received that had / (slash) and + (plus) characters in it. Because this key is part of the URI Qpid Proton uses to connect to the broker, it needs to be URL-encoded. Simple enough to fix in Python:

import urllib

# The second argument (safe="") forces '/' to be encoded as well;
# by default urllib.quote leaves '/' characters alone.
sasPolicyKey = "<your key from Azure with slashes and/or plus signs>"
safeSasPolicyKey = urllib.quote(sasPolicyKey, "")
Jul 05 2014

I’m really digging IoT these days, and I see AMQP and the Azure Service Bus as the primary enablers. With two Raspberry Pis sitting around, I decided to put a temperature probe on one and send hourly temperature readings to my laptop two feet away via the Azure Service Bus. Not the most efficient thing I could do, but I’m simulating a scenario in which a remote device (the RPi) located on a strawberry farm in California communicates with an Azure-hosted system that processes the data (my laptop).

Getting Started

The Raspberry Pi is the perfect device developer’s starter kit. I think Python is a great starter language too, so I quickly had my heart set on installing the Python AMQP Messenger bindings for Apache Qpid Proton and using the Python examples to help me write code to send some test messages through the cloud. There are really only two implementations of Qpid Proton: C and Java. The C implementation also serves as the foundation for the Perl, PHP, Python, and Ruby bindings. Java is selfish (as expected) and only takes care of itself. Since the Raspberry Pi runs Linux, it can support all of these languages, but I wanted to start slow with Python.

I really hoped that the Proton install was a 10-30 minute thing, but it took me quite a while to piece together everything I needed. Since I took the time to document the procedure that ended up working, I figure I should post it here for the next person so they don’t struggle like I did. The documentation included in the readme file is OK, I guess. The documentation on the project site could really benefit from some scenario-based examples or quick-start tutorials if they’re looking for adoption. Maybe I should do that?

Setup Procedure

I started with a one-year-old Raspbian Wheezy hard-float image. It already had PHP, Python, and Java, and I installed Node.js a few months ago. To install the following packages and to make calls to Azure, you’ll also need to be connected to the Internet. The following procedure is very similar to the readme file, but fills in the gaps for a stock Raspbian image. Log into your Pi and follow along:

  1. $ sudo apt-get install cmake uuid-dev
    This installs CMake so you can build the Proton libraries. It also gets the UUID development libraries so you can generate and communicate your device ID (the IoT world requires some kind of device ID).
  2. $ sudo apt-get install openssl
    All calls to Azure Service Bus require a secure connection. This will install the latest, Heartbleed-free OpenSSL binaries and tools. If this was not already installed on your RPi, you may need to configure it.
  3. $ sudo apt-get install libssl-dev
    This installs the development files and headers for OpenSSL.
  4. $ sudo apt-get install swig python-dev
    This installs SWIG, a tool that generates the scripting-language “bindings” that connect to the C implementation of Proton (i.e., Proton-C).
  5. Make a new directory to hold the Qpid download
    $ mkdir /home/pi/qpid
    $ cd /home/pi/qpid
  6. Navigate to the Proton project page, and click on the link for the latest version of Proton. At the time of writing, this is version 0.7 with a package named qpid-proton-0.7.tar.gz. When you click the link, you’ll be taken to a page to select the closest mirror. Copy the link address for the mirror of your choosing, and then wget with that address:
    $ wget http://apache.petsads.us/qpid/proton/0.7/qpid-proton-0.7.tar.gz
  7. $ tar xvfz qpid-proton-0.7.tar.gz
    This will uncompress the package and place the downloaded files into a qpid-proton-0.7 directory
  8. $ cd qpid-proton-0.7
    $ mkdir build
    $ cd build

    This will create a build directory for cmake to stage its files to build Proton
  9. $ cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DSYSINSTALL_BINDINGS=ON
    This will run cmake on the download directory (..) and prepare to install the binaries into /usr. It will also prepare bindings for all installed languages. If you are using Python, make sure you don’t see error messages about SWIG missing. Also make sure that you see the language of your choice being prepared for bindings in the standard output.
  10. $ make all docs
    (Optional) This builds the project and generates a copy of the documentation.
  11. $ sudo make install
    This builds Proton-C and Proton-Java, and copies the bindings to the system-default directories (e.g., the Python bindings appear on my RPi in /usr/lib/python2.7/dist-packages; look for cproton.py).

That’s it. To test that it all worked, you can do something simple like:

$ python
>>> from proton import *

If you don’t see an error message, then the bindings are in place and you are ready to write some code.

Jul 03 2014

It feels wrong for a client or server to use the “owner” shared secret credentials in an Azure Service Bus connection string, and it’s pure evil when hundreds or thousands of Azure Service Bus queue and topic clients are sending messages with them. So how about I supplement the documentation and show how easy it is to change from <sharedSecret> to <sharedAccessSignature>?

Step 1: Create some SAS policies

Log into the Azure management portal, click on your Service Bus queue (or topic), and then click Configure. Add one or more policies, choose their respective permissions, and click Save. After saving, the policy name and keys appear under the “shared access key generator” section below. Copy the primary key and move on to step 2.


Step 2: Modify your config file

If you’re like me, you like to keep your WCF hosting code free of configuration. If hosted in a console app, the following code is all I use to start a service.

ServiceHost testHost = new ServiceHost(typeof(TestManager));
testHost.Open();

When adding an additional endpoint for NetMessagingBinding, it’s really simple to just add a new endpoint and behavior configuration. The documentation in place today always shows <sharedSecret> being used, which is not a real-world scenario since every client and service should have its own credentials.
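For context, the NetMessagingBinding endpoint that references the behavior looks roughly like this (the service name, address, and contract below are placeholders; behaviorConfiguration must match the behavior name, so if you rename the behavior as shown further down, update it to match):

    <services>
      <service name="TestManager">
        <endpoint address="sb://your-namespace.servicebus.windows.net/testqueue"
                  binding="netMessagingBinding"
                  contract="ITestManager"
                  behaviorConfiguration="ServiceBusTokenProvider" />
      </service>
    </services>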

To use your new shared access keys, change this:

    <behaviors>
      <endpointBehaviors>
        <behavior name="ServiceBusTokenProvider">
          <transportClientEndpointBehavior>
            <tokenProvider>
              <sharedSecret issuerName="owner" issuerSecret="blAhblAh+Blah/blaH+BLAhblAhBLaHblAHBlahblaH=" />
            </tokenProvider>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>

to something like this, using your newly-generated keys:

    <behaviors>
      <endpointBehaviors>
        <behavior name="SASPolicyTokenProvider">
          <transportClientEndpointBehavior>
            <tokenProvider>
              <sharedAccessSignature keyName="ingest-manager" key="SASkey+sAskEY/SASKeysaSKey+SaskeYsaSKEy/SaS=" />
            </tokenProvider>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>
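If you prefer to wire up the token provider in code rather than configuration, a rough equivalent looks like this (a sketch only; it assumes the Microsoft.ServiceBus SDK and the testHost from the hosting snippet above):

// Requires references to Microsoft.ServiceBus and Microsoft.ServiceBus.Messaging.
TokenProvider sasTokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
    "ingest-manager",                                  // SAS policy name created in step 1
    "SASkey+sAskEY/SASKeysaSKey+SaskeYsaSKEy/SaS=");   // primary key copied in step 1

// Attach the behavior to the Service Bus endpoint(s) before calling Open() on the host.
var sasBehavior = new TransportClientEndpointBehavior { TokenProvider = sasTokenProvider };
foreach (var endpoint in testHost.Description.Endpoints)
{
    if (endpoint.Binding is NetMessagingBinding)
    {
        endpoint.Behaviors.Add(sasBehavior);
    }
}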

And that’s it.

Mar 31 2014

I had a great time at Global Windows Azure Bootcamp (GWAB) in Jacksonville, FL. I got to meet a bunch of cool people and discuss Azure topics all day. Free food, Bold Bean coffee, and beer helped to create the perfect geekfest atmosphere. I can’t wait for the next Azure event!

I talked about Windows Azure Data Services, and hit on topics such as Tables, Blobs, Queues, Windows Azure SQL Database, and some Cloud Design Patterns. The links below are the slides and code demos from my talk.

GWAB Azure Storage Presentation – Slides

GWAB Azure Storage Presentation – Demo Code

Feb 16 2014

Before Windows Azure Storage Client Library (SCL) 2.1, any entity that we wanted to put in Azure Table Storage (ATS) had to derive from the TableServiceEntity class. For me that meant maintaining an ATS-specific entity just to get the PartitionKey (PK), RowKey (RK), Timestamp, and ETag. I also had to maintain a DTO or POCO to be used by the rest of the application, plus logic to marshal values between the two to facilitate the common CRUD work.

In the RTM announcement for Windows Azure Storage Client Library 2.1, Microsoft announced that they are now exposing the serialization/deserialization logic for any CLR type. This makes it possible for us to store and retrieve entities without needing to maintain two entity types: the DTO and another class that derives from TableEntity. It also makes it possible to store entities in ATS for which you do not own/maintain the code. We still have the same data type restrictions (e.g. a subset of the OData protocol specification), so that limits which of those “not owned/maintained” classes can be stored in ATS.

In the old days of 2013…

Back in my day, we had to use TableServiceEntity. We’d create generic TableServiceDataModel, TableServiceContext, and TableServiceDataSource classes that would get the connection established and serve up table entities as IQueryables. Inserts, updates, and deletes were called, followed by a call to .SaveChanges(). It had an Entity Framework feel to it, which gave a warm fuzzy feeling that we weren’t clueless.

An Azure adapter layer was full of TableServiceDataModel classes and the necessary infrastructure to interact with ATS:

public class ProductCommentModel : TableServiceDataModel
{
	public const string PartitionKeyName = "ProductComment";

	public ProductCommentModel()
		: base(PartitionKeyName, Guid.NewGuid().ToString())
	{ }

	public string ProductId { get; set; }
	public string Commenter { get; set; }
	public string Comment { get; set; }
}

public class TableServiceDataModel : TableServiceEntity
{
	public TableServiceDataModel(string partitionKey, string rowKey)
		: base(partitionKey, rowKey)
	{ }
}

public class TableServiceContext<TModel> : TableServiceContext where TModel : TableServiceEntity
{
	public TableServiceContext(string tableName, string baseAddress, StorageCredentials credentials)
		: base(baseAddress, credentials)
	{
		TableName = tableName;
	}

	public string TableName { get; set; }

	public IQueryable<TModel> Table
	{
		get
		{
			return this.CreateQuery<TModel>(TableName);
		}
	}
}

public class TableServiceDataSource<TModel> where TModel : TableServiceEntity
{
	private string m_TableName;
	private TableServiceContext<TModel> m_ServiceContext;
	private CloudStorageAccount m_StorageAccount;

	protected CloudStorageAccount StorageAccount
	{
		get
		{
			if (m_StorageAccount == null)
			{
				m_StorageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
			}
			return m_StorageAccount;
		}
	}

	protected TableServiceContext<TModel> ServiceContext
	{
		get
		{
			if (m_ServiceContext == null)
			{
				m_ServiceContext = new TableServiceContext<TModel>(m_TableName, StorageAccount.TableEndpoint.ToString(), StorageAccount.Credentials);
			}
			return m_ServiceContext;
		}
	}

	public TableServiceDataSource(string tableName)
	{
		m_TableName = tableName;
		StorageAccount.CreateCloudTableClient().CreateTableIfNotExist(m_TableName);
	}

	public IEnumerable<TModel> Select()
	{
		var results = from c in ServiceContext.Table
						select c;

		var query = results.AsTableServiceQuery<TModel>();
		var queryResults = query.Execute();

		return queryResults;
	}

	public IEnumerable<TModel> Select(Expression<Func<TModel, bool>> predicate)
	{
		CloudTableQuery<TModel> query = ServiceContext
			.CreateQuery<TModel>(ServiceContext.TableName)
			.Where(predicate)
			.AsTableServiceQuery<TModel>();

		var queryResults = query.Execute();
		return queryResults;
	}

	public void Delete(TModel itemToDelete)
	{
		ServiceContext.DeleteObject(itemToDelete);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate, SaveChangesOptions saveOptions)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges(saveOptions);
	}

	public void Insert(TModel newItem)
	{
		ServiceContext.AddObject(m_TableName, newItem);
		ServiceContext.SaveChanges();
	}

	public void InsertToBatch(TModel newitem)
	{
		ServiceContext.AddObject(m_TableName, newitem);
	}

	public void SaveBatch()
	{
		ServiceContext.SaveChangesWithRetries(SaveChangesOptions.Batch);
	}
}

The data access layer ended up looking much cleaner than the Azure documentation samples… something like this:

public void AddComment(ProductCommentModel model)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	dataSource.Insert(model);
}

public IEnumerable<ProductCommentModel> GetComments(string productId)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comments = dataSource.Select().Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.ProductId == productId).OrderByDescending(comment => comment.Timestamp);
	return comments;
}

public void DeleteComment(string commentid)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comment = dataSource.Select().Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.RowKey == commentid);
	if (comment.Count() > 0)
	{
		dataSource.Delete(comment.First());
	}
}

With that adapter layer we thought we had it made. The data access layer looks cleaner than most SQL implementations. Still, we had too much Azure code and terminology too far away from the Azure calls. It was a small price to pay I suppose.

Enter the EntityAdapter

The RTM announcement showed an example of what is possible with access to the serialization/deserialization logic. Their sample showed a class named EntityAdapter, and Rory Primrose has made some great improvements on it. I took this same class and made just a few modifications to support my use cases. Primarily, the examples had no support for ETags, which are critically important in some scenarios. Here is my current version of EntityAdapter:

internal abstract class EntityAdapter<T> : ITableEntity where T : class, new()
{
    private string m_PartitionKey;

    private string m_RowKey;

    private string m_ETag;

    private T m_Value;

    protected EntityAdapter()
        : this(new T())
    { }

    protected EntityAdapter(T value)
    {
        if (value == null)
        {
            throw new ArgumentNullException("value", "EntityAdapter cannot be constructed from a null value");
        }

        m_Value = value;
    }

    public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        m_Value = new T();

        TableEntity.ReadUserObject(m_Value, properties, operationContext);

        ReadValues(properties, operationContext);
    }

    public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        var properties = TableEntity.WriteUserObject(Value, operationContext);

        WriteValues(properties, operationContext);

        return properties;
    }

    protected abstract string BuildPartitionKey();

    protected abstract string BuildRowKey();

    protected virtual void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void SetETagValue(string eTag)
    { }

    public string ETag
    {
        get
        {
            return this.m_ETag;
        }
        set
        {
            this.m_ETag = value;
            SetETagValue(value);
        }
    }

    public string PartitionKey
    {
        get
        {
            if (m_PartitionKey == null)
            {
                m_PartitionKey = BuildPartitionKey();
            }

            return m_PartitionKey;
        }
        set
        {
            m_PartitionKey = value;
        }
    }

    public string RowKey
    {
        get
        {
            if (m_RowKey == null)
            {
                m_RowKey = BuildRowKey();
            }
            return m_RowKey;
        }
        set
        {
            m_RowKey = value;
        }
    }

    public DateTimeOffset Timestamp { get; set; }

    public T Value
    {
        get
        {
            return m_Value;
        }
    }
}

To use EntityAdapter with a DTO/POCO (e.g. Racer), you write an adapter (e.g. RacerAdapter):

public class Racer
{
    [Display(Name = "Driver")]
    public string Name { get; set; }

    [Display(Name = "Car Number")]
    public string CarNumber { get; set; }

    [Display(Name = "Race")]
    public string RaceName { get; set; }

    public DateTime? DateOfBirth { get; set; }

    [Display(Name = "Last Win")]
    public string LastWin { get; set; }

    public string ETag { get; set; }

    public bool HasWon
    {
        get
        {
            return !String.IsNullOrEmpty(this.LastWin);
        }
    }

    public List<string> Validate()
    {
        List<string> validationErrors = new List<string>();

        //TODO: Write validation logic

        return validationErrors;
    }
}

internal class RacerAdapter : EntityAdapter<Racer>
{
    public RacerAdapter()
    { }

    public RacerAdapter(Racer racer)
        : base(racer)
    {
        this.ETag = racer.ETag;
    }

    protected override string BuildPartitionKey()
    {
        return Value.RaceName;
    }

    protected override string BuildRowKey()
    {
        return Value.CarNumber;
    }

    protected override void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {

        this.Value.RaceName = this.PartitionKey;
        this.Value.CarNumber = this.RowKey;
    }

    protected override void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        properties.Remove("CarNumber");
        properties.Remove("RaceName");
    }

    protected override void SetETagValue(string eTag)
    {
        this.Value.ETag = eTag;
    }
}

Now we have everything we need to make our data access layer simpler and domain-focused instead of table-entity-focused.

// Using TableEntity-derived class requires front-facing layers to deal with partition/row keys instead of domain-specific identifiers
public void AddRacer(RacerEntity racer)
{
    CloudTable table = GetRacerTable();

    TableOperation upsertOperation = TableOperation.InsertOrReplace(racer);
    table.Execute(upsertOperation);
}

// Using a DTO with the EntityAdapter
public void AddRacer(Racer racer)
{
    CloudTable table = GetRacerTable();

    var adapter = new RacerAdapter(racer);
    var upsertOperation = TableOperation.InsertOrReplace(adapter);

    table.Execute(upsertOperation);
}
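For completeness, the read path works the same way. Here is a minimal sketch (reusing the hypothetical GetRacerTable() helper from the samples above) that retrieves a single entity through the adapter and hands the plain DTO back to the caller:

// A read-path sketch: RacerAdapter implements ITableEntity, so it works for retrieves too.
public Racer GetRacer(string raceName, string carNumber)
{
    CloudTable table = GetRacerTable();

    TableOperation retrieveOperation = TableOperation.Retrieve<RacerAdapter>(raceName, carNumber);
    TableResult result = table.Execute(retrieveOperation);

    var adapter = result.Result as RacerAdapter;
    return adapter == null ? null : adapter.Value;   // unwrap the DTO
}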

With or without EntityAdapter, SCL 2.1 gave us TableEntity, TableOperation, etc. that really simplify our code. EntityAdapter is icing on the cake, and really helps to simplify Azure-hosted web APIs.

Feb 06 2014

If you’ve been operating an application as an Azure Cloud Service for a year or two, then you are probably due to renew or upgrade your SSL certificate. People move on, and contractor rates go up. You may not be the person that installed the original SSL cert or may not have documentation on how to install a new cert. The process is simple and takes only an hour once you’ve acquired your new certificate.

  1. Download the new certificate from your certificate provider
    Each provider is different in how they deliver the certificates. GoDaddy will have you select your server type (e.g. IIS6, IIS7, Tomcat, Apache) before downloading the certificate. When you requested the certificate from your provider, you had to use one of these servers to generate the CSR (certificate signing request). You will be receiving a CRT (web server certificate) from your provider. It’s important to choose the right server type so the CRT can be imported. If you’re deploying to Azure, then you’ll probably choose IIS7 like I did. Download the cert files (or zip file) and save them somewhere safe from prying eyes.

    NOTE: You will likely also receive some intermediate certificates. These have much longer lifespans than a 1-2 year SSL certificate. You’ll follow your provider’s instructions to install these later, if necessary.

  2. Complete the certificate request on IIS
    If you received intermediate certificates from your provider, now is the time to install them. This will ensure that you have a full certification path. Follow your provider’s instructions for this. These intermediate certificates have lifespans of up to 10-20 years, so if the thumbprint is the same, no action will be necessary. You can check this by double-clicking the certificate and checking the thumbprint on the Details tab. Compare that value to what was previously uploaded to Azure under Cloud Services – <Your Cloud Service> – Certificates tab. Any previously uploaded intermediate certificates will appear here, as well as your existing SSL certificate.

    In IIS7 on your server, VM, or developer workstation, click on Server Certificates. In the Actions pane on the right, click Complete Certificate Request. Browse to find the CRT file you downloaded in the previous step. Type a friendly name like “*.my-domain.com” and click OK. If the import is successful, you’ll see your certificate appear in the list on the Server Certificates screen in IIS.

  3. Export the certificate as a PFX file
    Open MMC and add the snap-in to work with the Local Machine certificate stores appearing as Certificates (Local Computer). Find your certificate in the Personal \ Certificates store and look for the friendly name you entered in the previous step. Right-click on the certificate, choose All Tasks and then Export to open the Certificate Export Wizard. Follow the wizard, choosing Yes, export the private key and Include all certificates in the certification path if possible options. Type a good password and choose a file path to export the PFX file. For future-you’s sake, name the file with a .pfx extension. When the wizard completes, your PFX file will be ready for use.

    Before you leave the certificate manager (MMC), double-click the certificate to open it and copy the thumbprint from the Details tab. You’ll need the thumbprint in later steps.

  4. Upload the PFX and intermediate certificates to Azure
    Now that you have both the PFX and any intermediate certificates, navigate to the Azure Management Portal and click on Cloud Services. Find the desired cloud service in the list, and click on it to select it. Click on the Certificates tab, and then click the Upload button in the bottom menu. Browse to find your PFX file, type the password, and click OK.
  5. Change the service configuration to use the new thumbprint
    Open your application’s solution, and open the ServiceConfiguration.Cloud.cscfg file in the Azure hosting project. Find the existing SSL certificate under <Certificates> (see the snippet just after this list for what that element looks like). Paste in your new thumbprint, making sure it’s all uppercase with no spaces. If your thumbprintAlgorithm has changed, change that value in the config file as well.
  6. Deploy your app to Staging
    Now that your certificate is on Azure and your application has been updated, it’s time to deploy to Staging. Once your deployment is complete and your staging environment status returns to “Running”, try out the Staging environment using the HTTPS version of the Site URL seen in the Azure Management Portal. Using Chrome, find the certificate information by clicking on the lock symbol in the address bar, then click the Connection tab and then Certificate Information.
    It’s expected that the browser complains at this point because we aren’t using the intended domain name; Staging uses <random>.cloudapp.net. Check that the end date, thumbprint, name, and other properties are what you expect to see.
  7. Swap VIPs
    Once you’re satisfied that the Staging environment is a good build and that the certificate is correctly assigned, swap the staging and production environments. When completed (< 30 seconds), check out your application using the HTTPS endpoint and domain name. You should see the lock in the address bar, and make sure to check the properties (e.g. expiration date) again.
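For reference, the certificate element in ServiceConfiguration.Cloud.cscfg mentioned in step 5 looks something like this (the role and certificate names here are placeholders; only the thumbprint, and possibly the algorithm, should need to change):

    <Role name="WebRole1">
      <Certificates>
        <Certificate name="my-ssl-certificate"
                     thumbprint="PASTE-NEW-THUMBPRINT-UPPERCASE-NO-SPACES"
                     thumbprintAlgorithm="sha1" />
      </Certificates>
    </Role>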

That’s it. Party on!

Dec 27 2013

I learn something every day whether I like it or not. Today’s lesson:

SelectList thinks she’s smarter than you.

Observations

I was working in an MVC4 app, making some forms light up with some custom HtmlHelpers. Everything is dandy until a drop-down doesn’t re-populate with the previously selected value after a POST or a fresh GET. That’s funky. The right value is in the database. So I looked at the .cshtml. I had two drop-downs next to each other. I changed the custom HtmlHelpers to PODDLFs (plain old DropDownListFor) and it does the same thing. The one for Suffix “binds” the previously selected value as I’d expect, but the one for Title appears to do nothing.

@Html.DropDownListFor(model => model.Title, Model.SelectLists.PreferredTitles)
@Html.DropDownListFor(model => model.Suffix, Model.SelectLists.Suffixes)

So to be safe, let’s print out the value of Title as a string literal.

Testing: @Model.Title

Yep, works fine. I see “Mr.” just as I’d expect. So I searched for every instance of “.Title” to see if this is happening somewhere else in the app, but there are no other uses in a .cshtml file. What I did find was many instances of @ViewBag.Title being used to set the window and page titles throughout the app. I renamed “Title” to “Prefix” on the model and the fog clears a little. There’s something going on with ViewBag’s Title taking precedence over my model’s Title. To be sure, I undid the renaming operation and changed the impacted view’s ViewBag.Title to be “Mr.”, and then “Dr.”. Regardless of the current value of Model.Title, the value of ViewBag.Title is always used to set the selected value.

Analysis

You can build your SelectList and set whatever “selectedValue” you want. DropDownListFor calls SelectInternal (excerpt below) to build the MvcHtmlString. SelectInternal is responsible for binding the appropriate value for the model/property used in the expression of DropDownListFor.  When the value is not found with GetModelStateValue, ViewData.Eval is used to get the “selected value”. Deep in the internals of ViewData.Eval, ViewBag takes precedence over your model.

object defaultValue = allowMultiple ? htmlHelper.GetModelStateValue(fullHtmlFieldName, typeof(string[])) : htmlHelper.GetModelStateValue(fullHtmlFieldName, typeof(string));
if ((!flag && (defaultValue == null)) && !string.IsNullOrEmpty(name))
{
    defaultValue = htmlHelper.ViewData.Eval(name);
}
if (defaultValue != null)
{
    selectList = GetSelectListWithDefaultValue(selectList, defaultValue, allowMultiple);
}

So what actually happened was SelectInternal took my page title and tried to make it the selected value in the drop-down list. Knowing why it does this doesn’t make me any happier. I’d really prefer that DropDownListFor use my model’s value like I told it to. Alas, I didn’t write this code and it’s pretty dumb of me to not recognize the clear naming conflict. So I’ll accept this and move on.

Corrective Action

Clearly the best solution is to use much more descriptive names that don’t clobber each other. Changing ViewBag.Title to ViewBag.PageTitle is the path of least resistance. Simply using “Title” on the model wasn’t very good either; it would be better as “Salutation”, “NamePrefix”, or “PreferredTitle” anyway. These types of hidden naming conflicts are sure to stump some people. Remembering this little nugget of the SelectList internals will keep naming conflicts on my mind for some time.

Feb 03 2012

In Part 1: Out-of-the-box Features, I went through some of the great new features in Enterprise Library 5 Data Access, including accessors and mappers. Before version 5, most of my EntLib extensions code existed to provide these now built-in features (not as eloquently, of course). I have become attached to a few of the extensions I put in place over the years, and I will keep them around now for only a few reasons: 1) customized database exceptions, 2) IDataReader usability enhancements, and 3) a reduced mapping footprint.

Extensions

I typically have a Utils.dll that I import in every project. For data/resource access projects, I also include my Utils.Data.dll. Utils.Data started its career as a data access application block similar to SqlHelper from the pre-EntLib days. Today, Utils.Data is a set of extensions that merely makes Enterprise Library more fun to be with.

IDataReaderExtensions

Out of the box, System.Data.IDataRecord only gives you the ability to access fields by their integer index value. As an architect who does not have supervisory control over the database or the objects within, this scares me. Any additions or re-ordering of the output fields will surely cause your index-based mapping to blow up. You could solve this with a call to .GetOrdinal(fieldName) first to get the index, but that is twice the code (not to mention boring plumbing code). My extensions do nothing novel. They simply provide string-based extensions like .GetInt32(string name) that do the retrieval and casting for you. I also added a few frequently-used extensions like .GetNullableInt(string name) to keep my result mapping as clean and concise as possible.

Reader use with built-in features:

jeep = new Jeep()
{
	ID = row.GetInt32(0),
	Name = row.GetString(1),
	Description = row.GetString(2),
	Status = row.GetBoolean(3)
};

Reader use with extensions:

jeep = new Jeep()
{
	ID = reader.GetInt32("JeepID"),
	Name = reader.GetString("Name"),
	Description = reader.GetString("Description"),
	Status = reader.GetBoolean("Status"),
};
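To make the comparison concrete, here is a minimal sketch of what string-based extensions like these might look like (the real Utils.Data implementation is in the attached solution; these signatures simply mirror the calls shown above):

// Assumed sketch of string-based IDataRecord extensions; requires System.Data.
public static class DataRecordExtensions
{
	public static int GetInt32(this IDataRecord record, string name)
	{
		return record.GetInt32(record.GetOrdinal(name));
	}

	public static string GetString(this IDataRecord record, string name)
	{
		return record.GetString(record.GetOrdinal(name));
	}

	public static bool GetBoolean(this IDataRecord record, string name)
	{
		return record.GetBoolean(record.GetOrdinal(name));
	}

	// Returns null when the column is DBNull instead of throwing.
	public static int? GetNullableInt(this IDataRecord record, string name)
	{
		int ordinal = record.GetOrdinal(name);
		return record.IsDBNull(ordinal) ? (int?)null : record.GetInt32(ordinal);
	}
}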

I advise that you never use string literals in data access code. Data access code is hit hard, so take your performance improvements when you can. I prefer having const strings locally in my data access class or having an internal static class with const strings to share with all classes in my data access project. The attached solution has examples.

Parameter and Result/Row Mapping

The now built-in ParameterMapper, RowMapper, and ResultSetMapper are beautiful. Sometimes you need a little sumpin’ special to make your code easier to read and work consistently when getting one or ten entities in a database call. Similar to how ExecuteSprocAccessor works with row and result set mappers, CreateObject and CreateCollection support generics and build an object or collection of the specified type. Instead of deriving a new class from a base mapper class, I chose to have one delegate method that generates a single object from a reader. This delegate is used by both CreateObject and CreateCollection. Let’s look at the differences with code.

Creating an object with EntLib5 features:

public Jeep GetJeepByID(int id)
{
	Database db = DatabaseFactory.CreateDatabase();
	IParameterMapper jeepParameterMapper = new JeepParameterMapper();
	IRowMapper<Jeep> jeepRowMapper = new JeepRowMapper();
	IEnumerable<Jeep> jeeps = db.ExecuteSprocAccessor<Jeep>(StoredProcedures.GetJeepByID, jeepParameterMapper, jeepRowMapper, id);
	return jeeps.First();
}

internal class JeepRowMapper : IRowMapper<Jeep>
{
	public Jeep MapRow(System.Data.IDataRecord row)
	{
		return new Jeep()
		{
			ID = row.GetInt32(0),
			Name = row.GetString(1),
			Description = row.GetString(2),
			Status = row.GetBoolean(3)
		};
	}
}

Creating an object with extensions:

public Jeep GetJeepByID(int id)
{
	Database db = DatabaseFactory.CreateDatabase();
	DbCommand cmd = db.GetStoredProcCommand(StoredProcedures.GetJeepByID, id);
	Jeep jeep = db.CreateObject(cmd, GenerateJeepFromReader);
	return jeep;
}

private Jeep GenerateJeepFromReader(IDataReader reader)
{
	Jeep jeep = null;
	if (reader.Read())
	{
		jeep = new Jeep()
		{
			ID = reader.GetInt32(Fields.JeepID),
			Name = reader.GetString(Fields.JeepName),
			Description = reader.GetString(Fields.JeepDescription),
			Status = reader.GetBoolean(Fields.JeepStatus),
		};
	}
	return jeep;
}
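For the curious, here is a stripped-down sketch of what a CreateObject extension like this might look like (the actual implementation ships in the attached solution; the delegate name matches the one that appears in the stack trace further down):

// Assumed sketch; requires System.Data, System.Data.Common, System.Data.SqlClient,
// System.Linq, and Microsoft.Practices.EnterpriseLibrary.Data.
public delegate T GenerateObjectFromReader<T>(IDataReader reader);

public static class DatabaseExtensions
{
	public static T CreateObject<T>(this Database db, DbCommand cmd, GenerateObjectFromReader<T> gofr)
	{
		try
		{
			using (IDataReader reader = db.ExecuteReader(cmd))
			{
				return gofr(reader);
			}
		}
		catch (SqlException ex)
		{
			// Wrap the provider exception so the executed command travels with the stack trace.
			throw new StoredProcedureException(BuildCommandDescription(cmd), ex);
		}
	}

	private static string BuildCommandDescription(DbCommand cmd)
	{
		var parameters = cmd.Parameters
			.Cast<DbParameter>()
			.Select(p => p.ParameterName + "=" + p.Value)
			.ToArray();
		return cmd.CommandText + " " + string.Join(", ", parameters);
	}
}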

One more thing to note is that my CreateObject, CreateCollection, and their GetAccessor equivalents have my customized exception handling logic that makes use of the StoredProcedureException. We’ll go through that now.

Customized and Standardized Exceptions

The only value in logging exceptions is if your entire system logs exceptions and other messages in a consistent and meaningful manner. If error messages are logged as “ERROR!” or “All bets are off!!!” then you shouldn’t bother logging. In the real world, few developers, architects, or support staff have access to production databases. Having meaningful and detailed error messages is key to troubleshooting an issue and meeting your SLAs. I created a simple StoredProcedureException that provides the executed (or attempted) command as part of the stack trace.
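A rough sketch of such an exception type might look like this (the real class is in the attached solution; this version only shows the idea of carrying the command text into the stack trace):

// Assumed sketch of a custom exception that prefixes the stack trace with the executed command.
public class StoredProcedureException : Exception
{
	public StoredProcedureException(string executedCommand, Exception innerException)
		: base("A stored procedure raised an error.", innerException)
	{
		ExecutedCommand = executedCommand;
	}

	// The procedure name and parameter values at the time of the failure.
	public string ExecutedCommand { get; private set; }

	public override string StackTrace
	{
		get
		{
			// Prepend the executed command so it is captured wherever the stack trace is logged.
			return "[Stored procedure executed: " + ExecutedCommand + "]"
				+ Environment.NewLine + base.StackTrace;
		}
	}
}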

WARNING: You should never, ever, ever show the stack trace in your application or let your users see the real error messages.
Log the real message and stack trace, then show “Data access exception” to your users. Please!

 

In the attached code samples, you’ll see two data access methods that call “ExceptionStoredProcedure”, which does nothing other than RAISERROR('This is an exception', 16, 1). With the built-in features, you can expect a SqlException and a stack trace that looks like this:

at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) 
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) 
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() 
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) 
   at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() 
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) 
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) 
   at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) 
   at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) 
   at System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) 
   at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteReader(DbCommand command, CommandBehavior cmdBehavior) 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataDatabase.cs:line 460 
   at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteReader(DbCommand command) 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataDatabase.cs:line 846 
   at Microsoft.Practices.EnterpriseLibrary.Data.CommandAccessor`1.d__0.MoveNext() 
      in e:BuildsEntLibLatestSourceBlocksDataSrcDataCommandAccessor.cs:line 68 at System.Linq.Enumerable.First[TSource](IEnumerable`1 source) 
   at DataAccess.JeepDataAccess.GetJeepByIDShowingException(Int32 id) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsDataAccessJeepDataAccess.cs:line 58 
   at Client.Program.TestExceptionGetWithEntLib5Only() 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 58 
   at Client.Program.Main(String[] args) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 22

With my extensions, you can expect a StoredProcedureException that includes the text of the full stored procedure being executed at the time. This has saved me countless times as my log table stores the full stack trace and I can reproduce exactly what happened without guessing. The InnerException of the StoredProcedureException will be the same SqlException seen above. The customized stack trace will look like this:

[Stored procedure executed: ExceptionStoredProcedure @RETURN_VALUE=-6, @JeepID=1]
   at Soalutions.Utilities.Data.DatabaseExtensions.CreateObject[T](Database db, DbCommand cmd, GenerateObjectFromReader`1 gofr) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsEntLibExtensionsDatabaseExtensions.cs:line 49
   at DataAccess.JeepDataAccess.GetJeepByIDShowingExceptionWithExtensions(Int32 id) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsDataAccessJeepDataAccess.cs:line 65
   at Client.Program.TestExceptionGetWithExtensions() 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 66
   at Client.Program.Main(String[] args) 
      in C:DevCookbookUtilitiesEntLibExtensions5.0EntLibExtensionsClientProgram.cs:line 23

So that’s really it. There is some other hidden goodness in there, but it’s not really worth talking about in this post.

Download sample solution: EntLibExtensions.zip – 140 KB (143,360 bytes)