Feb 16 2014

Before Windows Azure Storage Client Library (SCL) 2.1, any entity we wanted to put in Azure Table Storage (ATS) had to derive from the TableServiceEntity class. For me that meant maintaining an ATS-specific entity just to get the PartitionKey (PK), RowKey (RK), Timestamp, and ETag properties. I also had to maintain a DTO or POCO for the rest of the application, plus the logic to marshal values between the two to facilitate the common CRUD work.

In the RTM announcement for Windows Azure Storage Client Library 2.1, Microsoft announced that the serialization/deserialization logic is now exposed for any CLR type. This makes it possible to store and retrieve entities without maintaining two entity types: the DTO and a separate class that derives from TableEntity. It also makes it possible to store entities in ATS for which you do not own or maintain the code. The same data type restrictions still apply (i.e. the subset of types supported by the OData protocol specification), which limits how many of those “not owned/maintained” classes can live in ATS.

In the old days of 2013…

Back in my day, we had to use TableServiceEntity. We’d create generic TableServiceDataModel, TableServiceContext, and TableServiceDataSource classes that would establish the connection and serve up table entities as IQueryables. We’d call Insert, Update, or Delete methods, followed by a call to .SaveChanges(). It had an Entity Framework feel to it, which gave us the warm fuzzy feeling that we weren’t clueless.

An Azure adapter layer was full of TableServiceDataModel classes and the necessary infrastructure to interact with ATS:

public class ProductCommentModel : TableServiceDataModel
{
	public const string PartitionKeyName = "ProductComment";

	public ProductCommentModel()
		: base(PartitionKeyName, Guid.NewGuid().ToString())
	{ }

	public string ProductId { get; set; }
	public string Commenter { get; set; }
	public string Comment { get; set; }
}

public class TableServiceDataModel : TableServiceEntity
{
	public TableServiceDataModel(string partitionKey, string rowKey)
		: base(partitionKey, rowKey)
	{ }
}

public class TableServiceContext<TModel> : TableServiceContext where TModel : TableServiceEntity
{
	public TableServiceContext(string tableName, string baseAddress, StorageCredentials credentials)
		: base(baseAddress, credentials)
	{
		TableName = tableName;
	}

	public string TableName { get; set; }

	public IQueryable<TModel> Table
	{
		get
		{
			return this.CreateQuery<TModel>(TableName);
		}
	}
}

public class TableServiceDataSource<TModel> where TModel : TableServiceEntity
{
	private string m_TableName;
	private TableServiceContext<TModel> m_ServiceContext;
	private CloudStorageAccount m_StorageAccount;

	protected CloudStorageAccount StorageAccount
	{
		get
		{
			if (m_StorageAccount == null)
			{
				m_StorageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
			}
			return m_StorageAccount;
		}
	}

	protected TableServiceContext<TModel> ServiceContext
	{
		get
		{
			if (m_ServiceContext == null)
			{
				m_ServiceContext = new TableServiceContext<TModel>(m_TableName, StorageAccount.TableEndpoint.ToString(), StorageAccount.Credentials);
			}
			return m_ServiceContext;
		}
	}

	public TableServiceDataSource(string tableName)
	{
		m_TableName = tableName;
		StorageAccount.CreateCloudTableClient().CreateTableIfNotExist(m_TableName);
	}

	public IEnumerable<TModel> Select()
	{
		var results = from c in ServiceContext.Table
						select c;

		var query = results.AsTableServiceQuery<TModel>();
		var queryResults = query.Execute();

		return queryResults;
	}

	public IEnumerable<TModel> Select(Expression<Func<TModel, bool>> predicate)
	{
		CloudTableQuery<TModel> query = ServiceContext
			.CreateQuery<TModel>(ServiceContext.TableName)
			.Where(predicate)
			.AsTableServiceQuery<TModel>();

		var queryResults = query.Execute();
		return queryResults;
	}

	public void Delete(TModel itemToDelete)
	{
		ServiceContext.DeleteObject(itemToDelete);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges();
	}

	public void Update(TModel itemToUpdate, SaveChangesOptions saveOptions)
	{
		ServiceContext.UpdateObject(itemToUpdate);
		ServiceContext.SaveChanges(saveOptions);
	}

	public void Insert(TModel newItem)
	{
		ServiceContext.AddObject(m_TableName, newItem);
		ServiceContext.SaveChanges();
	}

	public void InsertToBatch(TModel newItem)
	{
		ServiceContext.AddObject(m_TableName, newItem);
	}

	public void SaveBatch()
	{
		ServiceContext.SaveChangesWithRetries(SaveChangesOptions.Batch);
	}
}

The data access layer ended up looking much cleaner than the Azure documentation samples… something like this:

public void AddComment(ProductCommentModel model)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	dataSource.Insert(model);
}

public IEnumerable<ProductCommentModel> GetComments(string productId)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comments = dataSource.Select()
		.Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.ProductId == productId)
		.OrderByDescending(comment => comment.Timestamp);
	return comments;
}

public void DeleteComment(string commentId)
{
	TableServiceDataSource<ProductCommentModel> dataSource = new TableServiceDataSource<ProductCommentModel>("ProductComments");
	var comments = dataSource.Select().Where(p => p.PartitionKey == ProductCommentModel.PartitionKeyName && p.RowKey == commentId);
	if (comments.Any())
	{
		dataSource.Delete(comments.First());
	}
}

With that adapter layer we thought we had it made. The data access layer looked cleaner than most SQL implementations. Still, we had too much Azure code and terminology leaking far away from the actual Azure calls. It was a small price to pay, I suppose.

Enter the EntityAdapter

The RTM announcement showed an example of what is possible with access to the serialization/deserialization logic. The sample included a class named EntityAdapter. Rory Primrose has made some great improvements to EntityAdapter. I took that same class and made just a few modifications to support my use cases. Primarily, the examples had no support for ETags, which are critically important in some scenarios. Here is my current version of EntityAdapter:

internal abstract class EntityAdapter<T> : ITableEntity where T : class, new()
{
    private string m_PartitionKey;

    private string m_RowKey;

    private string m_ETag;

    private T m_Value;

    protected EntityAdapter()
        : this(new T())
    { }

    protected EntityAdapter(T value)
    {
        if (value == null)
        {
            throw new ArgumentNullException("value", "EntityAdapter cannot be constructed from a null value");
        }

        m_Value = value;
    }

    public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        m_Value = new T();

        TableEntity.ReadUserObject(m_Value, properties, operationContext);

        ReadValues(properties, operationContext);
    }

    public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        var properties = TableEntity.WriteUserObject(Value, operationContext);

        WriteValues(properties, operationContext);

        return properties;
    }

    protected abstract string BuildPartitionKey();

    protected abstract string BuildRowKey();

    protected virtual void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    { }

    protected virtual void SetETagValue(string eTag)
    { }

    public string ETag
    {
        get
        {
            return this.m_ETag;
        }
        set
        {
            this.m_ETag = value;
            SetETagValue(value);
        }
    }

    public string PartitionKey
    {
        get
        {
            if (m_PartitionKey == null)
            {
                m_PartitionKey = BuildPartitionKey();
            }

            return m_PartitionKey;
        }
        set
        {
            m_PartitionKey = value;
        }
    }

    public string RowKey
    {
        get
        {
            if (m_RowKey == null)
            {
                m_RowKey = BuildRowKey();
            }
            return m_RowKey;
        }
        set
        {
            m_RowKey = value;
        }
    }

    public DateTimeOffset Timestamp { get; set; }

    public T Value
    {
        get
        {
            return m_Value;
        }
    }
}

To use EntityAdapter with a DTO/POCO (e.g. Racer), you write an adapter (e.g. RacerAdapter):

public class Racer
{
    [Display(Name = "Driver")]
    public string Name { get; set; }

    [Display(Name = "Car Number")]
    public string CarNumber { get; set; }

    [Display(Name = "Race")]
    public string RaceName { get; set; }

    public DateTime? DateOfBirth { get; set; }

    [Display(Name = "Last Win")]
    public string LastWin { get; set; }

    public string ETag { get; set; }

    public bool HasWon
    {
        get
        {
            return !String.IsNullOrEmpty(this.LastWin);
        }
    }

    public List<string> Validate()
    {
        List<string> validationErrors = new List<string>();

        //TODO: Write validation logic

        return validationErrors;
    }
}

internal class RacerAdapter : EntityAdapter<Racer>
{
    public RacerAdapter()
    { }

    public RacerAdapter(Racer racer)
        : base(racer)
    {
        this.ETag = racer.ETag;
    }

    protected override string BuildPartitionKey()
    {
        return Value.RaceName;
    }

    protected override string BuildRowKey()
    {
        return Value.CarNumber;
    }

    protected override void ReadValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        this.Value.RaceName = this.PartitionKey;
        this.Value.CarNumber = this.RowKey;
    }

    protected override void WriteValues(
        IDictionary<string, EntityProperty> properties,
        OperationContext operationContext)
    {
        properties.Remove("CarNumber");
        properties.Remove("RaceName");
    }

    protected override void SetETagValue(string eTag)
    {
        this.Value.ETag = eTag;
    }
}
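
That SetETagValue override is what makes optimistic concurrency work end to end: the ETag read from storage rides along on the DTO, and a conditional update can send it back. A minimal sketch, assuming the GetRacerTable() helper used in the examples below:

```csharp
// Sketch: conditional update driven by the ETag carried on the Racer DTO.
public void UpdateRacer(Racer racer)
{
    CloudTable table = GetRacerTable();

    // The adapter constructor copies racer.ETag, so Replace issues a
    // conditional write; the service rejects it with 412 (Precondition
    // Failed) if the entity changed since this Racer was read.
    var adapter = new RacerAdapter(racer);
    table.Execute(TableOperation.Replace(adapter));
}
```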

Now we have everything we need to make our data access layer simpler and domain-focused instead of table-entity-focused.

// Using TableEntity-derived class requires front-facing layers to deal with partition/row keys instead of domain-specific identifiers
public void AddRacer(RacerEntity racer)
{
    CloudTable table = GetRacerTable();

    TableOperation upsertOperation = TableOperation.InsertOrReplace(racer);
    table.Execute(upsertOperation);
}

// Using a DTO with the EntityAdapter
public void AddRacer(Racer racer)
{
    CloudTable table = GetRacerTable();

    var adapter = new RacerAdapter(racer);
    var upsertOperation = TableOperation.InsertOrReplace(adapter);

    table.Execute(upsertOperation);
}
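
Reads go through the adapter as well. Here is a sketch of a point lookup, again assuming the GetRacerTable() helper; TableOperation.Retrieve<T> requires a parameterless constructor, which RacerAdapter provides:

```csharp
// Sketch: point lookup that hands back the plain DTO, not a table entity.
public Racer GetRacer(string raceName, string carNumber)
{
    CloudTable table = GetRacerTable();

    // PartitionKey = race name, RowKey = car number, per RacerAdapter.
    var retrieveOperation = TableOperation.Retrieve<RacerAdapter>(raceName, carNumber);
    TableResult result = table.Execute(retrieveOperation);

    // Result is null when no entity matched; otherwise unwrap the DTO
    // (the ETag has already been copied onto it via SetETagValue).
    var adapter = (RacerAdapter)result.Result;
    return adapter == null ? null : adapter.Value;
}
```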

With or without EntityAdapter, SCL 2.1 gave us TableEntity, TableOperation, and friends that really simplify our code. EntityAdapter is icing on the cake, and it really helps to simplify Azure-hosted web APIs.

Feb 06 2014

If you’ve been operating an application as an Azure Cloud Service for a year or two, you are probably due to renew or upgrade your SSL certificate. People move on, and contractor rates go up. You may not be the person who installed the original SSL cert, and you may not have documentation on how to install a new one. The process is simple and takes only an hour once you’ve acquired your new certificate.

  1. Download the new certificate from your certificate provider
    Each provider differs in how they deliver certificates. GoDaddy will have you select your server type (e.g. IIS6, IIS7, Tomcat, Apache) before downloading the certificate. When you requested the certificate from your provider, you had to use one of these servers to generate the CSR (certificate signing request). You will receive a CRT (web server certificate) from your provider. It’s important to choose the right server type so the CRT can be imported. If you’re deploying to Azure, you’ll probably choose IIS7 like I did. Download the certificate files (or zip file) and save them somewhere safe from prying eyes.

    NOTE: You will likely also receive some intermediate certificates. These have much longer lifespans than a 1-2 year SSL certificate. You’ll follow your provider’s instructions to install these later, if necessary.

  2. Complete the certificate request on IIS
    If you received intermediate certificates from your provider, now is the time to install them. This ensures that you have a full certification path. Follow your provider’s instructions for this. These intermediate certificates have lifespans of up to 10-20 years, so if the thumbprint matches one already uploaded, no action is necessary. You can check by double-clicking the certificate and reading the thumbprint on the Details tab, then comparing that value to what was previously uploaded to Azure under the Cloud Services – <Your Cloud Service> – Certificates tab. Any previously uploaded intermediate certificates will appear there, as well as your existing SSL certificate.

    In IIS7 on your server, VM, or developer workstation, click on Server Certificates. In the Actions pane on the right, click Complete Certificate Request. Browse to the CRT file you downloaded in the previous step. Type a friendly name like “*.my-domain.com” and click OK. If the import is successful, you’ll see your certificate appear in the list on the Server Certificates screen in IIS.

  3. Export the certificate as a PFX file
    Open MMC and add the snap-in to work with the Local Machine certificate stores appearing as Certificates (Local Computer). Find your certificate in the Personal \ Certificates store and look for the friendly name you entered in the previous step. Right-click on the certificate, choose All Tasks and then Export to open the Certificate Export Wizard. Follow the wizard, choosing Yes, export the private key and Include all certificates in the certification path if possible options. Type a good password and choose a file path to export the PFX file. For future-you’s sake, name the file with a .pfx extension. When the wizard completes, your PFX file will be ready for use.

    Before you leave the certificate manager (MMC), double-click the certificate to open it and copy the thumbprint from the Details tab. You’ll need the thumbprint in later steps.

  4. Upload the PFX and intermediate certificates to Azure
    Now that you have both certificates, navigate to the Azure Management Portal and click on Cloud Services. Find the desired cloud service in the list, and click on it to select it. Click on the Certificates tab, then click the Upload button in the bottom menu. Browse to your PFX file, type the password, and click the OK button.
  5. Change the service configuration to use the new thumbprint
    Open your application’s solution, and open the ServiceConfiguration.Cloud.cscfg file in the Azure hosting project. Find the existing SSL certificate under <Certificates>. Paste in your new thumbprint, making sure it’s all uppercase with no spaces. If your thumbprintAlgorithm has changed, update that value in the config file as well.
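    For reference, the fragment you’re editing looks something like this (the certificate name and thumbprint shown here are placeholders):

```xml
<Certificates>
  <!-- thumbprint: the new certificate's value, uppercase, no spaces -->
  <Certificate name="SSL"
               thumbprint="A1B2C3D4E5F60718293A4B5C6D7E8F9012345678"
               thumbprintAlgorithm="sha1" />
</Certificates>
```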
  6. Deploy your app to Staging
    Now that your certificate is on Azure and your application has been updated, it’s time to deploy to Staging. Once your deployment is complete and your Staging environment status returns to “Running”, try out the Staging environment using the HTTPS version of the Site URL shown in the Azure Management Portal. In Chrome, find the certificate information by clicking on the lock symbol in the address bar, then clicking the Connection tab and then Certificate Information.
    It’s expected that the browser complains at this point, because you aren’t using the intended domain name; Staging uses <random>.cloudapp.net. Check that the end date, thumbprint, name, and other properties are what you expect to see.
  7. Swap VIPs
    Once you’re satisfied that the Staging environment is a good build and that the certificate is correctly assigned, swap the Staging and Production environments. When the swap completes (under 30 seconds), check out your application using the HTTPS endpoint and domain name. You should see the lock in the address bar; make sure to check the certificate properties (e.g. expiration date) again.

That’s it. Party on!