Enterprise applications store their data in a relational database. Our code reads the data stored in tables with many complex joins and business-rule-laden queries, and we use the results of those queries to construct equally complex business entities for our application logic. Most developers, myself excluded, hate working with the database. Writing, modifying, or even seeing T-SQL makes some developers itch. LINQ to SQL serves as a partially effective hydrocortisone to relieve that itch, but they still need to maintain the schema, write SQL-mindful LINQ queries, and deal with constant DataContext updates.
Imagine a world where you no longer need to translate your complex business entities to and from relational tables. A world with no database backing store. A world where we create our business entities and store them in memory. Better yet, in memory on a shared resource. Does it sound like an inconceivable futuristic developer heaven? Well, it probably is, but this is really cool stuff in the works.
Enter the Microsoft project code-named “Velocity.” The blurb on the overview page reads:
“Velocity” is a distributed in-memory application cache platform for developing scalable, high-performance applications. “Velocity” can be used to cache any CLR object and provides access through simple APIs. The primary goals for “Velocity” are performance, scalability and availability.
I have been working with the Digipede Network, the leading grid computing software solution, for a few months. The Velocity architecture sounds remarkably similar to Digipede’s. I have seen the great benefits of the Digipede Network and have high expectations for Velocity.
The Digipede Network, for those of you who haven’t seen it yet, consists of a central Digipede Server and one or more Digipede Agents. The server receives client requests and assigns tasks to the agents. Clients use the Digipede API to communicate with the server; the API essentially wraps client-to-server and server-to-client WSE2 web service calls. This architecture lets you take almost any CPU-intensive process and spread the workload across tens or hundreds of commodity or server-grade machines. The result is a high-performing, easily scaled system with few code changes from what you write today.
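This is not the Digipede API (a real product with its own client library), but the master/worker pattern it implements can be sketched in-process with standard Java: a “server” (the executor) hands independent, CPU-intensive tasks to a pool of “agents” (worker threads), and the client gathers the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Single-process stand-in for the master/worker pattern described above.
// On the Digipede Network, each submitted task could run on a different
// physical agent machine instead of a worker thread.
public class MasterWorkerSketch {

    // Placeholder for a CPU-bound task: sum of squares up to n.
    static long expensiveComputation(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService agents = Executors.newFixedThreadPool(4);
        List<Future<Long>> results = new ArrayList<>();

        // The "server" assigns independent tasks to the "agents".
        for (int n = 1; n <= 8; n++) {
            final int work = n;
            results.add(agents.submit(() -> expensiveComputation(work)));
        }

        // The client gathers results as the agents finish.
        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();
        }
        agents.shutdown();
        System.out.println(total);  // prints 540
    }
}
```

The appeal of the real thing is that the loop body stays almost unchanged while the work fans out across machines rather than threads.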
Digipede Network Diagram:
Digipede works only in this configuration, while Velocity has two proposed deployment models: a “caching tier”, similar to Digipede’s Server and Agent configuration, or Velocity hosted as a Caching Service directly in IIS7. I don’t know how communications will be handled between the client API and the caching tier, but I assume it will be some sort of service call (WCF, perhaps). All CLR objects stored in the Velocity cache must be marked [Serializable], just as task worker classes must be to work with Digipede.
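The [Serializable] requirement makes sense once you remember that the cache lives on another machine: the object has to survive a round trip through a byte stream. The closest Java analog of the .NET attribute is implementing java.io.Serializable, which this sketch uses; the Customer entity here is hypothetical.

```java
import java.io.*;

// Demonstrates the constraint a distributed cache imposes: any object it
// stores must survive a serialize/deserialize round trip, because that is
// how the cache ships it to (and back from) another host.
public class SerializableEntityDemo {

    // Hypothetical business entity. Every field it holds must itself be
    // serializable, just as with nested members of a [Serializable] CLR type.
    static class Customer implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final int id;
        Customer(String name, int id) { this.name = name; this.id = id; }
    }

    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Customer original = new Customer("Contoso", 42);
        Customer copy = (Customer) fromBytes(toBytes(original));
        System.out.println(copy.name + " " + copy.id);  // prints "Contoso 42"
    }
}
```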
The Velocity API looks simple enough, too. It exposes intuitive Get() and Put() methods that operate on a cache you address by name. I can see how versioning of the cached objects might get tricky. Your application will also need a new configSection that specifies the deployment mode and locality and lists the cache hosts. Because this is a distributed solution, the standard single-virtual-machine playground doesn’t work well for really testing it out.
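To make the shape of that API concrete, here is a toy, in-process model of it: look up a cache by name, then Put and Get objects by key. The names and signatures are illustrative only; this is not the actual Velocity client API, and a real distributed cache would route these calls to remote hosts.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of a named-cache API: getCache("name") followed by put/get.
// Everything here lives in one process; the real product distributes the
// named caches across a tier of cache hosts.
public class NamedCacheSketch {
    private static final Map<String, Map<String, Object>> caches = new ConcurrentHashMap<>();

    // Look up (or lazily create) a cache by name.
    static Map<String, Object> getCache(String name) {
        return caches.computeIfAbsent(name, n -> new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        Map<String, Object> orders = getCache("orders");
        orders.put("order:1001", "widget x 3");         // Put
        System.out.println(orders.get("order:1001"));   // Get, prints "widget x 3"
    }
}
```

Even in this toy form you can see where versioning gets tricky: two clients can Get the same key, modify their copies independently, and Put conflicting versions back.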
This looks promising, and I’ll be following the progress of the project closely.