Blog Archive

This is a continuation of the Javascript Primer series, where we focus on topics essential for .Net or back-end developers who are transitioning to Javascript and client-side development.  Many of the topics covered in this series relate to producing web apps that run in the browser.  Some of the design patterns and practices are applicable to NodeJS / server-side Javascript apps as well.

Keeping IT Together

One complaint that many have when embarking on applications of medium complexity is that organizing your code is difficult; indeed, even with frameworks you can still end up with a giant god object that contains all of your data-bindings, algorithms, and helper methods.  If you take the attitude that Javascript is a second-class, toy language where you simply store jQuery animation methods and AJAX calls, you quickly end up with files that can be 3,000-4,000 lines long!  Maintaining that is beyond torture.

A common design pattern used to alleviate this issue is the Model View ViewModel (MVVM) pattern.  The goal of MVVM is to divide the logic of your application into discrete components for UX, business logic, and data-binding activities.  KnockoutJS is a popular framework that supports this approach.  Addy Osmani’s “Learning JavaScript Design Patterns” defines the primary components of MVVM, and it is a great primer to read if you are unfamiliar with these terms.  For our purposes:

Model will contain the data elements, such as FirstName, LastName, etc.  Essentially this is a building block for your front end that will “model” entities such as a user, customer, or project.

ViewModel will contain the behavior for the UI.  This can include business logic and formatting of data, and it bridges the gap between data coming from a server and what is useful for the user to interact with.  The ViewModel will also use the Models.

View will be the html document.  It will contain the directives for what to present to the user, and will interact with the ViewModel.
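To make these roles concrete, here is a minimal sketch of the three pieces in Knockout – the names are illustrative, not code from the sample project:

//  Model: a plain building block wrapped in observables
function TeamMember(firstName, lastName) {
    this.firstName = ko.observable(firstName);
    this.lastName = ko.observable(lastName);
}

//  ViewModel: behavior and state for the UI, composed of Models
function ProjectViewModel() {
    var self = this;
    self.projectName = ko.observable("");
    self.team = ko.observableArray([]);
    self.teamSize = ko.computed(function () {
        return self.team().length;
    });
    self.addMember = function (first, last) {
        self.team.push(new TeamMember(first, last));
    };
}

//  View: the html document binds declaratively, e.g.
//  <span data-bind="text: teamSize"></span>
ko.applyBindings(new ProjectViewModel());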

We Still Have A Problem

So even with the MVVM pattern at our disposal we still have to fight bloated ViewModels.  From here on out we will use a sample project for our discussion: we want to maintain project information comprised of a project name, description, and notes, team members, and a simple timeline / calendar function.  Sounds straightforward enough.  On our UI we will have a tab for each grouping of information, and as information changes on one tab, we want that information available on each of the other tabs.  Again, nothing too earth-shattering here.  It should look something like the JSFiddle below.  (NOTE:  For the sake of this exercise all code has been placed in the Javascript section of the JSFiddle.  You should split your code into modules so they can be easily managed.)

Each tab on this screen has a logical grouping of information.  For the sake of argument let’s say that we decide to implement this UI with just one ViewModel for all three tabs.  For data-binding we will be using KnockoutJS.  Starting out we have just one ViewModel – ProjectViewModel – that handles all updates; more specifically, we can take advantage of the data-binding facilities here, so when a team member is added, we can detect this change and automatically execute an update to the calendar.

We have three tabs.  “Project Info” is pretty simple, and it uses the computed observable teamSize to display the count of the project team.  You’ll note that any changes to “Project Name” will be reflected on the “Team” tab.  The “Team” tab allows you to add a team member; to support updating the calendar with the team member’s start date, we have created a subscription to the projectTeam observable array.  The “Timeline” tab has a calendar that displays each team member’s start date, and it is updated each time we add a new team member.  Also, any changes to the number of team members will be reflected on the first tab, “Project Info”.

Indeed this works well, since we have the advantage of one ViewModel and can easily share one set of data between all three tabs.  But we still have an issue once we start adding more functionality and increasing the number of properties and functions in ProjectViewModel.  Soon maintaining the ViewModel will become difficult, and testing discrete portions of functionality will rapidly become difficult as well.  We need to isolate, or at least reduce, the impact that alterations made to one ViewModel have on the others.  This means adhering to the principle of “loose coupling”.

So what does it mean for us to create a ViewModel per tab?  The biggest issue now will be communicating between the ViewModels, and as always, our mantra should be “Keep it simple.  Please.  Pretty please.”  So how do we communicate data updates without creating three ViewModels that are still highly dependent on one another?  Our solution is to implement a publish and subscribe message architecture.  While this sounds intimidating, it simply means we will create each ViewModel with a method – a “subscription” – that will wait for a message carrying the data we need.  As a corollary, each ViewModel needs a way to “publish” any changes to its data as they occur.

Super Simple Message with PostalJS

PostalJS is a very simple messaging framework that facilitates pub / sub messaging in a unique way.  In particular, PostalJS allows us to set up a subscription that remains “ignorant” of how the message is generated.  There is no “pre-registration” process to map out a route between ViewModels; on the contrary, your Javascript object simply declares that it will update when a message arrives.  This also means that several ViewModels can subscribe to the same event, and the publisher doesn’t care.

Here is a brief sample taken from GitHub:

var subscription = postal.subscribe({
        channel: "orders",
        topic: "item.add",
        callback: function(data, envelope) {
            // `data` is the data published by the publisher.
            // `envelope` is a wrapper around the data & contains
            // metadata about the message like the channel, topic,
            // timestamp and any other data which might have been
            // added by the sender.
        }
    });

postal.publish({
        channel: "orders",
        topic: "item.add",
        data: {
            sku: "AZDTF4346",
            qty: 21
        }
    });

Looks pretty cool.  And simple.  So for us, our ViewModels will remain ignorant of each other, and will only be aware that someone, something, has sent data to them.  Another nice by-product is that should one of the ViewModels fail, the impact is greatly reduced: should the ViewModel with the calendar fail, the other tabs could still collect data and function.  Better yet, adding new features to a ViewModel is going to be easier to achieve since we can test in isolation, and if the new features do not interrupt our pub / sub messaging, we can feel confident that new changes can be rolled into production without bringing down the entire application.  Let’s look at the results, then examine the changes to our app’s architecture.

A Brave New World

If you open the Javascript tab of the JSFiddle, you’ll see that we have broken ProjectViewModel up into three ViewModels:  TeamViewModel, TimelineViewModel, and of course the remnants of ProjectViewModel.  We’ll take the updates one at a time, focusing on the feature in the UI and working back to the code.

Project Name – As before, entering the project name on the “Project Info” tab updates the title on the “Team” tab, and the update happens when the input box loses focus.  Now let’s focus on the code.  Open up the Javascript in the JSFiddle, and you’ll see some drastic changes.  In ProjectViewModel we have created a Knockout subscription that publishes a message:

_projectName.subscribe(function(newValue){

        //    Now use PostalJS to push message out
        postal.publish({
            channel: "project",
            topic: "edit.projectNameChange",
            data: {
                projectName: _projectName()
            }
        });
    });

So what we are doing here is telling Knockout, “Anytime there is a change to the observable _projectName, fire off a function that publishes a piece of data with the new value of _projectName.”  That’s it.  ProjectViewModel has fulfilled its responsibility and told anyone who will listen that a change has taken place.  Now find the TeamViewModel object.  Here we have a subscription that waits for a message containing _projectName updates:

var projectNameChangeSubscription = postal.subscribe({
        channel: "project",
        topic: "edit.projectNameChange",
        callback: function(data, envelope) {
            _projectName(data.projectName);
        }
    });

We get the data, and merely update an observable on the TeamViewModel.  It should be apparent by now that neither ProjectViewModel nor TeamViewModel “knows” about the other.  ProjectViewModel says “_projectName update ready”, and whoever can handle it does so.  TeamViewModel just grabs the update and uses it.  That’s it.  The messaging works by creating a “channel” – in this case “project” – and further defining the message with a topic – “edit.projectNameChange”.  This means you can have a single channel that supports multiple topics, letting you narrowly define your event messages.
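Because topics are just strings, one channel can carry many of them, and PostalJS also supports wildcard topic bindings.  As an illustration (this subscriber is hypothetical, not part of the sample project), a logging object could watch every “edit” topic on the “project” channel:

//  Wildcard binding: "*" matches one period-delimited topic segment,
//  so "edit.*" receives "edit.projectNameChange", "edit.teamUpdate", etc.
var auditSubscription = postal.subscribe({
    channel: "project",
    topic: "edit.*",
    callback: function (data, envelope) {
        console.log("change published on topic: " + envelope.topic);
    }
});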

ProjectTeam Update – Play around with the “Team” tab and add some team members with different dates.  This operates just as before.  Under the hood, the code in TeamViewModel that handles the publishing is:

_projectTeam.subscribe(function(){
        //    this is how postaljs publishes events
        postal.publish({
            channel: "project",
            topic: "edit.teamUpdate",
            data: {
                projectTeam: ko.toJS(_projectTeam)
            }
        });

    });

As before, we are asking KnockoutJS to watch for changes to the _projectTeam array, and when a change occurs, we publish a different message.  The topic – “edit.teamUpdate” – distinguishes this event from the previous one.  We simply take our observableArray, convert it to a plain Javascript array, and throw it out to whoever wants it.  If you look in both ProjectViewModel and TimelineViewModel you’ll see subscriptions that handle this topic.  This brings us to another important aspect of messaging and one of PostalJS’ strengths:  should we want to add more ViewModels that need _projectTeam updates, we only need to subscribe to the topic “edit.teamUpdate”.  We could subscribe 50 more times if we wanted.  The publisher / source ViewModel does not need any alteration, and this achieves our goal of loose coupling.

What Are The Other Benefits?


Looking at our ViewModels, you may have noticed that we are exposing less in the return statements, and with each ViewModel handling less functionality they become simpler to read and maintain.  Imagine that we needed to support 20 different data entry input boxes – one monster ViewModel would have a huge return statement, and it would be hard to keep things straight in one “god” class.

With PostalJS and event messaging we no longer need to expose an entry point in our objects for data to be passed in.  The subscriptions allow the ViewModel to handle receiving update messages internally.  A quite common scenario is responding to an AJAX update: with PostalJS you can avoid making calls into each ViewModel when new data arrives from the server.  Simply publish to the appropriate topic and let the ViewModels respond as they need, as sketched below.
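Here is a sketch of that idea; the endpoint URL and topic name are made up for illustration and are not part of the sample project:

//  One handler for the server response; no ViewModel is called directly
$.getJSON("/api/project/42", function (serverData) {
    postal.publish({
        channel: "project",
        topic: "server.refresh",
        data: serverData
    });
});

//  Inside any ViewModel that cares about server refreshes:
var refreshSubscription = postal.subscribe({
    channel: "project",
    topic: "server.refresh",
    callback: function (data, envelope) {
        //  update this ViewModel's observables from data
    }
});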

For those of you who may not know, DataTables.Net is a fantastic jQuery plug-in that creates a data grid presentation and offers support for filtering and server-side paging solutions.  Yep, define an endpoint web service, pump down your data, and you are good to go.  But the cool kids these days are into the NoSQL thing, and one of the great entries in the document database arena is RavenDB.  With Raven, you define your domain objects and just store them in the database; Raven will do its magic and persist your objects as documents.  Have a List<Customer> to store?  Give it to Raven and it will persist the customers as a document in JSON format.

This post will show you how to combine the front-end goodness of DataTables with the back-end magic of RavenDB. The goals are to provide:

  • The ability to define a single class that indexes your data.
  • Control over what data is selected by defining the columns, sort order and paging size in Javascript. In other words, DataTables will tell the server what it wants to pull back.
  • Support for filtering across properties with a single search field, à la Google.
  • Above all, to save you time and make you a hero in front of your fans. :)

For the Impatient, Here’s the End Product

There’s a lot of ground that we’ll cover, but for those who want to see the light at the end of the tunnel, here’s what the end solution will look like. You may want to download the solution first so you can follow along in the code. First, your web service or controller will have the following:


[HttpPost]
public JsonResult GetTenants(string jsonAOData)
{
   var tenantPager = new DataTablesPager<Tenant, Tenant_Search>(DocumentStore);
   var results = tenantPager.PrepAOData(jsonAOData)
                 .FilterFormattedList();

   return Json(results);
}

// The core method that is used to get the data is in DataTablesPager.cs
public List<T> Filter(int pageSize, int pageIndex, string[] terms)
{
   var targetList = new List<T>();
   RavenQueryStatistics stats;

   using(var session = docStore.OpenSession())
   {
      if (terms[0].Length > 0)
      {
         targetList = session.Query<DataTablesReduceResult, TIndexCreator>()
                               .Customize(x => x.WaitForNonStaleResults())
                               .SearchMultiple(x => x.QueryFields, string.Join(" ", terms),
                                                options: SearchOptions.And)
                               .Statistics(out stats)
                               .Skip(pageIndex)
                               .Take(pageSize)
                               .As<T>()
                               .ToList();

         this.totalDisplayResults = stats.TotalResults;

         session.Query<DataTablesReduceResult, TIndexCreator>()
                 .Statistics(out stats)
                 .As<T>()
                 .ToList();
         this.totalResults = stats.TotalResults;
      }
// Code reduced for reading purposes.

Take note of the generics on the constructor – Tenant is your domain object, and Tenant_Search is the class that Raven will use to create the index for retrieving the data, as well as defining which properties you can filter on.  We’ll cover indexing shortly, along with some RavenDB background.

Your Javascript will be the following:



var otable;

$(document).ready(function(){
   otable = $("#tenantTable").dataTable({
               "bProcessing": true,
               "bSort": true,
               "sPaginationType": "full_numbers",
               "aoColumnDefs": [
               { "sName": "Name", "aTargets": [0], "bSortable": true, "bSearchable": true },
               { "sName": "Agent", "aTargets": [1], "bSortable": true, "bSearchable": true },
               { "sName": "Center", "aTargets": [2], "bSortable": true, "bSearchable": true },
               { "sName": "StartDate", "aTargets": [3], "bSortable": true, "bSearchable": true },
               { "sName": "EndDate", "aTargets": [4], "bSortable": true, "bSearchable": true },
               { "sName": "DealAmount", "aTargets": [4], "bSortable": true, "bSearchable": true }
               ],
               "oLanguage": {
               "sSearch": "Search all columns:"
            },
               "aaSorting": [[1, "asc"]],
               "iDisplayLength": 7,
               "bServerSide": true,
               "sAjaxSource": "GetTenants",
               "fnServerData": function (sSource, aoData, fnCallback) {

                      var jsonAOData = JSON.stringify(aoData);

                     $.ajax({
                       //dataType: 'json',
                       contentType: "application/json; charset=utf-8",
                       type: "POST",
                       url: sSource,
                       data: "{jsonAOData : '" + jsonAOData + "'}",
                       success: function (msg) {
                         fnCallback(msg);
                       },
                       error: function (XMLHttpRequest, textStatus, errorThrown) {
                         alert(XMLHttpRequest.status);
                         alert(XMLHttpRequest.responseText);

                       }
                     });
         }
   });

   otable.fnSetFilteringDelay(1000);
});

It’s actually longer than the .Net stuff!!! We’ll cover what this means as well.

Getting Data and Providing Search with RavenDB

This post assumes that you have installed RavenDB, can connect to it, know how to store your objects, and can perform queries with LINQ. There are some good tutorials, such as Rob Ashton’s video introduction to Raven, as well as a brief overview by Sensei (me). We’re going to focus on Raven’s inherent full-text search capability and rely on Raven’s built-in paging mechanism to help us achieve our goals. While Raven provides great capability, it is not SQL, and much of what you know about LINQ and LINQ to SQL will help you – and paint you into a corner – at the same time. We’ll cover those aspects too.

First off, RavenDB is built on top of the search engine Lucene.Net. It is a schema-less database, so up front we will need to identify how we want to fetch data, as the indexes provide super fast data retrieval. Raven indexes reduce the need to devote huge CPU cycles to processing a query, as the index is built from the documents and processed as a background operation. This operation is asynchronous and is performed by Lucene. Without this approach, every query would force a complete scan of all documents. A miserably slow scan. With indexes defined up front, Raven will work quietly to keep the indexes up to date as new documents are created. So why is this important? Well, what you think you are doing in LINQ:


var search = "Bonus";
var steps = session.Query<Step>()
                    .Customize(x => x.WaitForNonStaleResults())
                    .Where(x => x.State.StartsWith(search) || x.WorkflowName.StartsWith(search))
                    .ToList();

 

is really translated to Lucene syntax. So what we get in the end is State:Bonus OR WorkflowName:Bonus. While it is simple to write a query that includes all the properties of an object, if you had an object with 15 properties would you really want to create a monster statement with a ton of ||’s? Hell no! In the TestSuite project of the source code there are a few examples of using pure LINQ queries. Check out the method “CanFilterAccrossAllStringProperties” and you will see where things were headed.

We want to be like Fonzie, and what was Fonzie? Correctomundo – he’s cool. A good solution would be to know what properties a domain object has and perform a filter against those properties. In other words, it would be really helpful if we could write code that looks like this:


var propertyFilterSteps = session.Query<Step>()
                                 .Customize(x => x.WaitForNonStaleResults())
                                 .Where(AllStringPropertiesFilter(search,
                                      "Answer,AnsweredBy,Id,Participants,WorkflowName,WorkflowType"))
                                 .ToList();

Here we are using an Expression<Func<T, bool>> to pass in a delimited list of property names, and with a little LINQ against the class Step we can generate a lambda to process the filter. This is in the test method “CanFilterAccrossAllStringProperties”. It worked great until we needed to include DateTime properties. The code is included in the project, so you can look at it there.
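For flavor, here is a rough sketch of the idea behind that filter – the project’s actual implementation differs (and had to wrestle with DateTime properties) – building x => x.A.StartsWith(term) || x.B.StartsWith(term) || … from the delimited property list:

using System;
using System.Linq.Expressions;

public static class PropertyFilter
{
    //  Sketch only: composes a StartsWith() check for each named string
    //  property into one OR'd predicate suitable for a Where() clause.
    public static Expression<Func<T, bool>> AllStringPropertiesFilter<T>(
        string term, string delimitedProperties)
    {
        var parameter = Expression.Parameter(typeof(T), "x");
        Expression body = Expression.Constant(false);

        foreach (var name in delimitedProperties.Split(
                     new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var property = Expression.Property(parameter, name.Trim());
            var startsWith = Expression.Call(
                property,
                typeof(string).GetMethod("StartsWith", new[] { typeof(string) }),
                Expression.Constant(term));
            body = Expression.OrElse(body, startsWith);
        }

        return Expression.Lambda<Func<T, bool>>(body, parameter);
    }
}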

So how do we achieve the goal of having a single text-box-driven search that queries across all types of properties, so that typing “Spock 2010” queries the properties you specify for both “Spock” and “2010”? Let’s go back to Raven: you can specify an index by mapping which properties you want included in it and how you want Raven / Lucene to parse the text for matching values. Raven provides a class called AbstractIndexCreationTask where you define a Map / Reduce function to create your index. In other words, you are picking which properties are included in the index. You can define the output of the Map to be anything you wish. This is held in a class that we’ll name ReduceResult, and we will query against the properties of that class. We want to tell Raven to take the significant properties and index them in a format that we can match our terms against. So we will create the following index that allows us to filter on any term. This is found in Step_Search.cs in the Index folder:


public class Step_Search : AbstractIndexCreationTask<Step, Step_Search.ReduceResult>
{
	public class ReduceResult
	{
		public string[] QueryFields { get; set; }

	// ... code eliminated for reading purpose
	}

	public Step_Search()
	{
		Map = steps =>
		from step in steps
		select new
		{
			QueryFields = new [] { step.State, step.Answer, step.AnsweredBy, step.WorkflowName,
			step.Created.ToShortDateString(), step.Created.Year.ToString(),
			step.Created.Month.ToString() + "/" + step.Created.Year.ToString()},
			DateCreated = step.Created,
			WorkflowName = step.WorkflowName,
			State = step.State
		};

	Indexes.Add(x => x.QueryFields, FieldIndexing.Analyzed);

// ... more code eliminated for reading purposes

So what we have done is create an index that has an array of strings, and this array holds the text of the properties we will match against. Raven has a method called Search that will perform a “StartsWith”-style match against each item in the array. The call is .Search(x => x.QueryFields, “string to be searched”). If you take a look at the index, we have done some additional things with dates: for one, we create a string representation in short date format, so when a user knows the exact date they can enter it and the pager will match it. But we want to make things as easy as possible, so we have also created string representations in mm/yyyy format, making it easy for users to filter when they only know the month and year of the item they are looking for. “I think it was April last year …”. This is a big win for users who don’t recall exact details, as it allows them to quickly discover what they are looking for.

One last thing before we move on to making this work with DataTables. Raven’s Search method works with the IRavenQueryable collection. Take a look at the DataTablesPager.Filter method and you will see a SearchMultiple method. This is put in place to perform a search for multiple terms; in other words, it will search for “Spock” and then chain a search for “2010” against the IRavenQueryable. Phillip Haydon came up with this approach, which works with partial matches because it gives Lucene the right syntax. Otherwise you end up with weird results, because if you feed Lucene “spoc 201” the tokens its text analyzer creates will not pick up what you need. Phillip’s brilliant approach bridges this gap by using an extension method to chain the search terms. It is found in the class RavenExtensionMethods.cs, and it basically tokenizes the search string, creates an array, and issues an individual call to the Search() method for each member of the array. It allows us to perform advanced filtering with partial matches like “spoc 201”. Try this out on the Tenant.aspx page of the WebDemo solution and you’ll see how it works.
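A condensed sketch of what SearchMultiple is doing follows; the real version in RavenExtensionMethods.cs differs in details such as Lucene escaping, and the namespaces assume the Raven client of that era:

using System;
using System.Linq.Expressions;
using Raven.Client;
using Raven.Client.Linq;

public static class SearchExtensions
{
    //  Sketch: split the search string into terms and chain one Search()
    //  call per term, so Lucene sees each token as its own match clause.
    public static IRavenQueryable<T> SearchMultiple<T>(this IRavenQueryable<T> self,
        Expression<Func<T, object>> fieldSelector,
        string searchTerms,
        SearchOptions options = SearchOptions.Or)
    {
        var result = self;
        var terms = searchTerms.Split(new[] { ' ' },
                                      StringSplitOptions.RemoveEmptyEntries);

        foreach (var term in terms)
        {
            result = result.Search(fieldSelector, term, options: options);
        }

        return result;
    }
}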

Outta Breath Yet? Let’s Talk DataTables.Net!!!

Breathing hard yet? Good! There’s more to do – how does this work with DataTables.Net? DataTables uses the following parameters when processing server-side data:

Sent to the server:

| Type | Name | Info |
| --- | --- | --- |
| int | iDisplayStart | Display start point |
| int | iDisplayLength | Number of records to display |
| int | iColumns | Number of columns being displayed (useful for getting individual column search info) |
| string | sSearch | Global search field |
| boolean | bEscapeRegex | Global search is regex or not |
| boolean | bSortable_(int) | Indicator for whether a column is flagged as sortable on the client side |
| boolean | bSearchable_(int) | Indicator for whether a column is flagged as searchable on the client side |
| string | sSearch_(int) | Individual column filter |
| boolean | bEscapeRegex_(int) | Individual column filter is regex or not |
| int | iSortingCols | Number of columns to sort on |
| int | iSortCol_(int) | Column being sorted on (you will need to decode this number for your database) |
| string | sSortDir_(int) | Direction to be sorted – “desc” or “asc”. Note that the prefix for this variable is wrong in 1.5.x, where iSortDir_(int) was used |
| string | sEcho | Information for DataTables to use for rendering |

Reply from the server

In reply to each request for information that DataTables makes to the server, it expects to get a well formed JSON object with the following parameters.

| Type | Name | Info |
| --- | --- | --- |
| int | iTotalRecords | Total records, before filtering (i.e. the total number of records in the database) |
| int | iTotalDisplayRecords | Total records, after filtering (i.e. the total number of records after filtering has been applied – not just the number of records being returned in this result set) |
| string | sEcho | An unaltered copy of sEcho sent from the client side. This parameter will change with each draw (it is basically a draw count) – so it is important that this is implemented. Note that it is strongly recommended for security reasons that you ‘cast’ this parameter to an integer in order to prevent Cross Site Scripting (XSS) attacks. |
| string | sColumns | Optional – a comma-separated string of column names (used in combination with sName) which will allow DataTables to reorder data on the client side if required for display |
| array | aaData | The data in a 2D array |

DataTables will POST an AOData object. The class DataTablesPager.cs handles parsing this object with the method PrepAOData, which is responsible for determining what properties we are querying, how the data will be sorted, the paging size, and any terms for filtering. Because we have used generics, PrepAOData doesn’t care what object in your domain you are using: it is designed to read the properties and match them against the list of data items that DataTables has sent up to our application on the server. Again, our goal is to let DataTables dictate what to look for, and as long as we did our work when we created the index we should have great flexibility.
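The deserialization side of PrepAOData boils down to something like the following sketch; this is hypothetical code (the real method lives in DataTablesPager.cs), and the JSON library choice here is an assumption made for illustration:

using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;   // illustrative choice, not necessarily the project's

//  jsonAOData arrives as a JSON array of { "name": ..., "value": ... } pairs.
public class AODataItem
{
    public string Name { get; set; }
    public string Value { get; set; }
}

//  Pull out the paging and search values DataTables sent us:
var aoDataList = JsonConvert.DeserializeObject<List<AODataItem>>(jsonAOData);

int pageSize  = int.Parse(aoDataList.First(x => x.Name == "iDisplayLength").Value);
int pageIndex = int.Parse(aoDataList.First(x => x.Name == "iDisplayStart").Value);
string search = aoDataList.First(x => x.Name == "sSearch").Value;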

Let’s look at the Javascript again:

"aoColumnDefs": [
{ "sName": "Name", "aTargets": [0], "bSortable": true, "bSearchable": true },
{ "sName": "Agent", "aTargets": [1], "bSortable": true, "bSearchable": true },
{ "sName": "Center", "aTargets": [2], "bSortable": true, "bSearchable": true },
{ "sName": "StartDate", "aTargets": [3], "bSortable": true, "bSearchable": true },
{ "sName": "EndDate", "aTargets": [4], "bSortable": true, "bSearchable": true },
{ "sName": "DealAmount", "aTargets": [4], "bSortable": true, "bSearchable": true }
],

In the server-side application we have Tenant_Search.cs, which has created an index with the properties Name, Agent, Center, StartDate, EndDate and DealAmount. The Javascript above is DataTables’ way of saying “Hey, server, give me information back in the form of an array of value pairs, and by the way, here are the data columns to use when you get me my stuff.” On the server side, we don’t care what the order of columns in the grid will be, as the server assumes DataTables will take care of that. And indeed, DataTables supplies the request in the sName value pair. The server fetches it, spits it back to the browser, and DataTables munches it. You can change the Javascript and leave your server application alone, as long as you stick to using the fields you included in your index. Just like Fonzie, be cool.

But even cooler is the fact that Raven will handle paging for us: it has a built-in limit of returning up to 128 documents at a time. Given that its retrieval speed is very fast, this works very well for us. If you look at the Raven console, for each page you retrieve you will see a very low fetch time. Remember, there is very little processing for the query, as the index has already performed the heavy lifting. As an example, the page Tenants.aspx in the WebDemo solution will page and filter 13,000+ documents. It is lightning fast.

Has Your Head Exploded Yet?

This is a lot to digest. Source code is here, along with the means to create 13,000 documents that you can use for testing. Please note that you will be required to pull down the Raven assemblies / packages via NuGet; otherwise the download would be about 36 MB. Work on responding to sorting requests has been started, and hopefully you’ll see how that’s solved in a future post. But what we have accomplished is a very robust and easy way to display data from our document database, with low effort in the back-end application.

Play with the code. The only way we make this better is to critique constructively, adapt to better ideas, and grow! Some of the failed experiments have been included in the tests so you can see how things progressed. They are marked as failures so you can focus on testing the DataTablesPager class; these failures are interesting, though, and can provide insight into how the solution was arrived at. Also, the first time you fire up the web site, the Global.ascx page will look for the test records and create them. This takes some time, so if you want to skip the wait, those sections are marked for you so you can comment them out. Enjoy.

Some gifts just keep on giving, and many times things take on a momentum that grows beyond your expectations. Bob Sherwood wrote to Sensei and pointed out that DataTables.net supports multiple column sorting: all you do is hold down the shift key and click on any second or third column, and DataTables will add that column to the sort criteria. “Well, how come it doesn’t work with the server-side solution?” Talk about the sound of one hand clapping. How about that for a flub! Sensei didn’t think of that! Then panic set in – would this introduce new complexity to the DataTablePager solution, making it too difficult to maintain a clean implementation? After some long thought it seemed that a solution could be neatly added. Before reading on, you should download the latest code to follow along.

How DataTables.Net Communicates Which Columns Are Involved in a Sort

If you recall, DataTables.Net uses a structure called aoData to communicate to the server what columns are needed, the page size, and whether a column is a data element or a client-side custom column. We covered that in the last DataTablePager post. aoData also has a convention for sorting:

bSortColumn_X=ColumnPosition

In our example we are working with the following columns:

,Name,Agent,Center,,CenterId,DealAmount

where column 0 is a custom client-side column, column 1 is Name (a mere data column), column 2 is Center (another data column), column 3 is a custom client-side column, and the remaining columns are just data columns.

If we are sorting just by Name, then aoData will contain the following:

bSortColumn_0=1

When we wish to sort by Center, then by Name, we get the following in aoData:

bSortColumn_0=2

bSortColumn_1=1

In other words, the first column we want to sort by is in position 2 (Center), and the second column (Name) is in position 1. We’ll want to record this somewhere so we can pass it to our ordering routine. aoData passes all column information to us on the server, but we’ll have to parse through the columns, check whether one or more of them is actually involved in a sort request, and, as we do, preserve the order each column takes in the sort.

SearchAndSortable Class to the Rescue

You’ll recall that we have a class called SearchAndSortable that defines how the column is used by the client. Since we iterate over all the columns in aoData, it makes sense to take this opportunity to see if any column is involved in a sort and store that information in SearchAndSortable as well. The new code for the class looks like this:


public class SearchAndSortable
{
	public string Name { get; set; }
	public int ColumnIndex { get; set; }
	public bool IsSearchable { get; set; }
	public bool IsSortable { get; set; }
	public PropertyInfo Property{ get; set; }
	public int SortOrder { get; set; }
	public bool IsCurrentlySorted { get; set; }
	public string SortDirection { get; set; }

	public SearchAndSortable(string name, int columnIndex, bool isSearchable,
								bool isSortable)
	{
		this.Name = name;
		this.ColumnIndex = columnIndex;
		this.IsSearchable = isSearchable;
		this.IsSortable = isSortable;
	}

	public SearchAndSortable() : this(string.Empty, 0, true, true) { }
}


There are three new additions:

IsCurrentlySorted – is this column included in the sort request?

SortDirection – “asc” or “desc” for ascending and descending.

SortOrder – the order of the column in the sort request: is it the first or second column in a multi-column sort?

As we walk through the column definitions, we’ll look to see if each column is involved in a sort and record what direction – ascending or descending – is required. From our previous post you’ll remember that the method PrepAOData is where we parse our column definitions. Here is the new code:


// Sort columns
this.sortKeyPrefix = aoDataList.Where(x => x.Name.StartsWith(INDIVIDUAL_SORT_KEY_PREFIX))
                               .Select(x => x.Value)
                               .ToList();

// Column list
var cols = aoDataList.Where(x => x.Name == "sColumns"
                              && string.IsNullOrEmpty(x.Value) == false)
                     .SingleOrDefault();

if (cols == null)
{
    this.columns = new List<string>();
}
else
{
    this.columns = cols.Value
                       .Split(',')
                       .ToList();
}

// Which columns are searchable and / or sortable, and which
// properties from T are identified by those columns
var properties = typeof(T).GetProperties();
int i = 0;

// Search and store all properties from T
this.columns.ForEach(col =>
{
    if (string.IsNullOrEmpty(col) == false)
    {
        var searchable = new SearchAndSortable(col, i, false, false);
        var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                                   .ToList();
        searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
        searchable.Property = properties.Where(x => x.Name == col)
                                        .SingleOrDefault();

        searchAndSortables.Add(searchable);
    }

    i++;
});

// Sort
searchAndSortables.ForEach(sortable =>
{
    var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                         .ToList();
    sortable.IsSortable = (sort[0].Value == "False") ? false : true;
    sortable.SortOrder = -1;

    // Is this item amongst the currently sorted columns?
    int order = 0;
    this.sortKeyPrefix.ForEach(keyPrefix =>
    {
        if (sortable.ColumnIndex == Convert.ToInt32(keyPrefix))
        {
            sortable.IsCurrentlySorted = true;

            // Is this the primary sort column or secondary?
            sortable.SortOrder = order;

            // Ascending or descending?
            var ascDesc = aoDataList.Where(x => x.Name == "sSortDir_" + order)
                                    .SingleOrDefault();
            if (ascDesc != null)
            {
                sortable.SortDirection = ascDesc.Value;
            }
        }

        order++;
    });
});

To sum up, we traverse all of the columns listed in sColumns. For each column we grab the PropertyInfo from our underlying object of type T; this gives us only those properties that will be displayed in the grid on the client. If the column is marked as searchable, we indicate that by setting the IsSearchable property on the SearchAndSortable class. This happens in the ForEach over this.columns in the listing above.

Next we need to determine what we can sort, so we traverse the new list of SearchAndSortables we created. DataTables will tell us whether a column can be sorted with the following convention:

bSortable_ColNumber = True

So if the column Center were to be “sortable”, aoData would contain:

bSortable_1 = True

We record the sortable state at the start of the “// Sort” block in the code listing.

Now that we know whether we can sort on this column, we have to look through the sort request and see if the column is actually involved in a sort. We do that by looking at what DataTables.Net sent to us from the client. Again, the convention is to send bSortColumn_0=1 to indicate that the first column of the sort is the second item listed in the sColumns property. aoData can contain many bSortColumn entries, so we walk through each one and record the order that column should take in the sort. That occurs where we match the column index against the keyPrefix values in the inner loop.

We also determine what the sort direction – ascending or descending – should be: inside that match, the sSortDir_ lookup gets the direction of the sort and records the value in the SearchAndSortable.

When the method PrepAOData has completed, we have a complete map of all the columns, which of them are being sorted, and their respective sort directions. All of this was sent to us from the client, and we are storing the configuration for later use.

Performing the Sort


If you can picture what we have so far, we have basically created a collection of column names with their respective PropertyInfo’s and recorded which of these properties are involved in a sort. At this stage we can query this collection and get back those properties and the order in which the sort applies.

You may already be aware that you can create a compound sort in LINQ with the following statement:


var sortedCustomers = customer.OrderBy(x => x.LastName)
                              .ThenBy(x => x.FirstName);

The trick is to run through all the properties and create that compound statement. Remember when we recorded the position of the sort as an integer? This makes it easy to handle the messy scenarios, such as when the second column in the grid is the first column of the sort. SearchAndSortable.SortOrder takes care of this for us: just order the columns by SortOrder and you’re good to go. So the code looks like the following:


var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                    .OrderBy(x => x.SortOrder)
                                    .ToList();

sorted.ForEach(sort => {
    records = records.OrderBy(sort.Name, sort.SortDirection,
                              (sort.SortOrder == 0) ? true : false);
});

In the ForEach above we are calling our extension method OrderBy in Extensions.cs. We pass the property name, the sort direction, and whether this is the first column of the sort. This last piece is important, as it determines whether we create an “OrderBy” or a “ThenBy”: when it’s the first column, you guessed it, we get “OrderBy”. Sensei found this magic in a StackOverflow post by Marc Gravell and others.
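In the spirit of that StackOverflow answer, here is a condensed sketch of what such a string-based OrderBy extension looks like; the project’s version in Extensions.cs may differ in detail:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class Extensions
{
    //  Builds OrderBy / ThenBy (or their Descending variants) dynamically
    //  from a property name, a direction, and a first-column flag.
    //  ThenBy / ThenByDescending assume the source was already ordered,
    //  i.e. this is not the first column of the sort.
    public static IQueryable<T> OrderBy<T>(this IQueryable<T> source,
        string propertyName, string direction, bool isFirstColumn)
    {
        string method = (direction == "desc")
            ? (isFirstColumn ? "OrderByDescending" : "ThenByDescending")
            : (isFirstColumn ? "OrderBy" : "ThenBy");

        var parameter = Expression.Parameter(typeof(T), "x");
        var property = Expression.Property(parameter, propertyName);
        var lambda = Expression.Lambda(property, parameter);

        var call = Expression.Call(typeof(Queryable), method,
                                   new[] { typeof(T), property.Type },
                                   source.Expression, Expression.Quote(lambda));

        return source.Provider.CreateQuery<T>(call);
    }
}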

Here is the entire method ApplySort from DataTablePager.cs. Note how we still check for the initial display of the data grid and default to the first column that is sortable.


private IQueryable<T> ApplySort(IQueryable<T> records)
{
	var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
										.OrderBy(x => x.SortOrder)
										.ToList();

	// Are we at initialization of the grid with no column selected?
	if (sorted.Count == 0)
	{
		string firstSortColumn = this.sortKeyPrefix.First();
		int firstColumn = int.Parse(firstSortColumn);

		string sortDirection = "asc";
		sortDirection = this.aoDataList.Where(x => x.Name == INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + "0")
										.Single()
										.Value
										.ToLower();

		if (string.IsNullOrEmpty(sortDirection))
		{
			sortDirection = "asc";
		}

		// Initial display will set order to the first column - column 0.
		// When column 0 is not sortable, find the first column that is.
		var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
											.SingleOrDefault();
		if (sortable == null)
		{
			sortable = this.searchAndSortables.First(x => x.IsSortable);
		}

		return records.OrderBy(sortable.Name, sortDirection, true);
	}
	else
	{
		// Traverse all columns selected for sort
		sorted.ForEach(sort => {
			records = records.OrderBy(sort.Name, sort.SortDirection,
									  (sort.SortOrder == 0) ? true : false);
		});

		return records;
	}
}

It’s All in the Setup

Test it out. Hold down the shift key and select a second column and WHAMO – multiple column sorts! Hold down the shift key and click the same column twice and KAH-BLAMO – a multiple column sort with descending order on the second column!!!

The really cool thing is that our process on the server is being directed by DataTables.net on the client. Even awesomer is that you have zero configuration on the server. Most awesome-est is that this will work with all of your domain objects: because we have used generics, we can apply this to any class in our domain. So what are you going to do with all that time you just got back?

On the quest to provide a rich user interface experience on his current project, Sensei has been experimenting with KnockoutJS by Steve Sanderson.  If you haven’t reviewed its capabilities yet, it would be well worth your while.  Not only has Steve put together a great series of tutorials, he has been dog-fooding with Knockout: the entire documentation and tutorial set is built using Knockout.  Another fine source is Knockmeout.net by Ryan Niemeyer.  Ryan is extremely active on StackOverflow answering questions regarding Knockout, and he also has a fine blog that offers very important insight on developing with this framework.

KnockoutJS is a great way to re-organize your client-side code.  The goal of this post is not to teach you KnockoutJS; rather, Sensei wants to point out other benefits – and a few pitfalls – of adopting its use.  In years past, it’s been difficult to avoid writing spaghetti code in Javascript.  Knockout forces you to adopt a new pattern of thought for organizing your UI implementation, and the result is a more maintainable code base.  In the past you may have written code similar to what Sensei used to write.  Take, for example, assigning a click event to a button or href in order to remove a record from a table:

<table>
  <thead></thead>
  <tbody>
    <tr>
      <td><a onclick="deleteRecord(1); return false;" href="#">Customer One</a></td>
      <td>1313 Galaxy Way</td>
    </tr>
    <tr>
      <td><a onclick="deleteRecord(2); return false;" href="#">Customer Two</a></td>
      <td>27 Mockingbird Lane</td>
    </tr>
  </tbody>
</table>

<script type="text/javascript">
function deleteRecord(id){
  //  Do some delete activities ...
}
</script>

You might even have gone as far as to assign the onclick event like so:

$(document).ready(function(){
  $("tr a").on('click', function(){
    //  find the customer id and call the delete record
  });
});

The proposition offered by Knockout is much different.  Many others far more conversant in design patterns and development than Sensei can offer better technical reasons why you should use Knockout.  Sensei likes the fact that it makes thinking about your code much simpler.  As in:

 
<td><a data-bind="click: deleteRecord($data)" href="#">Customer One</a></td>

Yep, you have code mixed in with your markup, but so what?  You can hunt down what’s going on and switch to your external js file to review what deleteRecord is supposed to do.  It’s as simple as that.  Speaking of js files, Knockout forces you to take a more disciplined approach to organizing your javascript.  Here is what the supporting javascript could look like:

var CustomerRecord = function(id, name){
  //  The items you want to appear in UI are wrapped with ko.observable
  this.id = ko.observable(id);
  this.name = ko.observable(name);
}

var ViewModel = function(){
  var self = this;
  //  For our demo let's create two customer records.  Normally you'll get Json from the server
  self.customers = ko.observableArray([
    new CustomerRecord(1, "Vandelay Industries"),
    new CustomerRecord(2, "Wiley Acme Associates")
  ]);

  self.deleteRecord = function(data){
    //  Simply remove the item that matches data from the array
    self.customers.remove(data);
  }
}

var vm = new ViewModel();
ko.applyBindings(vm);

That’s it.  Include this file with your markup and that’s all you have to do.  The html will change too: Knockout will allow you to produce our table by employing the following syntax:

<tbody data-bind="foreach: customers">
  <tr>
    <td><a href="#" data-bind="click: deleteRecord($data)"><span data-bind="text: id"></span></a></td>
    <td><span data-bind="text: name"></span></td>
  </tr>
</tbody>

These Aren’t the Voids You’re Looking For

So we’re all touchy-feely because we have organization to our Javascript, and that’s a good thing.  Here’s some distressing news: while Knockout is a great framework, getting the hang of it can be really hard.  Part of the reason is Javascript itself.  Because it’s a scripting language, you can end up with strange scenarios where properties appear to have the same name but different values.  You see, one of the first rules of using Knockout is that observables ARE METHODS.  You have to access them with (), as in customer.name(), and not customer.name.  In other words, in order to assign a value to an observable you must:


customer.name("Vandelay Industries");

//  Don't do this - you create another property!!

customer.name = "Vandelay Industries";

What?  Actually, as you have probably surmised, you get both .name() and .name, and this causes great confusion when you are debugging your application in Firebug.  Imagine you can see that customer.name has a value when you hit a breakpoint, but it’s not what you’re looking for.  Sensei developed a tactic to help verify that he’s not insane, and it works simply: when in doubt, go to the console in Firebug and access your observable via the ViewModel.  In our case you could issue:

vm.customer.name();

When name() doesn’t match your expectation, you’ve most likely added a property with a typo.  Check with:

vm.customer.name;

It sounds silly, but you can easily spend a half hour insisting that you’re doing the right thing when you’re really confusing a property with a method.  Furthermore, observable arrays can also be a source of frustration:

// This is not the length of the observable array. It will always be zero!!!
vm.customers.length == 0;

// You get the length with this syntax
vm.customers().length;

Knock ‘em inta tamarra, Rocky

Had Sensei known these two tips before starting, he would have saved a lot of time.  There are many others, and they are best described by Ryan Niemeyer in his post 10 things to know about Knockout from day one.  Read this post slowly; it will save you a lot of headache.  You may be familiar with jQuery and Javascript, but Knockout introduces subtle differences that will catch you off guard.  That’s not a bad thing, it’s just different than what you may be used to.  Ryan also makes great use of JSFiddle and answers most of his StackOverflow questions with examples.  Those examples are in many cases easier to learn from than the tutorials, since their scope is narrower than the instruction that Steve Sanderson gives.  It really allows you to play along as you learn.

This post focuses on getting started with RavenDB, so we’ll set aside our focus on workflows for a bit.  It’s included in the ApprovaFlow series because it is an important part of the workflow framework we’re building.  To follow along you might want to get the source code.

RavenDB is a document database that provides a flexible means for storing object graphs.  As you’ll see a document database presents you with a different set of challenges than you are normally presented when using a traditional relational database.

The storage “unit” in RavenDB is a schema-less JSON document (an example appears below).  Because you are working with documents you now have the flexibility to define each document differently; that is, you can support variations in your data without having to re-craft your data model each time you want to add a new property to a class.  You can adopt a “star” pattern for SQL as depicted here, but querying can become difficult.  Raven excels in this situation, and one such sweet spot is:

Dynamic Entities, such as user-customizable entities, entities with a large number of optional fields, etc. – Raven’s schema free nature means that you don’t have to fight a relational model to implement it.
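For illustration, a stored document is just JSON; a hypothetical Person document (not taken from the post’s project) might be persisted as:

{
  "FirstName": "Bob",
  "LastName": "Smith",
  "DepartmentId": 139
}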

Installing and Running RavenDB

The compiled binaries are easy to install.  Download the latest build and extract the files to a share.  Note that in order to run the console you are required to install Silverlight.  To start the server, navigate to the folder and double-click “Start.cmd”.  You will see a screen similar to this one once the server is up and running:

The console will launch itself and will resemble this:

How To Start Developing

In Visual Studio, reference Raven with Raven.Client.Lightweight.  For CRUD operations and querying, this will be all you need.

First you will need to connect to the document store.  It is recommended that you do this once per application.  That is accomplished with:


var documentStore = new DocumentStore {Url = "http://localhost:8080"};
documentStore.Initialize();

Procedures are carried out using the Unit of Work pattern, and in general you will be using blocks of this type:


using(var session = documentStore.OpenSession())
{
   //... Do some work
}

RavenDB will work with Plain Old C# Objects and only requires an Id property of type string.  An identity key is generated for Id during the session.  If we were to create multiple documents we would have identities created in succession.  A full discussion of the alternatives to the Id property is here.

Creating a document from your POCOs’ object graphs is very straight forward:


public class Person
{
    public string FirstName { get; set; }
	public string LastName { get; set; }
	public string Id { get; set; }
	public int DepartmentId { get; set; }
    // ...
}

var person = new Person();

using(var session = documentStore.OpenSession())
{
   session.Store(person);
   session.SaveChanges();
}

Fetching a document can be accomplished in two manners:  by Id or with a LINQ query.  Here’s how to get a document by id:


string personId = "Person/1";  //  Raven will have auto-generated a value for us.
using(var session = documentStore.OpenSession())
{
   var fetchedPerson = session.Load<Person>(personId);
   //Do some more work
}

You’ll note that there is no casting or conversion required as Raven will determine the object type and populate the properties for you.

There are naturally cases where you want to query for documents based on attributes other than the Id. Best practice dictates that we should create static indexes on our documents, as these will offer the best performance. RavenDB also has a dynamic index feature that learns from the queries fired at the server, and over time these dynamic indexes are memorialized.

For your first bout with RavenDB you can simply query the documents with LINQ.   The test code takes advantage of the dynamic index feature.  Later you will want to create static indexes based on how you will most likely retrieve the documents.  This is different than a traditional RDBMS solution, where the data is optimized for querying.  A document database is NOT.

Continuing with our example of Person documents we would use:


int departmentId = 139;

using(var session = documentStore.OpenSession())
{
   var people = session.Query<Person>()
                          .Where(x => x.DepartmentId == departmentId)
                          .ToList();
}

In the source code for this post there are more examples of querying.
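When you are ready to graduate from dynamic indexes, a static index is just a class.  Here is a minimal sketch, assuming the classic Raven.Client.Indexes API; the index name People_ByDepartment is our own invention:

public class People_ByDepartment : AbstractIndexCreationTask<Person>
{
    public People_ByDepartment()
    {
        // Index DepartmentId so queries by department no longer
        // fall back to a dynamic index.
        Map = people => from person in people
                        select new { person.DepartmentId };
    }
}

// Run once at start up to create the indexes found in the assembly.
IndexCreation.CreateIndexes(typeof(People_ByDepartment).Assembly, documentStore);

You can then point a query at the static index with session.Query&lt;Person, People_ByDepartment&gt;().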

Debugging, Troubleshooting and Dealing with Frustration

Given that this is something new and an open source project you may find yourself searching for help and more guidelines.  One thing to avail yourself of while troubleshooting is the fact that RavenDB has a REST interface, and you can validate your assumptions – or worse, confirm your errors – by using curl from the command line.  For example, to create a document via http you issue:

curl -X POST http://localhost:8080/docs -d "{ FirstName: 'Bob', LastName: 'Smith', Address: '5 Elm St' }"

Each action that takes place on the RavenDB server is displayed in a log on the server console app.  Sensei had to resort to this technique when troubleshooting some issues when he first started.  This StackOverflow question details the travails.

Another area that threw Sensei for a loop at first was the way RavenDB writes and maintains indexes.  In short, indexing is a background process, and Raven is designed to be “eventually consistent”.  That means that there can be a latency between when a change is submitted, saved, and indexed in the repository so that it can be fetched via queries.  When running tests from NUnit this code did not operate as expected, yet the console reported that the document was created:


session.Store(teamMember);

int posttestCount = session.Query<TeamMember>()
                .Count();

According to the documentation you can overcome this inconsistency by declaring that you are willing to wait until RavenDB has completed its current write operation.   This code will get you the expected results:


int posttestCount = session.Query<TeamMember>()
              .Customize(x => x.WaitForNonStaleResults())
              .Count();

Depending on the number of tests you write you may wish to run RavenDB in Embedded mode for faster results.  This might prove useful for automated testing and builds.  The source code provided in this post does NOT use embedded mode; rather, you need your server running, as this gives you the opportunity to inspect documents and acclimate yourself to the database.
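If you do go the embedded route, a minimal sketch looks something like this, assuming the Raven.Client.Embedded assembly is referenced; RunInMemory keeps everything out of the file system, which suits throwaway test runs:

var documentStore = new EmbeddableDocumentStore { RunInMemory = true };
documentStore.Initialize();

// Sessions are opened exactly as with the server-based DocumentStore.
using(var session = documentStore.OpenSession())
{
   //... run your tests
}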

There is much more that you can do with RavenDB, such as creating indexes across documents, assigning security to individual documents, and much more.  This primer should be enough to get you started.  Next post we’ll see how RavenDB will fit into the ApprovaFlow framework.  Grab the source, play around and get ready for the next exciting episode.

 

It’s been a while, and as usual Sensei has started something with such bravado and discovered that life offers more bluster and pounding than even he can anticipate.  Hopefully you haven’t given up on the series, ’cause Sensei hasn’t.  Hell, ApprovaFlow is constantly on the forefront, even though it appears that he’s taken a good powder.

Let’s recap our goals and talk about philosophy and direction.  In this long silence a few additional considerations have taken precedence, and this is a good opportunity to assess goals and re-align the development efforts.  In the end ApprovaFlow will:

• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

• Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

• Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  Discussed in ApprovaFlow: Using the Pipe and Filter Pattern to Build a Workflow Processor.

• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.

• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.

• Use one .aspx page to process user input for any type of workflow.

• Provide the ability to roll your own customizations to the front end or back end of your application.

The astute members of the audience will no doubt say “What about the technical objectives, like how are you going to store all the workflow data?  In flat files?  Will you give me alternatives for storage?  How will I create the workflows, by using Notepad?” Indeed, Sensei has pondered these issues as well and has accumulated a fair amount of failed experiments, with some being quite interesting.   Given time these little experiments may become posts as well, since there are interesting things to learn from these failures.

What ApprovaFlow Will Need To Provide:  Workflow Storage

The biggest issue is storage.  The point of using Stateless was that we wanted flexibility.  Recall that the state of our state machine can be represented with a mere integer or string, which makes it pretty easy to store in a database or a document.  While you could map the Step and Workflow classes to tables in SQL, our domain is using JSON, so it makes sense to gravitate to a storage solution that will easily support that format.  ApprovaFlow will use RavenDB as the document database, but will provide the opportunity for you to use a different solution if you wish.  You’ll find that RavenDB quite readily provides an elegant document storage format for our workflows.

As an aside, Sensei experimented with a great alternative to the NoSQL solutions called SisoDb.  This open source project provides you the ability to store your object graphs in SQL Server.  Time permitting Sensei will share some of his adventures with you regarding this neat project.

What ApprovaFlow Will Need to Provide:  Authorization of Actions

While Sensei was off in the weeds learning about RavenDB he discovered that Ayende created a fantastic mechanism for authorizing user actions on documents.  This authorization of activity can be as granular as denying / allowing updates to occur based on an operation.

Since we want to adhere to principles of flexibility the Authorization features will be implemented as a plug-in, so if you wish to roll your own mechanisms to govern workflow approvals you will be free to do so.

What ApprovaFlow Will Need to Provide:  Admin Tools

Yep.  Sensei is sick of using Notepad to create JSON documents as well.  We want to be able to create the states, the triggers and the target states and save.  We’ll want to assign the filters to specific states and save.  No more text fiddling. Period.  As Sensei is thinking about this, it seems that another pipeline can be created for administration.  Luckily we have a plug-in architecture so this should be rather straight forward.

Summing It All Up

These are really important things to consider, and as much as Sensei hates changing goals in mid stream the capabilities discussed above can make life much easier while implementing a workflow system.  In making the decision to use RavenDB the thought that “a storage solution should not shape the solution domain” kept raising its ugly maw.  But, so what.   We want to finish something, and admittedly this has been a challenge – just look at the lag between posts if you need a reminder.  If Sensei decided to include an IoC container just to remain “loosely” coupled to document storage we’d get nowhere.  Would you really want to read those posts?  How boring.  Besides, Sensei doesn’t know how to do all that stuff – gonna stick to the stuff he thinks he knows.  Or at least the stuff he can fake.

This is the fourth in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net.  Source code for this post is here.

[gigya src="http://listen.grooveshark.com/songWidget.swf" width="250" height="40" flashvars="hostname=cowbell.grooveshark.com&songID=1211234&style=metal&p=0" wmode="window"]

Last Time on ApprovaFlow

In the previous post we discussed how the Pipe and Filter pattern facilitated a robust mechanism for executing tasks prior to and after a transition is completed by the workflow state machine.  This accomplished our third goal, and to date we have completed:

• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

• Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

• Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  Discussed in ApprovaFlow: Using the Pipe and Filter Pattern to Build a Workflow Processor.

These goals remain:

• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.

• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.

• Use one .aspx page to process user input for any type of workflow.

• Provide the ability to roll your own customizations to the front end or back end of your application.

It’s the Small Changes After You Go Live That Upset You

The goal we’ll focus on next is Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.  It’s the small upsetters that your users will think up, lurking just around the corner, that will keep you in a constant redeployment cycle. If we implement a plug-in system, then we can prevent new features from breaking the current production system. Implementing these changes in isolation will lead to faster testing, validation and happier users.

We lucked out, as our implementation of the Pipe and Filter pattern forced us to create objects with finite functionality.  If you recall, each step in our workflow chain was implemented as a filter derived from FilterBase, and this lends itself nicely to creating plug-ins.  The Pipe and Filter pattern forces us to have a filter for each unique action we wish to carry out.  To save data we have a SaveData filter, to validate that a user can supply a Trigger we have the ValidateUserTrigger, and so on.

“Great, Sensei, but aren’t we still constrained by the fact that we have to recompile and deploy any time we add new filters?  And, if I have to do that, why bother with the pattern in the first place?”

Well, we can easily reduce the need for re-deploying the application through the use of a plug-in system where we read assemblies from a share and interrogate them by searching for a particular object type on application start up.  Each new feature will be a new filter.  This means you will be working with a small project that references ApprovaFlow to create new filters without disturbing the existing architecture.   We’ll also create a manifest of approved plug-ins so that we can control what is used and institute a little security, since we wouldn’t want any plug-in to be introduced surreptitiously.

Plug-in Implementation

The class FilterRegistry will perform the process of reading a share, fetching the objects derived from FilterBase, and registering these components just like we do with our system components.  There are a few additions since the last version, as we now need to read and store the manifest for later comparison with the plug-ins.  The new method ReadManifest takes care of this new task:

<pre><code>private void ReadManifest()
{
    string manifestSource = ConfigurationManager.AppSettings["ManifestSource"].ToString();

    Enforce.That(string.IsNullOrEmpty(manifestSource) == false,
        "FilterRegistry.ReadManifest - ManifestSource can not be null");

    var fileInfo = new FileInfo(manifestSource);

    if (fileInfo.Exists == false)
    {
        throw new ApplicationException("FilterRegistry.ReadManifest - File not found");
    }

    StreamReader sr = fileInfo.OpenText();
    string json = sr.ReadToEnd();
    sr.Close();

    this.approvedFilters = JsonConvert.DeserializeObject<List<FilterDefinition>>(json);
}

</code></pre>

The manifest is merely a serialized list of FilterDefinitions. This is de-serialized into a list of approved filters. With the approved list, the method LoadPlugIn performs the action of reading the share and matching the FullName of the object type between the manifest entries and the types in the assembly file:

<pre><code>

public void LoadPlugIn(string source)
{
    Enforce.That(string.IsNullOrEmpty(source) == false,
        "PlugInLoader.Load - source can not be null");

    AppDomain appDomain = AppDomain.CurrentDomain;
    var assembly = Assembly.LoadFrom(source);

    var types = assembly.GetTypes().ToList();

    types.ForEach(type =>
    {
        // Is the type from the assembly registered in the manifest?
        var registerFilterDef = this.approvedFilters
                                    .Where(app => app.TypeFullName == type.FullName)
                                    .SingleOrDefault();

        if (registerFilterDef != null)
        {
            object obj = Activator.CreateInstance(type);
            var filterDef = new FilterDefinition();
            filterDef.Name = obj.ToString();
            filterDef.FilterCategory = registerFilterDef.FilterCategory;
            filterDef.FilterType = type;
            filterDef.TypeFullName = type.FullName;
            filterDef.Filter = AddCreateFilter(filterDef);

            this.systemFilters.Add(filterDef);
        }
    });
}

</code></pre>

That’s it. We can now control what assemblies are included in our plug-in system.  Later we’ll create a tool that will help us create the manifest so we do not have to manage it by hand.
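In the meantime a hand-written manifest is just a JSON list of FilterDefinition entries. A minimal sketch might look like this; the names and category values are purely illustrative:

<pre><code>
[
  { "Name" : "Plugins.CaptainUnfitForCommandFilter",
    "FilterCategory" : "PostProcess",
    "TypeFullName" : "Plugins.CaptainUnfitForCommandFilter" }
]
</code></pre>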

What We Can Do with this New Functionality

Let’s turn to our sample workflow to see what possibilities we can develop.  The test CanPromoteRedShirtOffLandingParty from the class WorkflowScenarios displays the capability of our workflow.  First let’s review our workflow scenario.  We have created a workflow for the Starship Enterprise to allow members of a landing party to request to be left out of the mission.  Basically there is only one way to get out of landing party duty and that is if Kirk says it’s okay.  Here are the workflow’s State, Trigger and Target State combinations:

State Trigger Target State
RequestPromotionForm Complete FirstOfficerReview
FirstOfficerReview RequestInfo RequestPromotionForm
FirstOfficerReview Deny PromotionDenied
FirstOfficerReview Approve CaptainApproval
CaptainApproval OfficerJustify FirstOfficerReview
CaptainApproval Deny PromotionDenied
CaptainApproval Approve PromotedOffLandingParty

Recalling the plots from Star Trek, there were times when the medical officer could declare the commanding officer unfit for duty. Since the Enterprise was originally equipped with our workflow, we want to make just a small addition – not a modification – and give McCoy the ability to allow a red shirt to opt out of landing party duty.

Here’s where our plug-in system comes in handy.  Instead of adding more states and/or branches to our workflow we’ll check for certain conditions when Kirk makes his decisions, and execute actions accordingly.  In order to help out McCoy the following filter is created in a separate project:

<pre><code>

public class CaptainUnfitForCommandFilter : FilterBase<Step>
{
    protected override Step Process(Step input)
    {
        if (input.CanProcess && input.State == "CaptainApproval")
        {
            bool kirkInfected = (bool)input.Parameters["KirkInfected"];

            if (kirkInfected && input.Answer == "Deny")
            {
                input.Parameters.Add("MedicalOverride", true);
                input.Parameters.Add("StarfleetEmail", true);
                input.ErrorList.Add("Medical Override of Command");
                input.CanProcess = false;
            }
        }

        return input;
    }
}

</code></pre>

This plug-in is simple: it checks that the state is CaptainApproval, and when the answer is “Deny” and Kirk has been infected, it sets the MedicalOverride flag and sends Starfleet an email.

The class WorkflowScenarioTest.cs has the method CanAllowMcCoyToIssueUnfitForDuty() that demonstrates how the workflow will execute. We simply add the name of the plug-in to our list of post transition filters:

<pre><code>
string postFilterNames = "MorePlugins.TransporterRepairFilter;Plugins.CaptainUnfitForCommandFilter;SaveDataFilter;";
</code></pre>

This portion of code uses the plug-in:

<pre><code>

// Captain Kirk denies request, but McCoy issues unfit for command
parameters.Add("KirkInfected", true);

step.Answer = "Deny";
step.AnsweredBy = "Kirk";
step.Participants = "Kirk";
step.State = newState;

processor = new WorkflowProcessor(step, filterRegistry, workflow);
newState = processor.ConfigurePipeline(preFilterNames, postFilterNames)
                    .ConfigureStateMachine()
                    .ProcessAnswer()
                    .GetCurrentState();

// Medical override issued and email to Starfleet generated
bool medicalOverride = (bool)parameters["MedicalOverride"];
bool emailSent = (bool)parameters["StarfleetEmail"];

Assert.IsTrue(medicalOverride);
Assert.IsTrue(emailSent);
</code></pre>

Now you don’t have to hesitate with paranoia each time you need to introduce a variation into your workflows. No more small upsetters lurking around the corner. Plus you can deliver these changes faster to your biggest fan, your customer. Source code is here.   Run through the tests and experiment for yourself.

This is the third entry in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net.  Source code for this post is here.

[gigya src="http://listen.grooveshark.com/songWidget.swf" width="250" height="40" flashvars="hostname=cowbell.grooveshark.com&songID=25367265&style=metal&p=0" wmode="window"]

What We’ve Accomplished Thus Far

In the last post we discussed how Stateless makes creating a lean workflow engine possible, and we saw that we were able to achieve two of our overall goals for ApprovaFlow.  Here’s what we accomplished:

• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.

So we have these goals left:

• Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

Our next goal will be Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  We’ll use the Pipe and Filter pattern to simplify the processing, and we’ll see that this approach not only streamlines how you handle variation in tasks, but also provides a clean method for extending our application’s abilities.


The advantage of breaking down the activities of a process is that you can create a series of interchangeable actions.  There may be some cases where you want to re-order the operations at runtime, and you can do so easily when the actions are individual components.

Before we proceed with applying the Pipe and Filter pattern to our solution, we need to establish some nomenclature for our workflow processing.  The following chart lays out the vocabulary we’ll use for the rest of the series.

Term: Definition
State: A stage of a workflow.
Trigger: A message that tells the workflow how to change states.  If the state is “Phone Ringing” and the trigger is “Answer Phone” the new state for the phone would be “Off Hook”.
StateConfig: A StateConfig defines a pathway or transition from one state to another.  It is comprised of a State, the Trigger and the Target State.
Step: A Step contains the workflow’s current State.  In the course of your workflow you may have many of the same type of steps differentiated by date and time.  In other words, when your workflow has looping capability, the workflow step for a state may be issued many times.
Answer: The Step asks a question, waiting for the user response.  The answer the user provides is the trigger that will start the transition from one state to another.  The Answer becomes the Trigger that will change the State.
Workflow: A series of Steps comprised of States, Triggers and their respective transitions expressed as a series of StateConfigs.  Think of this as the definition of a process.
Workflow Instance: A running workflow.  The Steps of the Workflow Instance are governed by how the Steps are defined by a Workflow.

Essentially a framework for providing an extensible workflow system boils down to answering the following questions asked in this order:
• Is the user authorized to provide an Answer to trigger a change to the step’s State?
• Is a special data set required for this particular State that is not part of the Step properties?
• Is the data provided from the user sufficient / valid for triggering a transition in the Workflow Step’s State?
• Are there actions to be performed such as saving special data?
• Can the system execute custom actions based on the State’s Trigger?

This looks very similar to the Pipe and Filter pattern.  Every time a workflow processes a trigger, the questions we asked above must be answered.  Each question could be considered a filter in the pipe and filter scenario.

The five questions above become the basis for our workflow processor components.  For this post we’ll assume that all data will be simply fetched then saved with no special processing.  We’ll also assume that a Workflow Step is considered to be valid when the following elements are correctly supplied:

<pre><code>

public bool IsValidForWorkflowTransition()
{
    return this.Enforce("Step", true)
               .When("AnsweredBy", Janga.Validation.Compare.NotEqual, string.Empty)
               .When("Answer", Janga.Validation.Compare.NotEqual, string.Empty)
               .When("State", Janga.Validation.Compare.NotEqual, string.Empty)
               .When("WorkflowInstanceId", Janga.Validation.Compare.NotEqual, string.Empty)
               .IsValid;
}

public bool IsUserValidParticipant()
{
    return this.Enforce("Step", true)
               .When("Participants", Janga.Validation.Compare.Contains, this.AnsweredBy)
               .IsValid;
}
</code></pre>

Our Workflow Processor will function in accordance with the Pipe and Filter pattern where no matter what type of workflow instance we wish to process, the questions that we listed above will be answered.  Later we will return to discuss points of where the workflow can execute actions respective to the workflow’s definition.

Workflow Processor Code In Depth

Well, how do we configure a Workflow Processor?  In other words, we want to process an actual workflow, but how will we know the workflow type and what to do?  Some of the configuration steps were previewed in Simple Workflows With ApprovaFlow and Stateless, and the same principles apply here with the Configure method.  Collect the States, the Triggers and the StateConfigs, load them into Stateless along with the current state, and you are ready to accept or reject the Trigger for the next State.  The Workflow Processor will conduct these steps, and here is the code:

<pre><code>public WorkflowProcessor ConfigureStateMachine()
{
    Enforce.That(string.IsNullOrEmpty(this.step.State) == false,
        "WorkflowProcessor.Configure - step.State can not be empty");

    this.stateMachine = new StateMachine<string, string>(this.step.State);

    // Get a distinct list of states with a trigger from state configuration
    // "State => Trigger => TargetState"
    var states = this.workflow.StateConfigs.AsQueryable()
                     .Select(x => x.State)
                     .Distinct()
                     .ToList();

    // Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = this.workflow.StateConfigs.AsQueryable()
                           .Where(config => config.State == state)
                           .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
                           .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });

    return this;
}

</code></pre>

The Workflow Processor will need to know the current state of a workflow instance, the answer supplied, who supplied the answer, as well as any parameters that the filters will need for fetching special data. This is contained in the class Step.cs:

<pre><code>

#region Properties

public string WorkflowInstanceId { get; set; }
public string WorkflowId { get; set; }
public string StepId { get; set; }
public string State { get; set; }
public string PreviousState { get; set; }
public string Answer { get; set; }
public DateTime Created { get; set; }
public string AnsweredBy { get; set; }
public string Participants { get; set; }

public List<string> ErrorList;
public bool CanProcess { get; set; }
public IDictionary<string, object> Parameters { get; set; }

#endregion

#region Constructors

public Step() : this(string.Empty, string.Empty, string.Empty, string.Empty,
                     string.Empty, new DateTime(), string.Empty, string.Empty,
                     new Dictionary<string, object>())
{ }

public Step(string workflowInstanceId, string stepId, string state, string previousState,
            string answer, DateTime created, string answeredBy, string participants,
            Dictionary<string, object> parameters)
{
    this.WorkflowInstanceId = workflowInstanceId;
    this.StepId = stepId;
    this.State = state;
    this.PreviousState = previousState;
    this.Answer = answer;
    this.Created = created;
    this.AnsweredBy = answeredBy;
    this.Participants = participants;

    this.ErrorList = new List<string>();
    this.Parameters = parameters;
}

#endregion

</code></pre>

Our goal with the Workflow Processor is to accept the user’s answer, process actions, and create the next Step based on the new State, all in one pass.  We will create a pipeline of actions that will always be invoked.  Each action or “filter” will be a component that performs an individual task, such as determining if the step is answered by the correct user.  Each filter will point to the subsequent filter in the pipeline, and the succession of the filters can change easily if we see fit.  All that is needed is to add the filters to the pipeline in the order we want.  Here is the class schema for the Pipe and Filter processing:

We’ll quickly find that information regarding the result of an action or the condition of a Step will need to be accessible to each of the filters.  The class Step is the natural place to store this information, so we will include a property CanProcess to indicate that a filter should be invoked, as well as a List<string> to act as an error log.  This log can be passed back to the client to communicate any errors to the user.  Note that the Step class has the Dictionary property named “Parameters” that allows a filter to pass data on to the next filter in the sequence.

Setting Up the Pipeline

The sequence of filter execution is controlled by the order in which the filters are registered.  The class Pipeline is responsible for registering and executing the chain of filters.  Here is the method Register, which accepts a filter and retains it for future processing.
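A minimal sketch of Register, assuming Pipeline is generic over the type flowing through the filters and simply tracks the head of the chain:

<pre><code>
public class Pipeline<T>
{
    private FilterBase<T> root;
    private List<string> filterNames = new List<string>();

    public Pipeline<T> Register(FilterBase<T> filter)
    {
        // Record the name so the pipeline can be interrogated later.
        this.filterNames.Add(filter.GetType().Name);

        // The first filter becomes the head of the chain; subsequent
        // filters are handed to FilterBase.Register, which walks the
        // chain and appends at the end.
        if (this.root == null)
        {
            this.root = filter;
        }
        else
        {
            this.root.Register(filter);
        }

        return this;
    }
}
</code></pre>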

We also record the name of the filter so that we may interrogate the pipeline should we want to know if a filter has already been registered.

Pipeline.Register returns a reference to itself, so we can chain together commands fluently:

<pre><code>
pipeline.Register(new ValidParticipantFilter())
.Register(new SaveDataFilter());

</code></pre>

The class FilterBase is the foundation of our filter components.  As stated earlier, each component will point to the subsequent filter in the filter chain.  You’ll note that the class also has a Register method.  This takes on the task of pointing the current filter to the next, and this method is called by the Pipeline as it registers all of the filters. Here is FilterBase:

<pre><code>

public abstract class FilterBase<T> : IFilter<T>
{
    private IFilter<T> next;

    protected abstract T Process(T input);

    public T Execute(T input)
    {
        T val = Process(input);

        if (this.next != null)
        {
            val = this.next.Execute(val);
        }

        return val;
    }

    public void Register(IFilter<T> filter)
    {
        if (this.next == null)
        {
            this.next = filter;
        }
        else
        {
            this.next.Register(filter);
        }
    }
}

</code></pre>

The method Execute accepts input of type T, and in the Workflow Processor instance this will be Step.  Basically the Execute method is a wrapper, as we call the abstract method Process.  Process will be overridden in each filter, and this will contain the logic specific to the tasks that will be performed.  The code for a filter is quite simple:

<pre><code>

public class ValidParticipantFilter : FilterBase<Step>
{
    protected override Step Process(Step input)
    {
        if (input.CanProcess)
        {
            input.Parameters["ValidFired"] = true;
            input.Parameters["FilterOrder"] += "ValidParticipantFilter;";

            input.CanProcess = input.IsUserValidParticipant();

            if (input.CanProcess == false)
            {
                input.ErrorList.Add("Invalid Participant - " + input.AnsweredBy);
            }
        }

        return input;
    }
}

</code></pre>

Here we check to see if we can process, then perform specific actions if appropriate.  Given that the filters have no knowledge of each other, we can see that they can be executed in any order.  In other words, you could have a Pipeline with filters Step1, Step2, Step3, and you could configure a different pipeline to execute Step3, Step1, and Step2, as the sketch below shows.
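A minimal sketch of that idea, assuming the generic Pipeline from above and re-using filter names that appear in this post:

<pre><code>
// Same filters, two different orders of execution
var fetchFirst = new Pipeline<Step>();
fetchFirst.Register(new FetchDataFilter())
          .Register(new ValidParticipantFilter())
          .Register(new SaveDataFilter());

var saveFirst = new Pipeline<Step>();
saveFirst.Register(new SaveDataFilter())
         .Register(new FetchDataFilter())
         .Register(new ValidParticipantFilter());
</code></pre>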

FilterRegistry Organizes Your Filters

Because we want to be able to use our filters in different successions we’ll need to keep a registry of what is available to use, and provide the ability to look up or query different filters depending on our processing needs.  This registry will be created on application start up and will contain all objects of type FilterBase.  Later we’ll add the ability for the registry to load assemblies from a share, so that you can add other filters as simple plug-ins.  Information about each filter is retained in a class FilterDefinition, and the FilterRegistry is merely a glorified list of FilterDefinitions. When we want to create a pipeline of filters we will want to instantiate new copies.  Using Expressions we can create Functions that will be stored with our definition for each filter type.  Here is FilterDefinition:

public class FilterDefinition
{
    public string Name { get; set; }
    public string FilterCategory { get; set; }
    public Type FilterType { get; set; }
    public Func<FilterBase<Step>> Filter { get; set; }

    public FilterDefinition() { }
}

We’ll invoke the compiled delegate at runtime to create our filter.  The method AddCreateFilter handles this:

private Func<FilterBase<Step>> AddCreateFilter(FilterDefinition filterDef)
{
    var body = Expression.MemberInit(Expression.New(filterDef.FilterType));
    return Expression.Lambda<Func<FilterBase<Step>>>(body, null).Compile();
}

FilterRegistry is meant to be run once at start up so that all filters are registered and ready to use. You can imagine how slow it could become if every time you process a Workflow Step you must interrogate all the assemblies.

Once your FilterRegistry has all assemblies registered you can query and create new combinations with the method GetFilters:

public IEnumerable<FilterBase<Step>> GetFilters(string filterNames)
{
    Enforce.That(string.IsNullOrEmpty(filterNames) == false,
        "FilterRegistry.GetFilters - filterNames can not be null");

    var returnFilters = new List<FilterBase<Step>>();
    var names = filterNames.Split(';').ToList();

    names.ForEach(name =>
    {
        var filter = this.filters.Where(x => x.Name == name)
                         .SingleOrDefault();

        if (filter != null)
        {
            returnFilters.Add(filter.Filter.Invoke());
        }
    });

    return returnFilters;
}

Pipeline can accept a list of filters along with the string that represents the order of execution.  The method RegisterFromList accepts a reference to the FilterRegistry along with the names of the filters you want to use; a sketch follows.
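A minimal sketch of RegisterFromList, assuming the generic Pipeline from earlier and the GetFilters method above:

<pre><code>
public Pipeline<Step> RegisterFromList(string filterNames, FilterRegistry registry)
{
    // GetFilters returns freshly created filters in the order the
    // names were supplied, so registering them in sequence preserves
    // the requested order of execution.
    registry.GetFilters(filterNames)
            .ToList()
            .ForEach(filter => this.Register(filter));

    return this;
}
</code></pre>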

In the case of the Workflow Processor, we need to divide our filters into pre-trigger and post-trigger activities. Referring back to the five questions that our processor asks, questions 1-3 must be answered before we attempt to transition the Workflow State, while questions 4-5 must be answered after the transition has succeeded. The method ConfigurePipeline in WorkflowProcessor.cs accomplishes this task:

public WorkflowProcessor ConfigurePipeline(string preProcessFilterNames, string postProcessFilterNames)
{
    Enforce.That(string.IsNullOrEmpty(preProcessFilterNames) == false,
        "WorkflowProcessor.Configure - preProcessFilterNames can not be null");

    Enforce.That(string.IsNullOrEmpty(postProcessFilterNames) == false,
        "WorkflowProcessor.Configure - postProcessFilterNames can not be null");

    var actionWrapper = new ActionWrapperFilter(this.ExecuteTriggerFilter);

    this.pipeline.RegisterFromList(preProcessFilterNames, this.filterRegistry)
                 .Register(actionWrapper)
                 .RegisterFromList(postProcessFilterNames, this.filterRegistry);

    return this;
}

Putting It all Together

A lot of talk and theory, so how does this all fit together?  The test class WorkflowScenarioTests illustrates how our processor works.  We are creating a workflow that implements the process for a Red Shirt requesting a promotion off a landing party.  You may recall that the dude wearing the red shirt usually got killed within the first few minutes of Star Trek, so this workflow will help those poor saps get off the death list.  The configuration for the Workflow is contained within the file RedShirtPromotion.json.  There are a few simple rules that we want to enforce with the Workflow.  For one, Spock must review the Red Shirt request, but Kirk will have the final say.

Here is a sample from the class WorkflowScenarioTests.cs:

string source = @"F:\vs10dev\ApprovaFlow\SimpleWorkflowProcessor\TestSuite\TestData\RedShirtPromotion.json";
string preFilterNames = "FetchDataFilter;ValidParticipantFilter;";
string postFilterNames = "SaveDataFilter";

var workflow = DeserializeWorkflow(source);

var parameters = new Dictionary<string, object>();
parameters.Add("FilterOrder", string.Empty);
parameters.Add("FetchDataFired", false);
parameters.Add("SaveDataFired", false);
parameters.Add("ValidFired", false);

var step = new Step("13", "12", "RequestPromotionForm", "",
"Complete", DateTime.Now, "RedShirtGuy", "Data;RedShirtGuy",
parameters);
step.CanProcess = true;

var filterRegistry = new FilterRegistry();

var processor = new WorkflowProcessor(step, filterRegistry, workflow);
string newState = processor.ConfigurePipeline(preFilterNames, postFilterNames)
.ConfigureStateMachine()
.ProcessAnswer()
.GetCurrentState();

Assert.AreEqual("FirstOfficerReview", newState);

Study the tests.  We’ve covered a lot together and admittedly there is a lot to swallow in this post.  In our next episode we’ll look at how the Pipe and Filter pattern can help us with extending our workflow processor’s capability without causing us a lot of pain.  Here’s the source code.  Enjoy and check back soon for our next installment.  Sensei will let you take it on out with this groovy theme (click play).

[gigya src="http://listen.grooveshark.com/songWidget.swf" width="250" height="40" flashvars="hostname=cowbell.grooveshark.com&songID=21973717&style=metal&p=0" wmode="window"]

This is the second in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net. Source code for this post is here.

Last time we laid out our goals for a simple workflow engine, ApprovaFlow, with the following objectives:
• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer, string, etc. Quickly fetch the state of a workflow.
• Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

The fulcrum point of all we have set out to do with ApprovaFlow is a state machine that will present a state and accept answers supplied by the users. One of Sensei’s misgivings about Windows Workflow is that it is such a behemoth when all you want to implement is a state machine.
Stateless, created by Nicholas Blumhardt, is a shining example of adhering to the rule of “necessary and sufficient”. By using generics, Stateless allows you to create a state machine where the State and Trigger can be represented by an integer, string, double, or enum – say, this sounds like it fulfills our goal:

• Allow the state of a workflow to be persisted as an integer, string. Quickly fetch the state of a workflow.
Stateless constructs a state machine with the following syntax:

var statemachine =
       new StateMachine<TState, TTrigger>(TState currentState);

For our discussion we will create a state machine that will process a request for promotion workflow. We’ll use:

var statemachine =
       new StateMachine<string, string>(string currentState);

This could very easily take the form of

<int, int>

and will depend on your preferences. Regardless of your choice, if the current state is represented by a primitive like int or string, you can just fetch that from a database or a repository and your state machine is loaded with the current state. Contrast that with WF, where you have multiple projects and confusing nomenclature to learn. Stateless just stays out of our way.
Let’s lay out our request for promotion workflow. Here is our state machine represented in English:

Step: Request Promotion Form
  Answer => Complete
  Next Step => Manager Review

Step: Manager Review
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Request Info
  Next Step => Request Promotion Form
  Answer => Approve
  Next Step => Vice President Approve

Step: Vice President Approve
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Manager Justify
  Next Step => Manager Review
  Answer => Approve
  Next Step => Promoted

Step: Promotion Denied
Step: Promoted

Remember the goal Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties? We are very close to achieving that goal. If we substitute “Step” with “State” and “Answer” with “Trigger”, we have a model that matches how Stateless configures a state machine:

var statemachine = new StateMachine<string, string>(startState);

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
               .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
               .Permit("RequestInfo", "RequestPromotionForm")
               .Permit("Deny", "PromotionDenied")
               .Permit("Approve", "VicePresidentApprove");

Clearly you will not show the code to your business partners or end users, but a simple chart like this should not make anyone’s eyes glaze over:

State: Request Promotion Form
  Trigger => Complete
  Target State => Manager Review

Before we move on you may want to study the test in the file SimpleStateless.cs. Here configuring the state machine and advancing from state to state is laid out for you:

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
            .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
            .Permit("RequestInfo", "RequestPromotionForm")
            .Permit("Deny", "PromotionDenied")
            .Permit("Approve", "VicePresidentApprove");

//  Vice President state configuration
statemachine.Configure("VicePresidentApprove")
            .Permit("ManagerJustify", "ManagerReview")
            .Permit("Deny", "PromotionDenied")
            .Permit("Approve", "Promoted");

//  Tests
Assert.AreEqual(startState, statemachine.State);

//  Move to next state
statemachine.Fire("Complete");
Assert.IsTrue(statemachine.IsInState("ManagerReview"));

statemachine.Fire("Deny");
Assert.IsTrue(statemachine.IsInState("PromotionDenied"));

The next question that comes to mind is how to represent the various States, Triggers and State configurations as data. Our mission on this project is to adhere to simplicity. One way to represent a Stateless state machine is with JSON:

{WorkflowType : "RequestPromotion",
  States : [{Name : "RequestPromotionForm", DisplayName : "Request Promotion Form"},
    {Name : "ManagerReview", DisplayName : "Manager Review"},
    {Name : "VicePresidentApprove", DisplayName : "Vice President Approve"},
    {Name : "PromotionDenied", DisplayName : "Promotion Denied"},
    {Name : "Promoted", DisplayName : "Promoted"}
    ],
  Triggers : [{Name : "Complete", DisplayName : "Complete"},
     {Name : "Approve", DisplayName : "Approve"},
     {Name : "RequestInfo", DisplayName : "Request Info"},
     {Name : "ManagerJustify", DisplayName : "Manager Justify"},
     {Name : "Deny", DisplayName : "Deny"}
  ],
StateConfigs : [{State : "RequestPromotionForm", Trigger : "Complete", TargetState : "ManagerReview"},
     {State : "ManagerReview", Trigger : "RequestInfo", TargetState : "RequestPromotionForm"},
     {State : "ManagerReview", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "ManagerReview", Trigger : "Approve", TargetState : "VicePresidentApprove"},
     {State : "VicePresidentApprove", Trigger : "ManagerJustify", TargetState : "ManagerReview"},
     {State : "VicePresidentApprove", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "VicePresidentApprove", Trigger : "Approve", TargetState : "Promoted"}
  ]
}

As you can see we are storing all States and all Triggers with their display names. This will allow you some flexibility with UI screens and reports. Each rule for transitioning a state to another is stored in the StateConfigs node. Here we are simply representing our chart that we created above as JSON.

Since we have a standard way of representing a workflow with JSON, de-serializing this definition to objects is straight forward. Here are the corresponding classes that define a state machine:

public class WorkflowDefinition
{
        public string WorkflowType { get; set; }
        public List<State> States { get; set; }
        public List<Trigger> Triggers { get; set; }
        public List<StateConfig> StateConfigs { get; set; }

        public WorkflowDefinition() { }
}

public class State
{
        public string Name { get; set; }
        public string DisplayName { get; set; }
}

public class Trigger
{
        public string Name { get; set; }
        public string DisplayName { get; set; }

        public Trigger() { }
}
public class StateConfig
{
        public string State { get; set; }
        public string Trigger { get; set; }
        public string TargetState { get; set; }

        public StateConfig() { }
}

We’ll close out this post with an example that will de-serialize our state machine definition and allow us to respond to the triggers that we supply. Basically it will be a rudimentary workflow. RequestPromotion.cs will be the workflow processor. The method Configure is where we will perform the de-serialization, and the process is quite straight forward:

  1. Deserialize the States
  2. Deserialize the Triggers
  3. Deserialize the StateConfigs that contain the transitions from state to state
  4. For every StateConfig, configure the state machine.

Here’s the code:

public void Configure()
{
    Enforce.That((string.IsNullOrEmpty(source) == false),
                            "RequestPromotion.Configure - source is null");

    string json = GetJson(source);

    var workflowDefinition = JsonConvert.DeserializeObject<WorkflowDefinition>(json);

    Enforce.That((string.IsNullOrEmpty(startState) == false),
                            "RequestPromotion.Configure - startState is null");

    this.stateMachine = new StateMachine<string, string>(startState);

    //  Get a distinct list of states with a trigger from state configuration
    //  "State => Trigger => TargetState"
    var states = workflowDefinition.StateConfigs.AsQueryable()
                                    .Select(x => x.State)
                                    .Distinct()
                                    .ToList();

    //  Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = workflowDefinition.StateConfigs.AsQueryable()
                                   .Where(config => config.State == state)
                                   .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
                                   .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });
}

And we advance the workflow with this method:

public void ProgressToNextState(string trigger)
{
    Enforce.That((string.IsNullOrEmpty(trigger) == false),
        "RequestPromotion.ProgressToNextState - trigger is null");

    this.stateMachine.Fire(trigger);
}
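Putting the two together, driving the workflow might look something like this sketch; the constructor arguments are assumptions based on the fields used above:

var requestPromotion = new RequestPromotion(source, "RequestPromotionForm");
requestPromotion.Configure();

//  "Complete" is a valid trigger for RequestPromotionForm, so this
//  advances the state machine to ManagerReview.
requestPromotion.ProgressToNextState("Complete");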

The class RequestPromotionTests.cs illustrates how this works.

We have seen how we can fulfill the objectives laid out for ApprovaFlow and have covered a significant part of the functionality that Stateless will provide for our workflow engine.   Here is the source code.

