Blog Archive

Compilers are what make the software world function.  Really, where would we be without ‘em?  But compilers are like clans – you stick with the family.  Just like Romeo and Juliet, you can’t marry your enemy’s sister’s cousin; it just ends in tragedy.  For the longest time, the .Net and C# world has been considered an “enterprise” thing, and by that we do not mean Captain Kirk and Spock:  .Net is for corporations, not the web.  It’s not Nodejs.  It’s slow.  It’s for the nerds who ain’t the cool nerds.  So take your compiler and stay on your side of the street.

Well, these days technology is a lot like the Berlin Wall coming down, and like the reunion of eastern and western Europe, the lines between the statically compiled C# and Javascript are blurred.  You see, these days you can compile C# code to Javascript.  In fact, the dirty secret is that you’ve been able to do that for years.

DuoCode Is The New Kid On The Block

A new entry in this field is DuoCode.  From the site:

“DuoCode is an alternative compiler, powered by Microsoft® Roslyn, and integrated in Visual Studio.

It magically cross-compiles your C# 6.0 code into high-quality readable JavaScript code, enabling rapid development of web applications utilizing the extensive features of the C# language, the Visual Studio IDE, and the .NET Framework base class libraries.

Development in C# with Visual Studio brings great productivity wins, thanks to strong-typing, code completion, compile-time error checking, static analysis, code navigation and refactoring.

Develop HTML5 applications using strongly-typed and documented class-definitions of the entire DOM class library (including HTML, CSS, SVG and WebGL definitions).”

DuoCode works with Visual Studio and will compile assemblies from multiple projects to Javascript.  DuoCode claims to support LINQ, classes, Generics, lambda expressions, extension methods, and many features that are the strengths of C#.

The Question:  Should All C# Capabilities Be Ported To Javascript?

Does Javascript need type checking and casting?  Sure, it would be nice not to have to check that an int is not a string, but tying yourself to a compiler for Javascript is going to introduce a different type of workflow for your client side development.  Part of the refreshing aspect of Javascript is not having to compile constantly.  ”You have to ‘build’ your website, hehe …” meant you had to sit and wait for Visual Studio to compile and deploy before you could debug.  One of the main reasons for adopting MVC and leaving ASP.Net Webforms behind was to get away from the stilted, awkward development process when you wanted to examine an issue with your app’s UX.  Compiling your Javascript feels a bit like a step backwards.

That said, LINQ and lambda expressions are a really good thing.  They’re like power tools for your boilerplate code.  Javascript, while having tools like underscoreJS and lodash, is still an open field in this respect.  That’s not to say that there are no alternatives and efforts to build those types of capabilities; indeed, there are hundreds of open source projects and other efforts to create better functionality, and with ES6 on the horizon things like iterators will become part of the new Javascript specification.

Quit Hemming and Hawing – Why Would I Need This?

Ok, under what circumstances will you need this capability, and more importantly, is the generated Javascript any good?  In response to the former query, you may have a series of objects that are tested and run on your server, and you want those to also run in your UX.  Maybe you have a state machine and a process that you want to run on a mobile device, and instead of porting it to Mono or Swift you want to go the HTML5 route.  Or perhaps you want to port a portion of your code base to nodeJS for message brokering.  This could help tremendously.

Yet lurking in the background is this:  what does the ported Javascript look like, and if there are issues, where do you go to fix them?  Are they a Javascript issue, or do they originate in the C#, compounded by the code compiling to Javascript?

So what do you think?  

This post is in our Javascript Primer series, a collection of articles aimed at .Net and back-end developers who are transitioning to client-side UX development.  In a previous installment we featured the publish-subscribe design pattern, and showed how that pattern fosters loosely coupled ViewModels.  It is recommended that you read that post as well, as this edition of our primer builds on those concepts.

A basic tenet of good programming is to break down activities into small components.  Smaller functions are easier to maintain.  In many instances the information that users work with is best presented in logical groups, dissected so that an improper decision can never be made.  A rich user interface may force you to break a larger object into several ViewModels.  But guess what?  If you need to break information down, you will also need to re-assemble that information to transmit back to the mother ship to be stored in your database.

Challenges for the ViewModels

There are several challenges ahead if you need your ViewModels to communicate and you want to retain the agility provided by loose coupling.  If you recall the architecture we established with the publish and subscribe design pattern, all our “pieces” are highly independent from one another.  But we now have a complication.  Independence also means blissful ignorance.  Each ViewModel only knows how to contribute its small portion of information to the glorious “mother object”.  The best it can do is yield up what it has, and somehow, someway, something will piece things together.

Let’s consider a code example:

We have a simple project tracker that has a name, a roster of workers and a calendar that displays each worker’s start date.  Each tab is serviced by a ViewModel.  The “Project Info” tab shares the project name information with “Team”.  The tab “Team” shares the project team data with both the “Project Info” and the “Timeline” tabs.  This is accomplished using the publish and subscribe pattern that was detailed in this post.  You should review that post if you are unfamiliar with postaljs or with the publish and subscribe design pattern.

The challenge here is that we want a “Save” button that will easily gather the data from each tab and save it to our project object.  We also want this controlling activity to be flexible and to allow each ViewModel to supply the data needed without too much orchestration: in other words, we just ask each ViewModel “What do you have for the project object?”, and each ViewModel fulfills its responsibility.

Pipe and Filter Pattern For Chaining Events or Commands

Many of you with a background in server side application development may recognize the pipe and filter pattern.  Simply put, it is a way to chain a set of operations that are to be performed in succession.  This diagram is a depiction of the pipe and filter:

[Diagram: the Pipe and Filter pattern]

The “Filter” is a function that performs one thing.  Our pipe is the “orchestrator” or director in that it will forward to each filter in order.

For our purposes we will continue to use postaljs as our communication mechanism.  We’ll need a pipeline that will keep track of which filter to execute and forward data to each subsequent filter of the process chain.

var pipeline = {
  index: 0,
  filters: ["edit.getViewOneData","edit.getViewTwoData","edit.finalDestination"]
 };

The property “filters” contains the names of the postaljs topics that will kick off a function.  In this case we will need 3 subscriptions that listen for the items listed in the “filters” array.  Each process step, or filter, will be executed in the order listed in the “filters” array: in this case, “edit.getViewOneData” will be executed first.  The property “index” tracks which filter is being processed.  As each filter is executed, it will increment index, then use index to access the next topic in the chain of events.  Starting off the process is accomplished in this fashion:

// Now start up the pipeline, fire off the first step that will execute the first filter
 postal.publish({
  channel: "pipenfilter",
  topic: pipeline.filters[pipeline.index],
  data: pipeline
 });

//  A sample subscription.  This will be executed first since it is listed first in the pipeline.filters array
postal.subscribe({
  channel: "pipenfilter",
  topic: "edit.getViewOneData",
  callback: function (pipeline, env) {
    fetchViewOneData(pipeline);
}
 });

//  After a filter performs, it needs to call the next filter in the chain
var fetchViewOneData = function(pipeline){
  // ... perform activities with some data

  //  To forward to next step, increment the index
  pipeline.index++;

  //  Now forward to the next filter
  postal.publish({
    channel: "pipenfilter",
    topic: pipeline.filters[pipeline.index],
    data: pipeline
  });
};

For our particular scenario with the project UX above we will have to do a few simple things:

  1. The ViewModels “ProjectInfo”, “Timeline” and “Team” will need subscriptions to a “pipenfilter” channel
  2. Each ViewModel will need a corresponding filter function, so the “Team” ViewModel will get a “fetchTeamFilter”
  3. Add an endpoint in our $(document).ready section to act as a controller for the pipeline process.  Since we don’t have a controller, this logic can sit here.  Naturally, for production code you’ll want to be a bit tidier and place this in a controller object.

All of the additional changes are noted with a “New pipe and filter” comment, so you can search through the Javascript to see where the changes have been made.  Most of the updates are just a few lines, and the nice thing is that the code to forward to the next filter is the same each time.  Not much exciting stuff to be seen other than incrementing the index and passing the data to the next topic.

Where No One Has Gone Before (just sayin’)

Now that you can synchronize your data and assemble it back into a proper object, you can begin your next journey.  There are many long term benefits here that you will begin to realize.  What if you need to add an additional ViewModel, or what if you need to change which ViewModel displays the groupings of data?  Without our pipe and filter solution, you would need to change a lot of things.  With what we have described here, the most that you have to do is add an additional filter name to the pipeline list and make sure that your additional ViewModel functions properly.  The other ViewModels are left untouched.

Another benefit is that you can control what functions are fired while allowing the ViewModels to do their thing.  What if you had a ViewModel that published changes to other ViewModels after it finished performing calculations?  By ordering the filters in the pipeline accordingly, you can allow the ViewModels to do that hard work and be assured that all aspects of your data are up to date.  Doing more with less code means you can test and maintain this code with confidence.

This post focuses on getting started with RavenDB, so we’ll set aside our focus on workflows for a bit.  It’s included in the ApprovaFlow series as it is an important part of the workflow framework we’re building.  To follow along you might want to get the source code.

RavenDB is a document database that provides a flexible means for storing object graphs.  As you’ll see, a document database presents you with a different set of challenges than a traditional relational database does.

The storage “unit” in RavenDB is a schema-less JSON document.  Because you are working with documents you now have the flexibility to define documents differently; that is, you can support variations to your data without having to re-craft your data model each time you want to add a new property to a class.  You can adopt a “star” pattern for SQL as depicted here, but querying can become difficult.  Raven excels in this situation, and one such sweet spot is:

Dynamic Entities, such as user-customizable entities, entities with a large number of optional fields, etc. – Raven’s schema free nature means that you don’t have to fight a relational model to implement it.
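To make “schema-less” concrete, a stored Person document might look something like this hypothetical example (Raven keeps the document’s Id and other metadata alongside the document rather than inside the JSON body):

{
  "FirstName": "Bob",
  "LastName": "Smith",
  "DepartmentId": 139
}

Adding a new property to the class simply means new documents carry the extra field; the old documents stay valid as they are.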

Installing and Running RavenDB

The compiled binaries are easy to install.  Download the latest build and extract the files to a share.  Note that in order to run the console you are required to install Silverlight.  To start the server, navigate to the folder and double-click “Start.cmd”.  You will see a screen similar to this one once the server is up and running:

The console will launch itself and will resemble this:

How To Start Developing

In Visual Studio, reference Raven with Raven.Client.Lightweight.  For CRUD operations and querying this will be all that you will need.

First you will need to connect to the document store.  It is recommended that you do this once per application.  That is accomplished with:


var documentStore = new DocumentStore {Url = "http://localhost:8080"};
documentStore.Initialize();

Procedures are carried out using the Unit of Work pattern, and in general you will be using this type of block:


using(var session = documentStore.OpenSession())
{
   //... Do some work
}

RavenDB will work with Plain Old C# Objects and only requires an Id property of type string.  An identity key is generated for Id during the session.  If we were to store multiple objects we would have identities created in succession.  A full discussion of the alternatives to the Id property is here.

Creating a document from your POCO object graphs is very straightforward:


public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Id { get; set; }
    public int DepartmentId { get; set; }
    // ...
}

var person = new Person { FirstName = "Bob", LastName = "Smith", DepartmentId = 139 };

using(var session = documentStore.OpenSession())
{
   session.Store(person);
   session.SaveChanges();
}

Fetching a document can be accomplished in two ways:  by Id or with a LINQ query.  Here’s how to get a document by Id:


string personId = "Person/1";  //  Raven will have auto-generated this value for us.
using(var session = documentStore.OpenSession())
{
   var fetchedPerson = session.Load<Person>(personId);
   //Do some more work
}

You’ll note that there is no casting or conversion required as Raven will determine the object type and populate the properties for you.

There are naturally cases where you want to query for documents based on attributes other than the Id. Best practice dictates that we create static indexes on our documents, as these will offer the best performance. RavenDB also has a dynamic index feature that learns from the queries fired at the server, and over time these dynamic indexes are promoted to permanent ones.

For your first bout with RavenDB you can simply query the documents with LINQ.  The test code takes advantage of the dynamic index feature.  Later you will want to create indexes based on how you will most likely retrieve the documents.  This is different from a traditional RDBMS solution, where the data is optimized for ad hoc querying.  A document database is NOT.
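When you do graduate to a static index, a minimal sketch looks something like this; the index class and its name are hypothetical, but AbstractIndexCreationTask and IndexCreation come from the Raven client library:

public class People_ByDepartment : AbstractIndexCreationTask<Person>
{
    public People_ByDepartment()
    {
        //  Index the DepartmentId field so queries on it no longer
        //  depend on a dynamic index being built on the fly
        Map = people => from person in people
                        select new { person.DepartmentId };
    }
}

//  Register the index with the server once at start up
IndexCreation.CreateIndexes(typeof(People_ByDepartment).Assembly, documentStore);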

Continuing with our example of Person documents we would use:


int departmentId = 139;

using(var session = documentStore.OpenSession())
{
   var people = session.Query<Person>()
                          .Where(x => x.DepartmentId == departmentId)
                          .ToList();
}

In the source code for this post there are more examples of querying.

Debugging, Troubleshooting and Dealing with Frustration

Given that this is something new and an open source project, you may find yourself searching for help and more guidelines.  One thing to avail yourself of while troubleshooting is the fact that RavenDB has a REST interface, so you can validate your assumptions – or worse, confirm your errors – by using curl from the command line.  For example, to create a document via HTTP you issue:

curl -X POST http://localhost:8080/docs -d "{ FirstName: 'Bob', LastName: 'Smith', Address: '5 Elm St' }"

Each action that takes place on the RavenDB server is displayed in a log on the server console app.  Sensei had to resort to this technique when troubleshooting some issues when he first started.  This StackOverflow question details the travails.

Another area that threw Sensei for a loop at first was the way RavenDB writes and maintains indexes.  In short, indexing is a background process, and Raven is designed to be “eventually consistent”.  That means there can be latency between when a change is submitted and saved and when it is indexed, so that it can be fetched via queries.  When running tests from NUnit this code did not operate as expected, yet the console reported that the document was created:


session.Store(teamMember);

int posttestCount = session.Query<TeamMember>()
                .Count();

According to the documentation you can overcome this inconsistency by declaring that you are willing to wait until RavenDB’s indexing has caught up with your writes.  This code will get you the expected results:


int posttestCount = session.Query<TeamMember>()
              .Customize(x => x.WaitForNonStaleResults())
              .Count();

Depending on the number of tests you write you may wish to run RavenDB in Embedded mode for faster results.  This might prove useful for automated testing and builds.  The source code provided in this post does NOT use embedded mode; rather, you need your server running, as this gives you the opportunity to inspect documents and acclimate yourself to the database.
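If you do try embedded mode later, the switch is small – a sketch, assuming the Raven.Client.Embedded assembly is referenced:

var documentStore = new EmbeddableDocumentStore
{
    RunInMemory = true  //  Nothing touches disk, which keeps test runs fast
};
documentStore.Initialize();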

There is much more that you can do with RavenDB, such as creating indexes across documents and assigning security to individual documents.  This primer should be enough to get you started.  Next post we’ll see how RavenDB will fit into the ApprovaFlow framework.  Grab the source, play around and get ready for the next exciting episode.

 

It’s been a while, and as usual Sensei has started something with such bravado and discovered that life offers more bluster and pounding than even he can anticipate.  Hopefully you haven’t given up on the series, ’cause Sensei hasn’t.  Hell, ApprovaFlow is constantly on the forefront, even though it appears that he’s taken a good powder.

Let’s recap our goals and talk about philosophy and direction.  In this long silence a few additional considerations have taken precedence, and this is a good opportunity to assess goals and re-align the development efforts.  In the end ApprovaFlow will:

•  Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  Discussed in ApprovaFlow:  Using the Pipe and Filter Pattern to Build a Workflow Processor.

•  Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.

•  Communicate to the client with a standard set of objects.  In other words, your solution domain will not change how the user interface will gather data from the user.

•  Use one .aspx page to process user input for any type of workflow.

•  Provide the ability to roll your own customizations to the front end or backend of your application.

The astute members of the audience will no doubt say “What about the technical objectives, like how are you going to store all the workflow data?  In flat files?  Will you give me alternatives for storage?  How will I create the workflows, by using Notepad?”  Indeed, Sensei has pondered these issues as well and has accumulated a fair amount of failed experiments, with some being quite interesting.  Given time these little experiments may become posts as well, since there are interesting things to learn from these failures.

What ApprovaFlow Will Need To Provide:  Workflow Storage

The biggest issue is storage.  The point of using Stateless was that we wanted flexibility.  Recall that the state of our state machine can be represented with a mere integer or string.  That makes it pretty easy to store in a database, or a document.  While you could map the Step and Workflow classes to tables in SQL, our domain is using JSON, so it makes sense to gravitate to a storage solution that will easily support that format.  ApprovaFlow will use RavenDB as the document database, but will provide the opportunity for you to use a different solution if you wish.  You’ll find that RavenDB quite readily provides an elegant document storage format for our workflows.
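As a rough sketch, persisting a workflow definition would follow the same session pattern covered in the RavenDB primer above (assuming the WorkflowDefinition class from Simple Workflows With ApprovaFlow and Stateless):

using(var session = documentStore.OpenSession())
{
   //  The entire definition - States, Triggers and StateConfigs - becomes one JSON document
   session.Store(workflowDefinition);
   session.SaveChanges();
}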

As an aside, Sensei experimented with a great alternative to the NoSQL solutions called SisoDb.  This open source project provides you the ability to store your object graphs in SQL Server.  Time permitting, Sensei will share some of his adventures with you regarding this neat project.

What ApprovaFlow Will Need to Provide:  Authorization of Actions

While Sensei was off in the weeds learning about RavenDB he discovered that Ayende created a fantastic mechanism for authorizing user actions on documents.  This authorization of activity can be as granular as denying / allowing updates to occur based on an operation.

Since we want to adhere to principles of flexibility the Authorization features will be implemented as a plug-in, so if you wish to roll your own mechanisms to govern workflow approvals you will be free to do so.

What ApprovaFlow Will Need to Provide:  Admin Tools

Yep.  Sensei is sick of using Notepad to create JSON documents as well.  We want to be able to create the states, the triggers and the target states and save.  We’ll want to assign the filters to specific states and save.  No more text fiddling. Period.  As Sensei is thinking about this, it seems that another pipeline can be created for administration.  Luckily we have a plug-in architecture so this should be rather straight forward.

Summing It All Up

These are really important things to consider, and as much as Sensei hates changing goals in midstream, the capabilities discussed above can make life much easier while implementing a workflow system.  In making the decision to use RavenDB the thought that “a storage solution should not shape the solution domain” kept rearing its ugly head.  But, so what.  We want to finish something, and admittedly this has been a challenge – just look at the lag between posts if you need a reminder.  If Sensei decided to include an IoC container just to remain “loosely” coupled to document storage we’ll get nowhere.  Would you really want to read those posts?  How boring.  Besides, Sensei doesn’t know how to do all that stuff – gonna stick to the stuff he thinks he knows.  Or at least the stuff he can fake.

This is the fourth in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net.  Source code for this post is here.


Last Time on ApprovaFlow

In the previous post we discussed how the Pipe and Filter pattern facilitates a robust mechanism for executing tasks before and after a transition is completed by the workflow state machine.  This accomplished our third goal, and to date we have completed:

•  Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  Discussed in ApprovaFlow:  Using the Pipe and Filter Pattern to Build a Workflow Processor.

These goals remain:

•  Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.

•  Communicate to the client with a standard set of objects.  In other words, your solution domain will not change how the user interface will gather data from the user.

•  Use one .aspx page to process user input for any type of workflow.

•  Provide the ability to roll your own customizations to the front end or backend of your application.

It’s the Small Changes After You Go Live That Upset You

The goal we’ll focus on next is Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.  It’s the small upsetters lurking around the corner – the ones your users will think up after you go live – that will keep you in a constant redeployment cycle.  If we implement a plug-in system, then we can prevent new features from breaking the current production system.  Implementing these changes in isolation will lead to faster testing, validation and happier users.

We lucked out, as our implementation of the Pipe and Filter pattern forced us to create objects with finite functionality.  If you recall, each step in our workflow chain was implemented as a filter derived from FilterBase, and this lends itself nicely to creating plug-ins.  The Pipe and Filter pattern forces us to have a filter for each unique action we wish to carry out.  To save data we have a SaveData filter, to validate that a user can supply a Trigger we have the ValidateUserTrigger, and so on.

“Great, Sensei, but aren’t we still constrained by the fact that we have to recompile and deploy any time we add new filters?  And, if I have to do that, why bother with the pattern in the first place?”

Well, we can easily reduce the need for re-deploying the application through the use of a plug-in system where we read assemblies from a share and interrogate them by searching for a particular object type on application start up.  Each new feature will be a new filter.  This means you will be working with a small project that references ApprovaFlow to create new filters without disturbing the existing architecture.  We’ll also create a manifest of approved plug-ins so that we can control what is used and institute a little security, since we wouldn’t want any plug-in to be introduced surreptitiously.

Plug-in Implementation

The class FilterRegistry will perform the process of reading a share, fetching the objects of type FilterBase, and registering these components just like we do with our system components.  There are a few additions since the last version, as we now need to read and store the manifest for later comparison with the plug-ins.  The new method ReadManifest takes care of this new task:

<pre><code>private void ReadManifest()
{
    string manifestSource = ConfigurationManager.AppSettings["ManifestSource"].ToString();

    Enforce.That(string.IsNullOrEmpty(manifestSource) == false,
        "FilterRegistry.ReadManifest - ManifestSource can not be null");

    var fileInfo = new FileInfo(manifestSource);

    if (fileInfo.Exists == false)
    {
        throw new ApplicationException("FilterRegistry.ReadManifest - File not found");
    }

    StreamReader sr = fileInfo.OpenText();
    string json = sr.ReadToEnd();
    sr.Close();

    this.approvedFilters = JsonConvert.DeserializeObject<List<FilterDefinition>>(json);
}
</code></pre>

The manifest is merely a serialized list of FilterDefinitions.  This is de-serialized into a list of approved filters.  With the approved list, the method LoadPlugIn performs the action of reading the share and matching the FullName of the object type between the manifest entries and the types in the assembly file:

<pre><code>
public void LoadPlugIn(string source)
{
    Enforce.That(string.IsNullOrEmpty(source) == false,
        "PlugInLoader.Load - source can not be null");

    AppDomain appDomain = AppDomain.CurrentDomain;
    var assembly = Assembly.LoadFrom(source);

    var types = assembly.GetTypes().ToList();

    types.ForEach(type =>
    {
        // Is the type from this assembly registered in the manifest?
        var registerFilterDef = this.approvedFilters
            .Where(app => app.TypeFullName == type.FullName)
            .SingleOrDefault();

        if (registerFilterDef != null)
        {
            object obj = Activator.CreateInstance(type);
            var filterDef = new FilterDefinition();
            filterDef.Name = obj.ToString();
            filterDef.FilterCategory = registerFilterDef.FilterCategory;
            filterDef.FilterType = type;
            filterDef.TypeFullName = type.FullName;
            filterDef.Filter = AddCreateFilter(filterDef);

            this.systemFilters.Add(filterDef);
        }
    });
}
</code></pre>

That’s it. We can now control what assemblies are included in our plug-in system.  Later we’ll create a tool that will help us create the manifest so we do not have to manage it by hand.
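To give you a feel for it, the manifest might deserialize from JSON along these lines, assuming FilterDefinition serializes its Name, FilterCategory and TypeFullName (the entries reuse the plug-in type names from the test later in this post):

<pre><code>
[
  { "Name" : "TransporterRepairFilter", "FilterCategory" : "PostProcess",
    "TypeFullName" : "MorePlugins.TransporterRepairFilter" },
  { "Name" : "CaptainUnfitForCommandFilter", "FilterCategory" : "PostProcess",
    "TypeFullName" : "Plugins.CaptainUnfitForCommandFilter" }
]
</code></pre>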

What We Can Do with this New Functionality

Let’s turn to our sample workflow to see what possibilities we can develop.  The test CanPromoteRedShirtOffLandingParty from the class WorkflowScenarios displays the capability of our workflow.  First let’s review our workflow scenario.  We have created a workflow for the Starship Enterprise to allow members of a landing party to request to be left out of the mission.  Basically there is only one way to get out of landing party duty, and that is if Kirk says it’s okay.  Here are the workflow’s State, Trigger and Target State combinations:

State                  Trigger          Target State
RequestPromotionForm   Complete         FirstOfficerReview
FirstOfficerReview     RequestInfo      RequestPromotionForm
FirstOfficerReview     Deny             PromotionDenied
FirstOfficerReview     Approve          CaptainApproval
CaptainApproval        OfficerJustify   FirstOfficerReview
CaptainApproval        Deny             PromotionDenied
CaptainApproval        Approve          PromotedOffLandingParty

Recalling the plots from Star Trek, there were times that the medical officer could declare the commanding officer unfit for duty. Since the Enterprise was originally equipped with our workflow, we want to make just a small addition – not a modification – and give McCoy the ability to allow a red shirt to opt out of the landing party duty.

Here’s where our plug-in system comes in handy.  Instead of adding more states or branches to our workflow, we’ll check for certain conditions when Kirk makes his decisions, and execute actions.  In order to help out McCoy the following filter is created in a separate project:

<pre><code>
public class CaptainUnfitForCommandFilter : FilterBase<Step>
{
    protected override Step Process(Step input)
    {
        if (input.CanProcess && input.State == "CaptainApproval")
        {
            bool kirkInfected = (bool)input.Parameters["KirkInfected"];

            if (kirkInfected && input.Answer == "Deny")
            {
                input.Parameters.Add("MedicalOverride", true);
                input.Parameters.Add("StarfleetEmail", true);
                input.ErrorList.Add("Medical Override of Command");
                input.CanProcess = false;
            }
        }

        return input;
    }
}
</code></pre>

This plug-in is simple: check that the state is CaptainApproval, and when the answer is “Deny” and Kirk has been infected, set the MedicalOverride flag and send Starfleet an email.

The class WorkflowScenarioTest.cs has the method CanAllowMcCoyToIssueUnfitForDuty() that demonstrates how the workflow will execute. We simply add the name of the plug-in to our list of post transition filters:

<pre><code>
string postFilterNames = "MorePlugins.TransporterRepairFilter;Plugins.CaptainUnfitForCommandFilter;SaveDataFilter;";
</code></pre>

This portion of code uses the plug-in:

<pre><code>
// Captain Kirk denies the request, but McCoy issues unfit for command
parameters.Add("KirkInfected", true);

step.Answer = "Deny";
step.AnsweredBy = "Kirk";
step.Participants = "Kirk";
step.State = newState;

processor = new WorkflowProcessor(step, filterRegistry, workflow);
newState = processor.ConfigurePipeline(preFilterNames, postFilterNames)
    .ConfigureStateMachine()
    .ProcessAnswer()
    .GetCurrentState();

// Medical override issued and email to Starfleet generated
bool medicalOverride = (bool)parameters["MedicalOverride"];
bool emailSent = (bool)parameters["StarfleetEmail"];

Assert.IsTrue(medicalOverride);
Assert.IsTrue(emailSent);
</code></pre>

Now you don’t have to hesitate with paranoia each time you need to introduce a variation into your workflows.  No more small upsetters lurking around the corner.  Plus you can deliver these changes faster to your biggest fan, your customer.  Source code is here.  Run through the tests and experiment for yourself.

This is the third entry in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net.  Source code for this post is here.


What We’ve Accomplished Thus Far

In the last post we discussed how Stateless makes creating a lean workflow engine possible, and we saw that we were able to achieve two of our overall goals for ApprovaFlow.  Here’s what we accomplished:

•  Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.
•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.

So we have these goals left:

•  Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.
•  Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.
•  Communicate to the client with a standard set of objects.  In other words, your solution domain will not change how the user interface will gather data from the user.
•  Use one .aspx page to process user input for any type of workflow.
•  Provide the ability to roll your own customizations to the front end or backend of your application.

Our next goal will be Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  We’ll use the Pipe and Filter pattern to simplify the processing, and we’ll see that this approach not only streamlines how you handle variation in tasks, but also provides a clean method for extending our application’s abilities.


The advantage of breaking down the activities of a process is that you can create a series of interchangeable actions.  There may be some cases where you want to re-order the operations at runtime, and you can do so easily when the actions are individual components.

Before we proceed with applying the Pipe and Filter pattern to our solution, we need to establish some nomenclature for our workflow processing.  The following chart lays out the vocabulary we’ll use for the rest of the series.

Term – Definition
State – A stage of a workflow.
Trigger – A message that tells the workflow how to change states.  If the state is “Phone Ringing” and the trigger is “Answer Phone”, the new state for the phone would be “Off Hook”.
StateConfig – A StateConfig defines a pathway or transition from one state to another.  It is comprised of a State, the Trigger and the Target State.
Step – A Step contains the workflow’s current State.  In the course of your workflow you may have many of the same type of steps differentiated by date and time.  In other words, when your workflow has looping capability, the workflow step for a state may be issued many times.
Answer – The Step asks a question and waits for the user response.  The answer the user provides is the trigger that will start the transition from one state to another.  The Answer becomes the Trigger that will change the State.
Workflow – A series of Steps comprised of States, Triggers and their respective transitions expressed as a series of StateConfigs.  Think of this as the definition of a process.
Workflow Instance – A running workflow.  The Steps of the Workflow Instance are governed by how the Steps are defined by a Workflow.

Essentially a framework for providing an extensible workflow system boils down to answering the following questions asked in this order:
• Is the user authorized to provide an Answer to trigger a change to the step’s State?
• Is a special data set required for this particular State that is not part of the Step properties?
• Is the data provided from the user sufficient / valid for triggering a transition in the Workflow Step’s State?
• Are there actions to be performed such as saving special data?
• Can the system execute custom actions based on the State’s Trigger?

This looks very similar to the Pipe and Filter pattern.  Every time a workflow processes a trigger, the questions we asked above must be answered.  Each question could be considered a filter in the pipe and filter scenario.

The five questions above become the basis for our workflow processor components.  For this post we’ll assume that all data will be simply fetched then saved with no special processing.  We’ll also assume that a Workflow Step is considered to be valid when the following elements are correctly supplied:

<pre><code>
public bool IsValidForWorkflowTransition()
{
    return this.Enforce("Step", true)
        .When("AnsweredBy", Janga.Validation.Compare.NotEqual, string.Empty)
        .When("Answer", Janga.Validation.Compare.NotEqual, string.Empty)
        .When("State", Janga.Validation.Compare.NotEqual, string.Empty)
        .When("WorkflowInstanceId", Janga.Validation.Compare.NotEqual, string.Empty)
        .IsValid;
}

public bool IsUserValidParticipant()
{
    return this.Enforce("Step", true)
        .When("Participants", Janga.Validation.Compare.Contains, this.AnsweredBy)
        .IsValid;
}
</code></pre>

Our Workflow Processor will function in accordance with the Pipe and Filter pattern: no matter what type of workflow instance we wish to process, the questions that we listed above will be answered.  Later we will return to discuss the points where the workflow can execute actions specific to the workflow’s definition.

Workflow Processor Code In Depth

Well, how do we configure a Workflow Processor?  In other words, we want to process an actual workflow, but how will we know the workflow type and what to do?  Some of the configuration steps were previewed in Simple Workflows With ApprovaFlow and Stateless, and the same principles apply here with the Configure method.  Collect the States, the Triggers and the StateConfigs, load them into Stateless along with the current state, and you are ready to accept or reject the Trigger for the next State.  The Workflow Processor will conduct these steps, and here is the code:

<pre><code>public WorkflowProcessor ConfigureStateMachine()
{
    Enforce.That(string.IsNullOrEmpty(this.step.State) == false,
        "WorkflowProcessor.Configure - step.State can not be empty");

    this.stateMachine = new StateMachine<string, string>(this.step.State);

    //  Get a distinct list of states with a trigger from state configuration
    //  "State => Trigger => TargetState"
    var states = this.workflow.StateConfigs.AsQueryable()
        .Select(x => x.State)
        .Distinct()
        .ToList();

    //  Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = this.workflow.StateConfigs.AsQueryable()
            .Where(config => config.State == state)
            .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
            .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });

    return this;
}
</code></pre>

The Workflow Processor will need to know the current state of a workflow instance, the answer supplied, who supplied the answer, as well as any parameters that the filters will need for fetching special data.  This is contained in the class Step.cs:

<pre><code>
#region Properties

public string WorkflowInstanceId { get; set; }
public string WorkflowId { get; set; }
public string StepId { get; set; }
public string State { get; set; }
public string PreviousState { get; set; }
public string Answer { get; set; }
public DateTime Created { get; set; }
public string AnsweredBy { get; set; }
public string Participants { get; set; }

public List<string> ErrorList;
public bool CanProcess { get; set; }
public IDictionary<string, object> Parameters { get; set; }

#endregion

#region Constructors

public Step(): this(string.Empty, string.Empty, string.Empty, string.Empty,
    string.Empty, new DateTime(), string.Empty, string.Empty,
    new Dictionary<string, object>())
{ }

public Step(string workflowInstanceId, string stepId, string state, string previousState,
    string answer, DateTime created, string answeredBy, string participants,
    Dictionary<string, object> parameters)
{
    this.WorkflowInstanceId = workflowInstanceId;
    this.StepId = stepId;
    this.State = state;
    this.PreviousState = previousState;
    this.Answer = answer;
    this.Created = created;
    this.AnsweredBy = answeredBy;
    this.Participants = participants;

    this.ErrorList = new List<string>();
    this.Parameters = parameters;
}

#endregion

</code></pre>

Our goal with the Workflow Processor is to accept the user’s answer, process actions, and create the next Step based on the new State, all in one pass.  We will create a pipeline of actions that will always be invoked.  Each action or “filter” will be a component that performs an individual task, such as determining if the step is answered by the correct user.  Each filter will point to the subsequent filter in the pipeline, and the succession of the filters can change easily if we see fit.  All that is needed is to add the filters to the pipeline in the order we want.  Here is the class schema for the Pipe and Filter processing:

We’ll quickly find that information regarding the result of an action or the condition of a Step will need to be accessible to each of the filters.  The class Step is the natural place to store this information, so we will include a property CanProcess to indicate that a filter should be invoked, as well as a List<string> to act as an error log.  This log can be passed back to the client to communicate any errors to the user.  Note that the Step class has the Dictionary property named “Parameters” that allows a filter to pass data on to the next filter in the sequence.

Setting Up the Pipeline

The sequence of filter execution is controlled by the order that the filters are registered.  The class Pipeline is responsible for registering and executing the chain of filters.  Here is the method Register that accepts a filter and retains it for future processing:
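A minimal sketch of what Register might look like, assuming the pipeline holds the head of the filter chain and a list of registered names (the field names here are guesses, not the actual source):

<pre><code>
public Pipeline Register(FilterBase<Step> filter)
{
    if (this.root == null)
    {
        this.root = filter;              //  The first filter becomes the head of the chain
    }
    else
    {
        this.root.Register(filter);      //  FilterBase wires it to the tail of the chain
    }

    this.filterNames.Add(filter.GetType().Name);
    return this;                         //  Returning "this" allows fluent chaining
}
</code></pre>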

We also record the name of the filter so that we may interrogate the pipeline should we want to know if a filter has already been registered.

Pipeline.Register returns a reference to itself, so we can chain together commands fluently:

<pre><code>
pipeline.Register(new ValidParticipantFilter())
.Register(new SaveDataFilter());

</code></pre>

The class FilterBase is the foundation of our filter components.  As stated earlier, each component will point to the subsequent filter in the filter chain.  You’ll note that the class also has a Register method.  This takes on the task of pointing the current filter to the next, and this method is called by the Pipeline as it registers all of the filters.  Here is FilterBase:

<pre><code>
public abstract class FilterBase<T> : IFilter<T>
{
    private IFilter<T> next;

    protected abstract T Process(T input);

    public T Execute(T input)
    {
        T val = Process(input);

        if (this.next != null)
        {
            val = this.next.Execute(val);
        }

        return val;
    }

    public void Register(IFilter<T> filter)
    {
        if (this.next == null)
        {
            this.next = filter;
        }
        else
        {
            this.next.Register(filter);
        }
    }
}
</code></pre>

The method Execute accepts input of type T, and in the Workflow Processor instance this will be Step.  Basically the Execute method is a wrapper, as we call the abstract method Process.  Process will be overridden in each filter, and this will contain the logic specific to the tasks that will be performed.  The code for a filter is quite simple:

<pre><code>
public class ValidParticipantFilter : FilterBase<Step>
{
    protected override Step Process(Step input)
    {
        if (input.CanProcess)
        {
            input.Parameters["ValidFired"] = true;
            input.Parameters["FilterOrder"] += "ValidParticipantFilter;";

            input.CanProcess = input.IsUserValidParticipant();

            if (input.CanProcess == false)
            {
                input.ErrorList.Add("Invalid Participant - " + input.AnsweredBy);
            }
        }

        return input;
    }
}
</code></pre>

Here we check to see if we can process, then perform specific actions if appropriate.  Given that the filters have no knowledge of each other, we can see that they can be executed in any order.  In other words you could have a Pipeline that had filters Step1, Step2, Step3 and you could configure a different pipeline to execute Step3, Step1, and Step2.
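To see what that buys us, here is a quick hypothetical illustration – the filter names are made up, but the Register calls use the same fluent API shown above:

<pre><code>
//  Same filters, two different execution orders - the filters themselves
//  never change, only the order in which they are registered.
pipelineA.Register(new Step1Filter())
         .Register(new Step2Filter())
         .Register(new Step3Filter());

pipelineB.Register(new Step3Filter())
         .Register(new Step1Filter())
         .Register(new Step2Filter());
</code></pre>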

FilterRegistry Organizes Your Filters

Because we want to be able to use our filters in different successions we’ll need to keep a registry of what is available to use, and provide the ability to look up or query different filters depending on our processing needs.  This registry will be created on application start up and will contain all objects of type FilterBase.  Later we’ll add the ability for the registry to load assemblies from a share, so that you can add other filters as simple plug-ins.  Information about each filter is retained in a class FilterDefinition, and the FilterRegistry is merely a glorified List of the FilterDefinitions.  When we want to create a pipeline of filters we will want to instantiate new copies.  Using Expressions we can create Functions that will be stored with our definition for each filter type.  Here is FilterDefinition:

public class FilterDefinition
{
    public string Name { get; set; }
    public string FilterCategory { get; set; }
    public Type FilterType { get; set; }
    public Func<FilterBase<Step>> Filter { get; set; }

    public FilterDefinition() { }
}

We’ll invoke the compiled delegate at runtime to create our filter.  The method AddCreateFilter handles this:

private Func<FilterBase<Step>> AddCreateFilter(FilterDefinition filterDef)
{
    var body = Expression.MemberInit(Expression.New(filterDef.FilterType));
    return Expression.Lambda<Func<FilterBase<Step>>>(body, null).Compile();
}

FilterRegistry is meant to be run once at start up so that all filters are registered and ready to use. You can imagine how slow it could become if you had to interrogate all the assemblies every time you processed a Workflow Step.
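In an ASP.NET application, that one-time setup might look something like this hypothetical Application_Start in Global.asax.cs (the application key is illustrative, and we assume the constructor performs the registration):

<pre><code>
protected void Application_Start()
{
    //  Interrogate the assemblies once and cache the registry
    //  for the life of the application.
    var filterRegistry = new FilterRegistry();
    Application["FilterRegistry"] = filterRegistry;
}
</code></pre>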

Once your FilterRegistry has all assemblies registered you can query and create new combinations with the method GetFilters:

public IEnumerable<FilterBase<Step>> GetFilters(string filterNames)
{
    Enforce.That(string.IsNullOrEmpty(filterNames) == false,
        "FilterRegistry.GetFilters - filterNames can not be null");

    var returnFilters = new List<FilterBase<Step>>();
    var names = filterNames.Split(';').ToList();

    names.ForEach(name =>
    {
        var filter = this.filters.Where(x => x.Name == name)
            .SingleOrDefault();

        if (filter != null)
        {
            returnFilters.Add(filter.Filter.Invoke());
        }
    });

    return returnFilters;
}

Pipeline can accept a list of filters along with the string that represents the order of execution.  The method RegisterFromList accepts a reference to the FilterRegistry along with the names of the filters you want to use.

In the case of the Workflow Processor, we need to divide our filters into pre-trigger and post-trigger activities. Referring back to the 5 questions that our processor asks, questions 1–3 must be answered before we attempt to transition the Workflow State, while questions 4–5 must be answered after the transition has succeeded. The method ConfigurePipeline in WorkflowProcessor.cs accomplishes this task:

public WorkflowProcessor ConfigurePipeline(string preProcessFilterNames, string postProcessFilterNames)
{
    Enforce.That(string.IsNullOrEmpty(preProcessFilterNames) == false,
        "WorkflowProcessor.Configure - preProcessFilterNames can not be null");

    Enforce.That(string.IsNullOrEmpty(postProcessFilterNames) == false,
        "WorkflowProcessor.Configure - postProcessFilterNames can not be null");

    var actionWrapper = new ActionWrapperFilter(this.ExecuteTriggerFilter);

    this.pipeline.RegisterFromList(preProcessFilterNames, this.filterRegistry)
        .Register(actionWrapper)
        .RegisterFromList(postProcessFilterNames, this.filterRegistry);

    return this;
}

Putting It All Together

A lot of talk and theory, so how does this all fit together?  The test class WorkflowScenarioTests illustrates how our processor works.  We are creating a workflow that implements the process for a Red Shirt requesting a promotion off a landing party.  You may recall that the dude wearing the red shirt usually got killed within the first few minutes of Star Trek, so this workflow will help those poor saps get off the death list.  The configuration for the Workflow is contained within the file RedShirtPromotion.json.  There are a few simple rules that we want to enforce with the Workflow.  For one, Spock must review the Red Shirt request, but Kirk will have the final say.

Here is a sample from the class WorkflowScenarioTests.cs:

string source = @"F:vs10devApprovaFlowSimpleWorkflowProcessorTestSuiteTestDataRedShirtPromotion.json";
string preFilterNames = "FetchDataFilter;ValidParticipantFilter;";
string postFilterNames = "SaveDataFilter";

var workflow = DeserializeWorkflow(source);

var parameters = new Dictionary<string, object>();
parameters.Add("FilterOrder", string.Empty);
parameters.Add("FetchDataFired", false);
parameters.Add("SaveDataFired", false);
parameters.Add("ValidFired", false);

var step = new Step("13", "12", "RequestPromotionForm", "",
"Complete", DateTime.Now, "RedShirtGuy", "Data;RedShirtGuy",
parameters);
step.CanProcess = true;

var filterRegistry = new FilterRegistry();

var processor = new WorkflowProcessor(step, filterRegistry, workflow);
string newState = processor.ConfigurePipeline(preFilterNames, postFilterNames)
.ConfigureStateMachine()
.ProcessAnswer()
.GetCurrentState();

Assert.AreEqual("FirstOfficerReview", newState);

Study the tests.  We’ve covered a lot together, and admittedly there is a lot to swallow in this post.  In our next episode we’ll look at how the Pipe and Filter pattern can help us extend our workflow processor’s capability without causing us a lot of pain.  Here’s the source code.  Enjoy, and check back soon for our next installment.


This is the second in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net. Source code for this post is here.

Last time we laid out our goals for a simple workflow engine, ApprovaFlow, with the following objectives:
•  Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.
•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.
•  Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.
•  Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.
•  Communicate to the client with a standard set of objects.  In other words, your solution domain will not change how the user interface will gather data from the user.
•  Use one .aspx page to process user input for any type of workflow.
•  Provide the ability to roll your own customizations to the front end or backend of your application.

The fulcrum point of all we have set out to do with ApprovaFlow is a state machine that will present a state and accept answers supplied by the users. One of Sensei’s misgivings about Windows Workflow is that it is such a behemoth when all you want to implement is a state machine.
Stateless, created by Nicholas Blumhardt, is a shining example of adhering to the rule of “necessary and sufficient”. By using Generics, Stateless allows you to create a state machine where the State and Trigger can be represented by an integer, string, double or enum – say, this sounds like it fulfills our goal:

•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.
Stateless constructs a state machine with the following syntax:

var statemachine =
       new StateMachine<TState, TTrigger>(TState currentState);

For our discussion we will create a state machine that will process a request for promotion workflow. We’ll use:

var statemachine =
       new StateMachine<string, string>(string currentState);

This could very easily take the form of StateMachine<int, int>, depending on your preferences. Regardless of your choice, if the current state is represented by a primitive like int or string, you can just fetch that from a database or a repository and your state machine is loaded with the current state. Contrast that with WF, where you have multiple projects and confusing nomenclature to learn. Stateless just stays out of our way.
Let’s lay out our request for promotion workflow. Here is our state machine represented in English:

Step: Request Promotion Form
  Answer => Complete
  Next Step => Manager Review

Step: Manager Review
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Request Info
  Next Step => Request Promotion Form
  Answer => Approve
  Next Step => Vice President Approve

Step: Vice President Approve
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Manager Justify
  Next Step => Manager Review
  Answer => Approve
  Next Step => Promoted

Step: Promotion Denied
Step: Promoted

Remember the goal Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties? We are very close to achieving that goal. If we substitute “Step” with “State” and “Answer” with “Trigger”, then we have a model that matches how Stateless configures a state machine:

var statemachine = new StateMachine<string, string>(startState);

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
               .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
               .Permit("RequestInfo", "RequestPromotionForm")
               .Permit("Deny", "PromotionDenied")
               .Permit("Approve", "VicePresidentApprove");

Clearly you will not show the code to your business partners or end users, but a simple chart like this should not make anyone’s eyes glaze over:

State: Request Promotion Form
  Trigger => Complete
  Target State => Manager Review

Before we move on you may want to study the test in the file SimpleStateless.cs. Here configuring the state machine and advancing from state to state is laid out for you:

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
                    .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
                     .Permit("RequestInfo", "RequestPromotionForm")
                     .Permit("Deny", "PromotionDenied")
                     .Permit("Approve", "VicePresidentApprove");

//  Vice President state configuration
statemachine.Configure("VicePresidentApprove")
                      .Permit("ManagerJustify", "ManagerReview")
                      .Permit("Deny", "PromotionDenied")
                      .Permit("Approve", "Promoted");

//  Tests
Assert.AreEqual(startState, statemachine.State);

//  Move to next state
statemachine.Fire("Complete");
Assert.IsTrue(statemachine.IsInState("ManagerReview"));

statemachine.Fire("Deny");
Assert.IsTrue(statemachine.IsInState("PromotionDenied"));

The next question that comes to mind is how to represent the various States, Triggers and State configurations as data. Our mission on this project is to adhere to simplicity. One way to represent a Stateless state machine is with JSON:

{WorkflowType : "RequestPromotion",
  States : [{Name : "RequestPromotionForm", DisplayName : "Request Promotion Form"},
    {Name : "ManagerReview", DisplayName : "Manager Review"},
    {Name : "VicePresidentApprove", DisplayName : "Vice President Approve"},
    {Name : "PromotionDenied", DisplayName : "Promotion Denied"},
    {Name : "Promoted", DisplayName : "Promoted"}
    ],
  Triggers : [{Name : "Complete", DisplayName : "Complete"},
     {Name : "Approve", DisplayName : "Approve"},
     {Name : "RequestInfo", DisplayName : "Request Info"},
     {Name : "ManagerJustify", DisplayName : "Manager Justify"},
     {Name : "Deny", DisplayName : "Deny"}
  ],
StateConfigs : [{State : "RequestPromotionForm", Trigger : "Complete", TargetState : "ManagerReview"},
     {State : "ManagerReview", Trigger : "RequestInfo", TargetState : "RequestPromotionForm"},
     {State : "ManagerReview", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "ManagerReview", Trigger : "Approve", TargetState : "VicePresidentApprove"},
     {State : "VicePresidentApprove", Trigger : "ManagerJustify", TargetState : "ManagerReview"},
     {State : "VicePresidentApprove", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "VicePresidentApprove", Trigger : "Approve", TargetState : "Promoted"}
  ]
}

As you can see, we are storing all States and all Triggers with their display names. This will allow you some flexibility with UI screens and reports. Each rule for transitioning from one state to another is stored in the StateConfigs node. Here we are simply representing the chart we created above as JSON.

Since we have a standard way of representing a workflow with JSON, de-serializing this definition to objects is straightforward. Here are the corresponding classes that define a state machine:

public class WorkflowDefinition
{
        public string WorkflowType { get; set; }
        public List<State> States { get; set; }
        public List<Trigger> Triggers { get; set; }
        public List<StateConfig> StateConfigs { get; set; }

        public WorkflowDefinition() { }
}

public class State
{
        public string Name { get; set; }
        public string DisplayName { get; set; }
}

public class Trigger
{
        public string Name { get; set; }
        public string DisplayName { get; set; }

        public Trigger() { }
}
public class StateConfig
{
        public string State { get; set; }
        public string Trigger { get; set; }
        public string TargetState { get; set; }

        public StateConfig() { }
}

We’ll close out this post with an example that de-serializes our state machine definition and allows us to respond to the triggers that we supply – basically a rudimentary workflow. RequestPromotion.cs will be the workflow processor. The method Configure is where we perform the de-serialization, and the process is quite straightforward:

  1. Deserialize the States
  2. Deserialize the Triggers
  3. Deserialize the StateConfigs that contain the transitions from state to state
  4. For every StateConfig, configure the state machine.

Here’s the code:

public void Configure()
{
    Enforce.That((string.IsNullOrEmpty(source) == false),
                            "RequestPromotion.Configure - source is null");

    string json = GetJson(source);

    var workflowDefinition = JsonConvert.DeserializeObject<WorkflowDefinition>(json);

    Enforce.That((string.IsNullOrEmpty(startState) == false),
                            "RequestPromotion.Configure - startState is null");

    this.stateMachine = new StateMachine<string, string>(startState);

    //  Get a distinct list of states with a trigger from state configuration
    //  "State => Trigger => TargetState"
    var states = workflowDefinition.StateConfigs.AsQueryable()
                                   .Select(x => x.State)
                                   .Distinct()
                                   .ToList();

    //  Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = workflowDefinition.StateConfigs.AsQueryable()
                                   .Where(config => config.State == state)
                                   .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
                                   .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });
}
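Two helpers appear in Configure that the listing doesn’t show: Enforce.That, a simple guard clause, and GetJson, which loads the workflow definition. Here is a minimal sketch of both, under the assumption that source is simply a path to a JSON file – check the source download for the real implementations:

using System;
using System.IO;

public static class Enforce
{
    //  Guard clause: throw with the supplied message when the condition fails
    public static void That(bool condition, string message)
    {
        if (condition == false)
        {
            throw new ArgumentException(message);
        }
    }
}

//  Lives on RequestPromotion.  Assumption: the definition is a JSON file on disk.
private string GetJson(string source)
{
    return File.ReadAllText(source);
}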

And we advance the workflow with this method:

public void ProgressToNextState(string trigger)
{
    Enforce.That((string.IsNullOrEmpty(trigger) == false),
                            "RequestPromotion.ProgressToNextState - trigger is null");

    this.stateMachine.Fire(trigger);
}

The class RequestPromotionTests.cs illustrates how this works.
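If you don’t have the source handy, a rough sketch of such a test might look like the one below. The constructor arguments and the CurrentState property are assumptions made for illustration – the actual RequestPromotionTests.cs may wire things differently:

[Test]
public void RequestPromotion_advances_from_request_to_promoted()
{
    //  Assumed constructor: where the JSON definition lives and the start state
    var workflow = new RequestPromotion("RequestPromotion.json", "RequestPromotionForm");
    workflow.Configure();

    //  Employee completes the form, manager approves, vice president approves
    workflow.ProgressToNextState("Complete");
    workflow.ProgressToNextState("Approve");
    workflow.ProgressToNextState("Approve");

    //  CurrentState is a hypothetical pass-through to StateMachine.State
    Assert.AreEqual("Promoted", workflow.CurrentState);
}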

We have seen how we can fulfill the objectives laid out for ApprovaFlow and have covered a significant part of the functionality that Stateless will provide for our workflow engine.  Here is the source code.


Like Tolkien, Sensei wants to create the landscapes, cultures and languages before he writes his next epic. You can be the judge whether the work is a series of sketches and notes like the Silmarillion or a cohesive, compelling story that you want to read again and again. As a bonus, Sensei will deliver working software that hopefully will be of use to you.  (Photo credit - utnapistim).

The epic will be called ApprovaFlow. ApprovaFlow is a framework / process / methodology that allows you to create workflow applications that are easy to deploy and configurable. With ApprovaFlow Sensei hopes to demonstrate how to readily incorporate the inevitable changes that your users will ask of you. Deliver changes effortlessly and without groans. Cast off the chains of inconvenient builds and focus on creating solutions that stay out of the users’ way.

Ok. Management wants bullet points, so here are our goals for ApprovaFlow:

• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer or string. Quickly fetch the state of a workflow.
• Create pre- and post-processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide the ability to roll your own customizations to the front end or back end of your application.

There it is. These goals will probably take us a good amount of time to review and implement. Is it worth it? Hell yeah. We’ll end up with one simple project instead of a bloated framework where it takes forever to find anything. A nice by-product will be that you can spend more time thinking about how to solve your users’ problems rather than trying to figure out a monstrous framework that requires a huge investment of energy and time to learn how to get simple things done.

Some gifts just keep on giving, and many times things can take on a momentum that grows beyond your expectations.  Bob Sherwood wrote to Sensei and pointed out that DataTables.net supports multiple column sorting.  All you do is hold down the shift key and click on any second or third column, and DataTables will add that column to the sort criteria.  ”Well, how come it doesn’t work with the server side solution?”  Talk about the sound of one hand clapping.  How about that for a flub!  Sensei didn’t think of that!  Then panic set in – would this introduce new complexity to the DataTablePager solution, making it too difficult to maintain a clean implementation?  After some long thought it seemed that a solution could be neatly added.  Before reading on, you should download the latest code to follow along.

How DataTables.Net Communicates Which Columns Are Involved in a Sort

If you recall, DataTables.Net uses a structure called aoData to communicate to the server what columns are needed, the page size, and whether a column is a data element or a client side custom column.  We covered that in the last DataTablePager post.  aoData also has a convention for sorting:

bSortColumn_X=ColumnPosition

In our example we are working with the following columns:

,Name,Agent,Center,,CenterId,DealAmount

where column 0 is a custom client side column, column 1 is Name (a mere data column), column 2 is Agent, column 3 is Center (more data columns), column 4 is another custom client side column, and the remaining columns are just data columns.

If we are sorting just by Name, then aoData will contain the following:

bSortColumn_0=1

When we wish to sort by Center, then by Name, we get the following in aoData:

bSortColumn_0=3

bSortColumn_1=1

In other words, the first column we want to sort by is in position 3 (Center) and the second column (Name) is in position 1.  We’ll want to record this somewhere so that we can pass it to our order routine.  aoData passes all column information to us on the server, but we’ll have to parse through the columns and check whether one or many of the columns is actually involved in a sort request, and as we do we’ll need to preserve the order that each column takes in the sort.

SearchAndSortable Class to the Rescue

You’ll recall that we have a class called SearchAndSortable that defines how the column is used by the client.  Since we iterate over all the columns in aoData it makes sense that we should take this opportunity to see if any column is involved in a sort and store that information in SearchAndSortable as well.  The new code for the class looks like this:

public class SearchAndSortable
    {
        public string Name { get; set; }
        public int ColumnIndex { get; set; }
        public bool IsSearchable { get; set; }
        public bool IsSortable { get; set; }
        public PropertyInfo Property{ get; set; }
        public int SortOrder { get; set; }
        public bool IsCurrentlySorted { get; set; }
        public string SortDirection { get; set; }

        public SearchAndSortable(string name, int columnIndex, bool isSearchable,
                                bool isSortable)
        {
            this.Name = name;
            this.ColumnIndex = columnIndex;
            this.IsSearchable = isSearchable;
            this.IsSortable = isSortable;
        }

        public SearchAndSortable() : this(string.Empty, 0, true, true) { }
    }

There are 3 new additions:

IsCurrentlySorted - is this column included in the sort request?

SortDirection - “asc” or “desc” for ascending and descending.

SortOrder - the order of the column in the sort request: is it the first or second column in a multi-column sort?

As we walk through the column definitions, we’ll look to see if each column is involved in a sort and record what direction – ascending or descending – is required. From our previous post you’ll remember that the method PrepAOData is where we parse our column definitions. Here is the new code:

//  Sort columns
this.sortKeyPrefix = aoDataList.Where(x => x.Name.StartsWith(INDIVIDUAL_SORT_KEY_PREFIX))
                               .Select(x => x.Value)
                               .ToList();

//  Column list
var cols = aoDataList.Where(x => x.Name == "sColumns"
                                 && string.IsNullOrEmpty(x.Value) == false)
                     .SingleOrDefault();

if (cols == null)
{
  this.columns = new List<string>();
}
else
{
  this.columns = cols.Value
                     .Split(',')
                     .ToList();
}

//  What column is searchable and / or sortable
//  What properties from T are identified by the columns
var properties = typeof(T).GetProperties();
int i = 0;

//  Search and store all properties from T
this.columns.ForEach(col =>
{
  if (string.IsNullOrEmpty(col) == false)
  {
    var searchable = new SearchAndSortable(col, i, false, false);
    var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                               .ToList();
    searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
    searchable.Property = properties.Where(x => x.Name == col)
                                    .SingleOrDefault();

    searchAndSortables.Add(searchable);
  }

  i++;
});

//  Sort
searchAndSortables.ForEach(sortable => {
  var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                       .ToList();
  sortable.IsSortable = (sort[0].Value == "False") ? false : true;
  sortable.SortOrder = -1;

  //  Is this item amongst currently sorted columns?
  int order = 0;
  this.sortKeyPrefix.ForEach(keyPrefix => {
    if (sortable.ColumnIndex == Convert.ToInt32(keyPrefix))
    {
      sortable.IsCurrentlySorted = true;

      //  Is this the primary sort column or secondary?
      sortable.SortOrder = order;

      //  Ascending or Descending?
      var ascDesc = aoDataList.Where(x => x.Name == "sSortDir_" + order)
                              .SingleOrDefault();
      if (ascDesc != null)
      {
        sortable.SortDirection = ascDesc.Value;
      }
    }

    order++;
  });
});

To sum up, we’ll traverse all of the columns listed in sColumns. For each column we’ll grab the PropertyInfo from our underlying object of type T. This gives us only those properties that will be displayed in the grid on the client. If the column is marked as searchable, we indicate that by setting the IsSearchable property on the SearchAndSortable class.  This happens in the block above that traverses this.columns.

Next we need to determine what we can sort, and will traverse the new list of SearchAndSortables we created. DataTables will tell us if a column can be sorted with the following convention:

bSortable_ColNumber = True

So if the column Name were to be “sortable”, aoData would contain:

bSortable_1 = True

We record the sortable state at the start of the //  Sort section in the code listing above.

Now that we know whether we can sort on this column, we have to look through the sort request and see if the column is actually involved in a sort.  We do that by looking at what DataTables.Net sent to us from the client.  Again, the convention is to send bSortColumn_0=1 to indicate that the first column for the sort is the second item listed in the sColumns property.  aoData will contain many bSortColumn entries, so we’ll walk through each one and record the order that each column should take in the sort.  That occurs in the loop over sortKeyPrefix, where we match the column index with the bSortColumn_x value.

We’ll also determine what the sort direction – ascending or descending – should be.  Immediately after that match we read the direction of the sort from the sSortDir_x entry and record this value in the SearchAndSortable.

When the method PrepAOData is completed, we have a complete map of all columns and what columns are being sorted, as well as their respective sort direction.  All of this was sent to us from the client and we are storing this configuration for later use.

Performing the Sort


If you can picture what we have so far: we have basically created a collection of column names with their respective PropertyInfos and have recorded which of those properties are involved in a sort.  At this stage we should be able to query this collection and get back those properties and the order in which the sort applies.

You may already be aware that you can have a compound sort statement in LINQ with the following statement:

var sortedCustomers = customer.OrderBy(x => x.LastName)
                              .ThenBy(x => x.FirstName);

The trick is to run through all the properties and create that compound statement. Remember when we recorded the position of the sort as an integer? This makes it easy to sort out the messy scenarios where the first column of a sort sits in the middle of sColumns. SearchAndSortable.SortOrder takes care of this for us: just order the currently sorted columns by SortOrder and you’re good to go. So that code would look like the following:

var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                    .OrderBy(x => x.SortOrder)
                                    .ToList();

sorted.ForEach(sort => {
    records = records.OrderBy(sort.Name, sort.SortDirection,
                              (sort.SortOrder == 0) ? true : false);
});

In the ForEach above we are calling our extension method OrderBy in Extensions.cs. We pass the property name, the sort direction, and whether this is the first column of the sort. This last piece is important, as it determines whether we get an “OrderBy” or a “ThenBy”. When it’s the first column – you guessed it – we get “OrderBy”. Sensei found this magic on a StackOverflow post by Marc Gravell and others.
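The shipped Extensions.cs is in the download; for readers who want the gist now, here is a sketch in the spirit of that StackOverflow answer. The signature mirrors how the method is called above, but the body and class name are an illustration, not the actual shipped code:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class DynamicOrderByExtensions
{
    public static IQueryable<T> OrderBy<T>(this IQueryable<T> source,
        string propertyName, string sortDirection, bool isFirstColumn)
    {
        bool descending = string.Equals(sortDirection, "desc",
                                        StringComparison.OrdinalIgnoreCase);

        //  The first sort column gets OrderBy / OrderByDescending; subsequent
        //  columns chain with ThenBy / ThenByDescending, so isFirstColumn must
        //  be true on the first call to give ThenBy an ordered query to extend
        string methodName = isFirstColumn
            ? (descending ? "OrderByDescending" : "OrderBy")
            : (descending ? "ThenByDescending" : "ThenBy");

        //  Build the lambda x => x.PropertyName
        var parameter = Expression.Parameter(typeof(T), "x");
        var property = Expression.Property(parameter, propertyName);
        var lambda = Expression.Lambda(property, parameter);

        //  Compose the static call Queryable.OrderBy(source, x => x.PropertyName)
        var call = Expression.Call(typeof(Queryable), methodName,
                                   new[] { typeof(T), property.Type },
                                   source.Expression, Expression.Quote(lambda));

        return source.Provider.CreateQuery<T>(call);
    }
}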

Here is the entire method ApplySort from DataTablePager.cs, and note how we still check for the initial display of the data grid and default to the first column that is sortable.

private IQueryable<T> ApplySort(IQueryable<T> records)
{
  var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                      .OrderBy(x => x.SortOrder)
                                      .ToList();

  //  Are we at initialization of grid with no column selected?
  if (sorted.Count == 0)
  {
    string firstSortColumn = this.sortKeyPrefix.First();
    int firstColumn = int.Parse(firstSortColumn);

    string sortDirection = "asc";
    sortDirection = this.aoDataList.Where(x => x.Name == INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + "0")
                                   .Single()
                                   .Value
                                   .ToLower();

    if (string.IsNullOrEmpty(sortDirection))
    {
      sortDirection = "asc";
    }

    //  Initial display will set order to first column - column 0
    //  When column 0 is not sortable, find first column that is
    var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
                                          .SingleOrDefault();
    if (sortable == null)
    {
      sortable = this.searchAndSortables.First(x => x.IsSortable);
    }

    return records.OrderBy(sortable.Name, sortDirection, true);
  }
  else
  {
    //  Traverse all columns selected for sort
    sorted.ForEach(sort => {
      records = records.OrderBy(sort.Name, sort.SortDirection,
                                (sort.SortOrder == 0) ? true : false);
    });

    return records;
  }
}

It’s All in the Setup

Test it out. Hold down the shift key and select a second column and WHAMO – multiple column sorts! Hold down the shift key and click the same column twice and KAH-BLAMO multiple column sort with descending order on the second column!!!

The really cool thing is that our process on the server is being directed by DataTables.Net on the client.  And even awesomer is that you have zero configuration on the server.  Most awesome-est is that this will work with all of your domain objects: because we have used generics, we can apply this to any class in our domain.  So what are you going to do with all that time you just got back?

Source code has been yet again updated!! Read about the changes in DataTablePager Now Has Multi-Column Sort Capability For DataTables.Net.  If you are new to DataTables.Net and Sensei’s paging solution and want a detailed study of how it works, work through this post first, then get the latest edition.  Note, code links in this post are to the first version.

The last episode of server-side paging with DataTablePager for DataTables.Net reviewed the basics of a server-side solution that paged records and returned results in the multiples specified by DataTables.Net.  You will want to have read that post before proceeding here.  The older version of the source is included in that post as well, which will help get you acclimated.  The following capabilities were reviewed:

  • The solution used generics and could work with any collection of IQueryable.  In short, any of the classes from your domain solution could be used.
  • Filtering capability across all properties was provided.  This included partial word matching, regardless of case.
  • Ordering of the result set was in response to the column clicked on the client’s DataTables grid.

DataTablePager Enhancements

This past month Sensei has added new capabilities to the DataTablePager class that makes it an even better fit for use with DataTables.Net.  The new features are:

  • Dynamically select the columns from the properties of your class based on the column definitions supplied by DataTables.Net!!!
  • Exclude columns from sort or search based on configuration by DataTables.Net
  • Mix columns from your class properties with client-side only column definitions; e.g. create a column with <a href>’s that do not interfere with filtering, sorting, or other processing.

Before we jump into the nitty-gritty details let’s review how DataTables.Net allows you to control a column’s interaction with a data grid.  Grab the new source code to best follow along.

DataTables.Net Column Definition

You would think that there would be quite a few steps to keep your server-side data paging solution in concert with a client-side implementation, and that would mean customization for each page.   DataTables.Net provides you with fine control over what your columns will do once displayed in a data grid.  Great, but does that mean a lot of configuration on the server side of the equation?  As we’ll soon see, no, it doesn’t.  What you configure on the client is all that you need to do.

The structure aoColumnDefs is the convention we use for column configuration.  From the DataTables.Net site:

aoColumnDefs: This array allows you to target a specific column, multiple columns, or all columns, using the aTargets property of each object in the array (please note that aoColumnDefs was introduced in DataTables 1.7). This allows great flexibility when creating tables, as the aoColumnDefs arrays can be of any length, targeting the columns you specifically want. The aTargets property is an array to target one of many columns and each element in it can be:

  • a string – class name will be matched on the TH for the column
  • 0 or a positive integer – column index counting from the left
  • a negative integer – column index counting from the right
  • the string “_all” – all columns (i.e. assign a default)

So in order for you to include columns in a sort you configure in this manner:

/* Using aoColumnDefs */
$(document).ready(function() {
	$('#example').dataTable( {
		"aoColumnDefs": [
			{ "bSortable": false, "aTargets": [ 0 ] }
		] } );
} );

In other words, we are defining that the first column – column 0 – will not be included in the sorting operations.  When you review the column options you’ll see that you can apply css classes to multiple columns, include a column in filtering, supply custom rendering of a column, and much more.

In the example that we’ll use for the rest of the post we are going to provide the following capability for a data grid:

  1. The first column – column 0 – will be an action column with a hyperlink, and we will want to exclude it from sort and filtering functions.
  2. Only display a subset of the properties from a class.  Each of these columns should be sortable and filterable.
  3. Maintain the ability to chunk the result set in the multiples specified by DataTables.Net; that is, multiples of 10, 50, and 100.

Here is the configuration from the aspx page SpecifyColumns.aspx:

"aoColumnDefs" : [
   {"fnRender" : function(oObj){
      return "<a href=\"center.aspx?centerid=\">Edit</a>";
   },
     "bSortable" : false,
     "aTargets" : [0]},
   {"sName" : "Name",
     "bSearchable" : true,
     "aTargets": [1]},
   {"sName" : "Agent",
    "bSearchable" : true,
    "bSortable" : true,
    "aTargets" : [2]
   },
   {"sName" : "Center", "aTargets": [3]},
   {"fnRender" : function(oObj){
            return "2nd Action List";
         },
     "bSortable" : false,
     "aTargets" : [4]},
   {"sName" : "CenterId", "bVisible" : false, "aTargets" : [5]},
   {"sName" : "DealAmount", "aTargets" : [6]}
]

  1. Column 0 is our custom column – do not sort or search on this content.  Look at oObj.aData[4] – this is a column that we’ll return but not display.  It’s referred to by its position in the data array that DataTables.Net expects back from the server.
  2. Columns 1 – 3 are data and can be sorted.  Note the use of “sName”.  This will be included in a named column list that corresponds to the source property from our class.  This will be very important later on, as it allows us to query our data and return it in any order to DataTables.Net.  DataTables will figure out what to do with it before it renders.
  3. We threw in another custom column.  Again, no sort or search, but we’ll see how this affects the server-side implementation later on.  Hint – there’s no sName used here.
  4. Another data column.

To recap, we want to define what data we need to display and how we want to interact with that data by instructing DataTables.Net alone.  We’re going to be lazy and not do anything else – the class DataTablePager will respond to the instructions that DataTables.Net supplies, and that’s it.  We’ll review how to do this next.  Sensei thinks you’ll really dig it.

DataTablePager Class Handles your Client Side Requests

If you recall, DataTables.Net communicates to the server via the structure aoData.  Here is the summary of the parameters.  One additional parameter that we’ll need to parse is the sColumns parameter, and it will contain the names and order of the columns that DataTables.Net is rendering.  For our example, we’ll get the following list of columns if we were to debug on the server:

,Name,Agent,Center,,CenterId,DealAmount

These are all the columns we named with sName, plus a place holder for each custom column that is not found in our class.  This has several implications.  For one, it means that we can no longer simply use reflection to get at our properties, filter them, and send them back down to the client.  The client is now expecting an array where each row will have 7 items, 5 of which are named and 2 of which are place holders for items that the client wants to reserve for itself.  Hence the convention of passing an empty item in the delimited string as shown above.

It also means that we’ll have to separate out the columns that we can filter or sort.  Again, this is the reason for leaving the custom column names blank.  In other words, we’ll have to keep track of the items that we can search and sort.  We’ll do this with a class called SearchAndSortable:

public class SearchAndSortable
    {
        public string Name { get; set; }
        public int ColumnIndex { get; set; }
        public bool IsSearchable { get; set; }
        public bool IsSortable { get; set; }
        public PropertyInfo Property{ get; set; }

        public SearchAndSortable(string name, int columnIndex, bool isSearchable, bool isSortable)
        {
            this.Name = name;
            this.ColumnIndex = columnIndex;
            this.IsSearchable = isSearchable;
            this.IsSortable = isSortable;
        }

        public SearchAndSortable() : this(string.Empty, 0, true, true) { }
    }

This summarizes what we’re doing with our properties.   The property ColumnIndex records the position in sColumns where our column occurs.  Since we’ll need access to the actual properties themselves, we store these in the SearchAndSortable as well so that we can reduce the number of calls that use reflection.  DataTablePager uses a List of SearchAndSortables to track what’s going on.  We fill this list in the method PrepAOData():

//  What column is searchable and / or sortable
//  What properties from T are identified by the columns
var properties = typeof(T).GetProperties();
int i = 0;

//  Search and store all properties from T
this.columns.ForEach(col =>
{
    if (string.IsNullOrEmpty(col) == false)
    {
        var searchable = new SearchAndSortable(col, i, false, false);
        var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                                   .ToList();
        searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
        searchable.Property = properties.Where(x => x.Name == col)
                                        .SingleOrDefault();

        searchAndSortables.Add(searchable);
    }

    i++;
});

//  Sort
searchAndSortables.ForEach(sortable => {
    var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                         .ToList();
    sortable.IsSortable = (sort[0].Value == "False") ? false : true;
});

We get the properties from our class, then traverse the columns and match the property names with the names of the columns. When there is a match, we query aoData and get the column’s search and sort definitions based on the ordinal position of the column in the sColumns variable. The DataTables.Net convention for communicating this is of the form:

bSortable_ + column index => “bSortable_1” or “bSearchable_2”

We take care of that with this line of code:

var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                           .ToList();
searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;

Now we go through the list of properties again, but this time determine whether we should sort any of the columns. That happens in the section marked //  Sort. In the end we have a list of properties that corresponds with the columns DataTables.Net has requested, and we have defined whether each property can be searched (filtered) or sorted.

For filtering, recall that DataTablePager uses the method GenericSearchFilter().  The only alteration here is that we only add the properties to our query that are defined as searchable:

//  Create a list of searchable properties
var filterProperties = this.searchAndSortables.Where(x => x.IsSearchable)
                                              .Select(x => x.Property)
                                              .ToList();

The rest of the method is unaltered from the prior version. Pretty cool!! Again, we’ll only get the properties that we declared as legal for filtering. We’ve also eliminated any chance of mixing a custom column in with our properties because we did not supply an sName in our configuration.

The method ApplySort() required one change. On the initial load of DataTables.Net, the client will pass up a request to sort on column 0 even though you may have excluded it. When that is the case, we’ll just look for the first column that is sortable and order by that column.

//  Initial display will set order to first column - column 0
//  When column 0 is not sortable, find first column that is
var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
                                      .SingleOrDefault();
if (sortable == null)
{
   sortable = this.searchAndSortables.First(x => x.IsSortable);
}

return records.OrderBy(sortable.Name, sortDirection, true);

After we have filtered and sorted the data set, we can finally select only those properties that we want to send to the client.  Recall that we have parsed the variable sColumns that tells us what columns are expected.  We’ll pass these names on to the extension method PropertiesToList().  This method will only serialize a property if its column is included, and since we have already pared down our data set as a result of our query and paging, there is very little performance impact.  Here is the new PropertiesToList method:

public static List<string> PropertiesToList<T>(this T obj, List<string> propertyNames)
{
   var propertyList = new List<string>();
   var properties = typeof(T).GetProperties();
   var props = new List<PropertyInfo>();

   //  Find all "" in propertyNames and insert empty value into list at
   //  corresponding position
   var blankIndexes = new List<NameValuePair>();
   int i = 0;

   //  Select and order filterProperties.  Record index position where there is
   //  no property
   propertyNames.ForEach(name =>
   {
      var property = properties.Where(prop => prop.Name == name.Trim())
                               .SingleOrDefault();

      if (property == null)
      {
         blankIndexes.Add(new NameValuePair(name, i));
      }
      else
      {
         props.Add(property);
      }
      i++;
   });

   propertyList = props.Select(prop => (prop.GetValue(obj, new object[0]) ?? string.Empty).ToString())
                       .ToList();

   //  Add "" to List as client expects blank value in array
   blankIndexes.ForEach(index => {
      propertyList.Insert(index.Value, string.Empty);
   });

   return propertyList;
}
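To make the round trip concrete, here is a quick usage sketch – pagedDeals and its Deal class are hypothetical stand-ins for a result set that DataTablePager has already filtered and paged:

//  The column list parsed from sColumns, blank place holders included
var columnNames = ",Name,Agent,Center,,CenterId,DealAmount".Split(',').ToList();

//  One array of strings per row; the blank positions stay empty so the
//  client can render its custom columns
List<List<string>> rows = pagedDeals.Select(deal => deal.PropertiesToList(columnNames))
                                    .ToList();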

You might ask why not just pass in the list of SearchAndSortables and avoid using reflection again. You could, but remember that at this point we have reduced the number of items to the page size of 10, 50 or 100 rows, so the reflection calls will not have that great an impact. You should also consider whether you want a general-purpose function that selects only the properties you need: using SearchAndSortable would narrow the scope of utility, as you can use this method in areas other than prepping data for DataTables.Net.

Now It’s Your Turn

That’s it.  Play with the page named SpecifyColumns.aspx.  You should be able to add and remove columns in the DataTables.Net configuration and they will just work.  This will mean, however, that you’ll always have to define your columns in your aspx page.  But since we worked really hard the first time around, DataTablePager will still be able to create paged data sets for any class in your domain.

Source code is here.  Enjoy.

