Blog Archive

Organizing data so that you can sort, filter or manipulate items efficiently is critical for front end applications.  These days it's not enough to have a pretty design – it's gotta like do some stuff.  Just sayin'.  It would be silly if you had to constantly send requests back to a server just to filter your items or sort them in a new order.

Sometimes you might be asked to support hierarchical or nested data.  A perfect example is a project plan: there are levels and nested levels; there is an implied relationship between the parent levels and their children; and the order of the list items implies a progression.  Consider also that each list item not only has a task, but also a duration for that task, and you have to total all the durations for an estimate.  Perhaps your crazy manager asks you to fetch only the tasks where a certain team member works, oh, and by the way, exclude all the tasks that are parent level tasks.  As you can see, there is a need for a structured approach that you can easily re-use.

Simple Nested Structures May Not Be Enough

In general, creating nested data structures is fairly straightforward.  Let’s continue with our scenario of a project plan:


var Task = function (options) {
    var _options = options || {};

    var _description = _options.description || "";
    var _assignedTo = _options.assignedTo || "";
    var _duration = _options.duration || 0.0;

    var _tasks = [];

    return {
        description: _description,
        assignedTo: _assignedTo,
        duration: _duration,
        tasks: _tasks
    };
};

We will represent each project item as a Task that has a description, the team member assigned to the task and a duration. The array “_tasks” represents the sub tasks or children of a task. You can indeed represent a hierarchy by simply adding this array and achieving whatever nesting level you desire.  This jsFiddle illustrates what that would look like.
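
If you don’t want to open the jsFiddle, here is a minimal sketch of what that hand-rolled nesting might look like (the task names and durations here are made up for illustration):

var plan = [];

var poc = new Task({ description: "POC Development", assignedTo: "Frodo", duration: 0 });
var checkSchedule = new Task({ description: "Check Team Schedule", assignedTo: "Sam", duration: 2 });

//  Nest a sub task simply by pushing onto the parent's tasks array
poc.tasks.push(checkSchedule);
plan.push(poc);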

While this does the job, how are you going to provide filtering? We need a way to walk the hierarchy; furthermore, we don’t want a one-time function for each type of traversal. Also, how do we handle the times when we need to effect a change only on items that are two levels deep? Simple recursion will not be enough here.

TreeNodes Can Provide Help

What we need is a data structure that will tell us where we are in the hierarchy of data items: who the parent of an item is, and whether there are any children for a particular item. Once we can obtain these, performing operations is much easier, and we can answer questions like “What are all the tasks that are over 10 hours?” or “What are all the tasks for ‘POC Development’?”.  Being able to walk the hierarchy will also give us the capability to automatically calculate the overall hours for a project.

var TreeNode = function (options) {
        var _options = options || {};

        var _value = _options.value || {};
        var _children = _options.children || [];
        var _parent = _options.parent || null;

        //  Public methods
        var isRoot = function () {
            return (_parent == null);
        };

        var isLeaf = function () {
            return (_children.length === 0);
        };

        var level = function () {
            return isRoot() ? 0 : _parent.level() + 1;
        };

        var addChild = function (value) {
            var childNode = new TreeNode({ value: value, parent: this });
            _children.push(childNode);

            return childNode;
        };

        return {
            value: _value,
            children: _children,
            parent: _parent,

            //  public methods
            isRoot: isRoot,
            isLeaf: isLeaf,
            level: level,
            addChild: addChild
        };
    };

Let’s dissect this a bit. A TreeNode will wrap an object; more specifically “_value” will be the container for our Task object. “_children” is an array of TreeNodes that each wrap another Task object. A TreeNode can have a “_parent”, so naturally any item in the _children array will “point” back to a single TreeNode. This establishes our relationships between tasks and sub-tasks. In some scenarios you have a TreeNode and need a reference to that TreeNode’s parent, and “_parent” offers that convenience.

The function “isRoot()” tells us that we are at the top of a hierarchy, “isLeaf()” tells us that a TreeNode has no children, and “level()” tells us what level a TreeNode sits at. Looking at this code you’ll note that “level()” relies on “isRoot()” to determine whether to return 0 or simply increment the parent’s level. To create your hierarchy, simply call treeNode.addChild(new Task(…));
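
For example, using the Task and TreeNode constructors above, building a tiny hierarchy by hand might look like this:

var root = new TreeNode({ value: "ROOT" });

var parentNode = root.addChild(new Task({ description: "POC Development" }));
var childNode = parentNode.addChild(new Task({ description: "Check Team Schedule" }));

console.log(root.isRoot());      // true
console.log(childNode.level());  // 2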

If you look at the jsFiddle you’ll see that we have a hierarchy established, and the path of “Check Team Schedule” can be represented as tasks[3].tasks[0].tasks[2].  If we set a variable to tasks[3].tasks[0].tasks[2] we are disconnected from our hierarchy, as the individual items sitting in each “tasks” array are unaware of their parents.  In order for us to start any filtering or querying, we will first have to pull the tasks apart and re-assemble them as TreeNodes.  Luckily we can create a function to build a tree from our Task objects:

var treeFuncs = function(){
    var buildTaskTree = function (sourceTasks) {
        var tree = new TreeNode({
            value: "ROOT"
        });

        //  Avoid recursion: one stack holds the Task arrays still to be processed,
        //  the other holds the TreeNode that should receive those tasks as children
        var taskStack = [];
        var treeNodeTargetStack = [];

        //  Seed the stacks with the incoming tasks and the root node
        taskStack.push(sourceTasks);
        treeNodeTargetStack.push(tree);

        while (taskStack.length > 0) {
            var currentTask = taskStack.pop();
            var targetNode = treeNodeTargetStack.pop();

            var length = currentTask.length;
            for (var i = 0; i < length; i++) {
                var task = currentTask[i];
                var childNode = targetNode.addChild(task);

                if (task.tasks.length > 0) {
                    taskStack.push(task.tasks);
                    treeNodeTargetStack.push(childNode);
                }
            }
        }

        return tree;
    };

    return {
        buildTaskTree: buildTaskTree
    };
};

The function buildTaskTree accepts an array of Task objects. First we create a root TreeNode, and all other child TreeNodes will be added under this variable. Instead of using recursion, we create two stacks: “taskStack”, where we store the Task arrays to be processed, and “treeNodeTargetStack”, which holds the current TreeNode that will receive new children. Note that we “seed” the “taskStack” with the incoming array of Task objects, and “treeNodeTargetStack” with the “tree” variable that is the root of the tree we are building.

The while loop will always pull from the taskStack; more specifically, the first pass will have the top level tasks “Discuss Product Concept”, “Create Mockups”, “Setup POC environment”, “POC Development” from our project plan. We look at each item and add it to the current position of the tree. If the current Task has children we will want to do two things:

  1. Push the child tasks array onto the “taskStack”. When we execute the loop again, it will pick up this new entry and continue our processing.
  2. Push the newly created child TreeNode onto the “treeNodeTargetStack” so that we maintain the hierarchy between the tasks. The next time we execute the while loop, that TreeNode becomes the current target and we add to its children. This maintains our hierarchy relationships.

That’s it. Each subsequent pass of the while loop drives deeper into the Task hierarchy as each child’s tasks array is added to “taskStack”. When we’re done, we return “tree” as our TreeNode structure.
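
Assuming the nested _tasks array from the earlier example, using the builder is then just a couple of lines (we’ll reuse this _treeFuncs variable in the examples below):

var _treeFuncs = treeFuncs();
var projectTree = _treeFuncs.buildTaskTree(_tasks);

console.log(projectTree.isRoot());           // true
console.log(projectTree.children.length);    // one TreeNode per top level task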

How To Put TreeNodes To Good Use

Great. We have TreeNodes. Now what? Well, our goal was to create a structure that we could traverse while always knowing who our parent is and who our children are, as well as being able to determine what level of the hierarchy we are located at.  To answer questions that require us to filter, sort, etc., we will visit each node of the TreeNode structure, and at each stop we will perform a function that acts as our query and records the result for that node.  When the trip is done we’ll have a subset of our tree.

First the function that walks the TreeNode:


var traverseAndApply = function (treeNode, func, functionArgs) {
        var treeNodeStack = [];
        var rootArray = [];
        rootArray.push(treeNode);

        treeNodeStack.push(rootArray);

        while (treeNodeStack.length > 0) {
            var currentNodeList = treeNodeStack.pop();
            var length = currentNodeList.length;

            for (var i = 0; i < length; i++) {
                var currentNode = currentNodeList[i];

                //  Skip the synthetic "ROOT" node; apply func to every real node
                if (currentNode.parent != null) {
                    func(currentNode.value, currentNode, functionArgs);
                }

                if (currentNode.children.length > 0) {
                    treeNodeStack.push(currentNode.children);
                }
            }
        }
    };

The function traverseAndApply accepts three arguments. The argument “func” is a function, and it will be executed at each node of the TreeNode structure. “functionArgs” holds the items that will be passed into “func”. The remainder of this function may look familiar – the approach is the same as in buildTaskTree. Again, we are avoiding recursion and are using the stack approach. We push the TreeNode to an array, then push the array onto a stack. As we loop, we pull each TreeNode, fire off “func”, then progress to the next.

So let’s put this to use. Our first example will be to find all the top level tasks – in the parlance of our TreeNode, all the nodes where TreeNode.level() is equal to 1.


var treeNode = _treeFuncs.buildTaskTree(_tasks);

var matchTopLevelArgs = {};
matchTopLevelArgs.results = [];

var matchTopLevelFunc = function(value, node, funcArgs){
    if(node.level() == 1){
        funcArgs.results.push(value);
    }
};

_treeFuncs.traverseAndApply(treeNode, matchTopLevelFunc, matchTopLevelArgs);

And now let’s find all of Frodo’s tasks that are not top level tasks:

var treeNode = _treeFuncs.buildTaskTree(_tasks);

var frodoTasksArgs = {};
frodoTasksArgs.results = [];

var frodoTasksFunc = function(value, node, funcArgs){
    if((value.assignedTo.indexOf("Frodo") > -1) && (node.level() != 1)){
        funcArgs.results.push(value);
    }
};

_treeFuncs.traverseAndApply(treeNode, frodoTasksFunc, frodoTasksArgs);

In our jsFiddle you’ll find working versions of the above filter examples. Experiment around and see what you can come up with.

Future Considerations

Now that we have the basics down you can see that we have a very powerful tool. No doubt you have noticed that the hours for our project don’t make sense, as normally the duration of a task is comprised of the durations of its sub-tasks. Since we can visit each node, we can devise a function that calculates the duration for each level appropriately.
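
One possible sketch – not from the original post – treats a parent’s duration as the sum of its sub-tasks and only counts the leaves, using simple recursion over the TreeNode structure (a production version might reuse traverseAndApply instead):

var rollUpDuration = function (treeNode) {
    //  A leaf contributes its own duration; the synthetic ROOT contributes nothing
    if (treeNode.children.length === 0) {
        return treeNode.isRoot() ? 0 : (treeNode.value.duration || 0);
    }

    //  A parent's duration is derived from its children
    var total = 0;
    for (var i = 0; i < treeNode.children.length; i++) {
        total += rollUpDuration(treeNode.children[i]);
    }
    return total;
};

var projectTotal = rollUpDuration(treeNode);  // total hours for the whole project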

Another idea could be the ability to change a task into a sub-task, or maybe make a sub-task into a new parent.  Our ability to manipulate the TreeNode and its underlying data can go a long way for us.  So that’s what we will explore next time.  Until then, play the video below and enjoy some tree music.

This post is in our Javascript Primer series, a collection of articles aimed at .Net and back-end developers who are transitioning to client-side UX development.  In a previous installment, the topic of the publish-subscribe design pattern was featured, and this pattern fostered loosely coupled ViewModels.  It is recommended that you read that post as well, as this edition of our primer builds on those concepts.

A basic tenet of good programming is to break activities down into small components.  Smaller functions are easier to maintain.  In many instances the information that the users are working with is best presented in logical groups, dissected so that an improper decision can never be made.  A rich user interface may force you to break a larger object into several ViewModels.  But guess what?  If you need to break information down, you will also need to re-assemble this information to transmit back to the mother ship to be stored in your database.

Challenges for the ViewModels

There are several challenges ahead for you if you need your ViewModels to communicate and you want to retain the agility provided to you by loose coupling.  If you recall the architecture we established with the publish and subscribe design pattern, all our “pieces” are highly independent from one another.  But we now have a complication.  Independence also means blissful ignorance.  Each ViewModel only knows how to contribute its small portion of information for the glorious “mother object”.  The best it can do is yield up what it has and somehow, someway, something will piece things together.

Let’s consider a code example:

We have a simple project tracker that has a name, a roster of workers and a calendar that displays each worker’s start date.  Each tab is serviced by a ViewModel.  The “Project Info” tab shares the project name information with “Team”.  The tab “Team” shares the project team data with both the “Project Info” and the “Timeline” tabs.  This is accomplished using the publish and subscribe pattern that was detailed in this post.  You should review that post if you are unfamiliar with postaljs or with the publish and subscribe design pattern.

The challenge here is that we want a “Save” button that will easily gather the data from each tab and save it to our project object.   We also want this controlling activity to be flexible and allow each ViewModel to supply the data needed without too much orchestration: in other words, we just ask the ViewModel “What do you have for a project object?”, and each ViewModel fulfills its responsibility.

Pipe and Filter Pattern For Chaining Events or Commands

Many of you with a background in server side application development may recognize the pipe and filter pattern.  Simply said, it is a way to chain a set of operations that are to be performed in succession.  This diagram is a depiction of the pipe and filter:

[Diagram: Pipes and Filters]

The “Filter” is a function that performs one thing.  Our pipe is the “orchestrator” or director, in that it will forward to each filter in order.

For our purposes we will continue to use postaljs as our communication mechanism.  We’ll need a pipeline that will keep track of which filter to execute and forward data to each subsequent filter of the process chain.

var pipeline = {
  index: 0,
  filters: ["edit.getViewOneData","edit.getViewTwoData","edit.finalDestination"]
 };

The property “filters” contains the names of the postaljs topics that will kick off a function.  In this case we will need 3 subscriptions that listen for the items listed in the “filters” array.  Each process step, or filter, will be executed in the order listed in the “filters” array: in this case, “edit.getViewOneData” will be executed first.  The property “index” tracks which filter is being processed.  As each filter is executed, it will increment index, then use index to access the next topic in the chain of events.  Starting off the process is accomplished in this fashion:

// Now start up the pipeline, fire off the first step that will execute the first filter
 postal.publish({
  channel: "pipenfilter",
  topic: pipeline.filters[pipeline.index],
  data: pipeline
 });

//  A sample subscription.  This will be executed first since it is listed first in the pipeline.filters array
postal.subscribe({
  channel: "pipenfilter",
  topic: "edit.getViewOneData",
  callback: function (pipeline, env) {
    fetchViewOneData(pipeline);
}
 });

//  After a filter performs its work, it needs to call the next filter in the chain
var fetchViewOneData = function(pipeline){
  // ... perform activities with some data

  //  To forward to next step, increment the index
  pipeline.index++;

  //  Now forward to the next filter
  postal.publish({
    channel: "pipenfilter",
    topic: pipeline.filters[pipeline.index],
    data: pipeline
  });
};

For our particular scenario with the project UX above we will have to do a few simple things:

  1. The ViewModels “ProjectInfo”, “Timeline” and “Team” will need subscriptions to a “pipenfilter” channel
  2. Each ViewModel will need a corresponding filter function, so the “Team” ViewModel will get a “fetchTeamFilter” (see the sketch after this list)
  3. Add an endpoint in our $(document).ready section to act as a controller for the pipeline process.  Since we don’t have a controller this logic can sit here.  Naturally, for production code you’ll want to be a bit cleaner and place this in a controller object.
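
As a rough sketch of item 2 (the topic name “edit.getTeamData”, the pipeline.projectData property and the _projectTeam observable array are illustrative assumptions, not code from the demo), a “Team” filter could look something like this:

//  Hypothetical names: "edit.getTeamData", pipeline.projectData and _projectTeam
//  are assumptions for illustration only
postal.subscribe({
    channel: "pipenfilter",
    topic: "edit.getTeamData",
    callback: function (pipeline, env) {
        fetchTeamFilter(pipeline);
    }
});

var fetchTeamFilter = function (pipeline) {
    //  Contribute this ViewModel's slice of the project object
    pipeline.projectData = pipeline.projectData || {};
    pipeline.projectData.projectTeam = ko.toJS(_projectTeam);

    //  Forward to the next filter in the chain
    pipeline.index++;
    postal.publish({
        channel: "pipenfilter",
        topic: pipeline.filters[pipeline.index],
        data: pipeline
    });
};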

All of the additional changes are noted with a “New pipe and filter” comment, so you can search through the Javascript to see where the changes have been made.  Most of the updates are just a few lines, and the nice thing is that the code to forward to the next filter is the same each time.  Not much exciting stuff to be seen other than incrementing the index and passing the data to the next topic.

Where No One Has Gone Before (just sayin’)

Now that you can synchronize your data and assemble it back into a proper object, you can begin your next journey.  There are many long term benefits here that you will begin to realize.  What if you need to add an additional ViewModel, or what if you need to change which ViewModel displays the groupings of data?  Without our pipe and filter solution, you would need to change a lot of things.  With what we have described here, the most you have to do is add an additional filter name to the pipeline list and make sure that your additional ViewModel functions properly.  The other ViewModels are left untouched.

Another benefit is that you can control which functions are fired while allowing the ViewModels to do their thing.  What if you had a ViewModel that published changes to other ViewModels after it finished performing calculations?  By ordering the filters in the pipeline accordingly, you can allow the ViewModels to do that hard work and be assured that all aspects of your data are up to date.  Doing more with less code means you can test and maintain this code with confidence.

This is a continuation of the Javascript Primer series where we focus on topics essential for .Net or back-end developers who are transitioning to Javascript and client side development.  Many of the topics covered in this series will be related to producing web apps running in the browser.  Some design patterns and practices are also applicable to NodeJS / server-side Javascript apps as well.

Keeping IT Together

One complaint that many have when embarking on applications of medium complexity is that organizing your code is difficult; indeed, even with frameworks you can still end up with a giant god object that contains all your data-bindings, algorithms and helper methods.  If you take the attitude that Javascript is a second class, toy language where you simply store jQuery animation methods and AJAX calls, you quickly end up with files that can be 3,000 – 4,000 lines long!  Maintaining that is beyond torture.

A common design pattern used to alleviate this issue is the Model View ViewModel (MVVM) pattern.  The goal of MVVM is to divide the logic of your application into discrete components for UX, business logic, and data-binding activities.  KnockoutJS is a popular framework that supports this approach.  Addy Osmani’s “Learning JavaScript Design Patterns” defines the primary components of MVVM, and it is a great primer to read if you are unfamiliar with these terms.  For our purposes:

Model will contain the data elements, such as FirstName, LastName, etc.  Essentially this is a building block for your front-end that will “model” entities such as user, customer, project, etc.

ViewModel will contain the behavior for the UI.  This can be business logic and formatting of data, bridging the gap between data coming from a server and what is useful for the user to interact with.  The ViewModel will also use the Models.

View will be the HTML document.  This will also contain the directives for what to present to the user, and will interact with the ViewModel.
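
To make those three roles concrete, here is a tiny sketch with KnockoutJS (the names are illustrative, not from the demo):

//  Model: plain data
var Person = function (firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
};

//  ViewModel: behavior and data-binding glue around the Model
var PersonViewModel = function () {
    var _person = new Person("Frodo", "Baggins");

    var _fullName = ko.computed(function () {
        return _person.firstName + " " + _person.lastName;
    });

    return { fullName: _fullName };
};

//  View: the HTML declares what to display, e.g.
//  <span data-bind="text: fullName"></span>
ko.applyBindings(new PersonViewModel());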

We Still Have A Problem

So even with the MVVM pattern at our disposal we still have to fight bloated ViewModels.  From here on out we will use a sample project for our discussion where we want to maintain project information comprised of a project name, project description and project notes, team members, and a simple timeline / calendar function.  Sounds straightforward enough.  On our UI we will have a tab for each grouping of information, and as information changes on a tab, we want that information available in each subsequent tab.  Again, nothing too earth shattering here.  It should look something like the JSFiddle below.  (NOTE:  For the sake of this exercise all code has been placed in the Javascript section of our JSFiddle.  You should split your code into modules so they can be easily managed.)

Each tab on this screen has a logical grouping of information.  For the sake of argument let’s say that we decide to implement this UI with just one ViewModel for all three tabs.  For databinding we will be using KnockoutJS.  Starting out we have just one ViewModel – ProjectViewModel – that handles all updates; more specifically, we can take advantage of the data-binding facilities here, so when a team member is added, we can detect this change and automatically execute an update to the calendar.

We have three tabs – “Project Info” is pretty simple, and it uses the computed observable teamSize to display the count of the project team. You’ll note that any changes to “Project Name” will be reflected on the “Team” tab.   The “Team” tab allows you to add a team member.  To support updating the calendar with the team member start date we have created a subscription to the projectTeam observable array.  The “Timeline” tab has a calendar that displays the team member start date, and this is updated each time we add a new team member.  Also, any changes to the number of team members will be reflected on the first tab, “Project Info”.

Indeed this works well, since we have the advantage of one ViewModel and can benefit from easily sharing one set of data between all three tabs. But we still have an issue should we start adding more functionality and increasing the properties and functions in the ProjectViewModel.  Soon maintaining the ViewModel will become difficult, and testing discrete portions of functionality will rapidly become painful.  We need to be able to isolate or reduce the impact that alterations made to the ViewModels have on each other.  This means adhering to the principle of “loose coupling”.

So what does it mean for us to create a ViewModel per tab?  The biggest issue now will be communicating between each ViewModel, and as always, our mantra should be “Keep it simple.  Please.  Pretty please.” So how do we achieve communicating data updates without creating three ViewModels that are still highly dependent on each other?  Our solution is to implement a subscribe and publish message architecture.  While this sounds intimidating, it simply means we will create each ViewModel with a method – a “subscription” – that will wait for a message that has the data we need.  As a corollary, each ViewModel needs a way to “publish” any changes to data that occur.

Super Simple Message with PostalJS

PostalJS is a very simple messaging framework that facilitates sub / pub messaging in a unique way.  In particular, PostalJS allows us to set up a subscription that remains “ignorant” of how the message is generated.  There is no “pre-registration process” to map out a route between ViewModels; on the contrary, your Javascript object simply declares that it will update when a message arrives.  This also means that several ViewModels can subscribe to an event and the publisher doesn’t care.

Here is a brief sample taken from github:

var subscription = postal.subscribe({
        channel: "orders",
        topic: "item.add",
        callback: function(data, envelope) {
            // `data` is the data published by the publisher.
            // `envelope` is a wrapper around the data & contains
            // metadata about the message like the channel, topic,
            // timestamp and any other data which might have been
            // added by the sender.
        }
    });

postal.publish({
        channel: "orders",
        topic: "item.add",
        data: {
            sku: "AZDTF4346",
            qty: 21
        }
    });

Looks pretty cool.  And simple.  So for us, our ViewModels will remain ignorant of each other, and are only aware that someone, something has sent data to them.  Another nice by-product is that should one of the ViewModels fail, the impact is greatly reduced.  So should the ViewModel with the calendar fail, the other tabs could still collect data and function.  Better yet, adding new features to a ViewModel is going to be easier to achieve since we can test in isolation, and if the new features do not interrupt our sub / pub messaging, we can feel confident that new changes can be rolled into production without bringing down the entire application. Let’s look at the results, then examine the changes to our app’s architecture.

A Brave New World

If you open the Javascript tab of the JSFiddle, you’ll see that we have broken up the ProjectViewModel into 3 ViewModels:  TeamViewModel, TimelineViewModel, and of course the remnants of ProjectViewModel.  We’ll take the updates one at a time, focusing on the feature in the UI and working back to the code.

Project Name:  As before, entering the project name in the “Project Info” tab updates the title in the “Team” tab.  Same as before, the update happens when the input box loses focus.  Now let’s focus on the code.  Open up the Javascript in the JSFiddle, and you’ll see some drastic changes.  In the ProjectViewModel we have created a Knockout trigger that publishes a message:

_projectName.subscribe(function(newValue){

        //    Now use PostalJS to push message out
        postal.publish({
            channel: "project",
            topic: "edit.projectNameChange",
            data: {
                projectName: _projectName()
            }
        });
    });

So what we are doing here is telling Knockout “Anytime there is a change to the observable _projectName, fire off a function that publishes a piece of data with the new value of _projectName”.  That’s it.  ProjectViewModel has fulfilled its responsibility and told anyone who will listen that a change has taken place.  Now find the TeamViewModel object.  Here we have a subscription that waits for a message containing _projectName updates:

var projectNameChangeSubscription = postal.subscribe({
        channel: "project",
        topic: "edit.projectNameChange",
        callback: function(data, envelope) {
            _projectName(data.projectName);
        }
    });

We get the data, and merely update an observable on the TeamViewModel.  It should be apparent by now that neither ProjectViewModel nor TeamViewModel “knows” about the other.  ProjectViewModel says “_projectName update ready” and whoever can handle it, does so. TeamViewModel just grabs the update and uses it.  That’s it.  The messaging works by creating a “channel” – in this case “project” – and further defining the message with a topic – “edit.projectNameChange”.  This means you can have a single channel that supports multiple topics and narrowly define your event messages.

ProjectTeam Update:  Play around with the “Team” tab and add some team members with different dates.  It operates just as before.  Under the hood, the code in TeamViewModel that handles the publishing is:

_projectTeam.subscribe(function(){
        //    this is how postaljs publishes events
        postal.publish({
            channel: "project",
            topic: "edit.teamUpdate",
            data: {
                projectTeam: ko.toJS(_projectTeam)
            }
        });

    });

As before, we are asking KnockoutJS to watch for changes to the _projectTeam array, and when a change occurs, we publish a different message.  The portion of code – topic: “edit.teamUpdate” – distinguishes this event from the previous one.  We simply take our observableArray, convert it to a plain Javascript array and throw it out to whoever wants it.  If you look in both ProjectViewModel and TimelineViewModel you’ll see subscriptions that handle this topic.  This brings us to another important aspect of messaging and postaljs’ strength:  should we want to add more ViewModels that need _projectTeam updates, we only need to subscribe to the topic “edit.teamUpdate”.  We could subscribe 50 more times if we wanted.  The publisher / source ViewModel does not need any alteration, and this achieves our goal of loose coupling.

What Are The Other Benefits?


Looking at our ViewModels, you may have noticed that we are exposing less in the return statements, and with each ViewModel handling less functionality they become simpler to read and maintain.  Imagine that you needed to support 20 different data entry input boxes – one monster ViewModel would have a huge return statement.  It is hard to keep things straight in one “god” class.

With PostalJS and event messaging we no longer need to expose an entry point in our objects for data to be passed.  The subscriptions allow the ViewModel to handle receiving update messages internally.  A quite common scenario would be responding to an AJAX update.  With PostalJS you can avoid calls to each ViewModel when new data arrives from the server.  Simply publish to the appropriate topic and let the ViewModels respond as they need.
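
For instance, a rough sketch of that AJAX scenario (the url “/api/project” and the shape of the server response are assumptions for illustration) could simply reuse the topic we already have:

//  The url and response shape below are illustrative assumptions
$.getJSON("/api/project", function (serverData) {
    //  One publish; any ViewModel subscribed to "edit.teamUpdate" reacts on its own
    postal.publish({
        channel: "project",
        topic: "edit.teamUpdate",
        data: {
            projectTeam: serverData.projectTeam
        }
    });
});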

“This. Is Not. For. Practice!” – a phrase used to emphasize that you must get things right.  Mentors have used it to mean “Get serious, now!”.

For years jQuery has been a tremendous productivity boost for client-side development; so successful, in fact, that it gave rise to the Single Page Application (SPA), where a good portion of your application logic is now written in Javascript. Soon, so soon, we all learned that the increased complexity of our client side solutions required good organization and a better architecture.  Time to get serious about clean code.

If you are a .Net developer, then many times you can fall into the ECMAScript trap: all ECMAScript must be the same.  What I know from C# must be applicable to Javascript, ’cause it looks the same. Sad news: it’s not. Javascript executes in a much different environment. And to make things more confusing, close cognates such as this are not close enough. “this” in C# is a reference to the current object. “this” in Javascript is, well, it all depends. “this” in Javascript changes context depending on many factors.  A good reference is here at Mozilla.  Let’s create a quick summary, and where relevant we’ll talk about the areas that impact programming practice:

Global Context

When you are outside any function, you are in what is referred to as the global context.  “this” will refer to document.  So if you fire up the console and perform this test:


console.log(this == document); // true.

And in web browsers this also equals the window.  So:


console.log(this == window); // true.

Yikes!  Wait there’s more.  Try this little piece:


var yikes = "ouch";

console.log(window.yikes); // ouch

window is our global memory space.  So think back to any client side code you have written, and this should be very familiar to you:


<script type="text/javascript" src="scripts/jquery.min.js"></script>

This loads jQuery into the global memory space.   Go look at window and you will see “window.$” and “window.jQuery”.  You guessed it, jQuery is now loaded into the global namespace.

This may seem like a digression, but we need to pause at this very important point.  You must be really careful what you put into this global space.  As you grow in Javascript capability you will eventually develop your own utils or libraries.  So consider this very innocent utility file:

  function objectToArray(sourceObj) {
    var results = [];

    for (var key in sourceObj) {
      if (sourceObj.hasOwnProperty(key)) {
        var newObj = sourceObj[key];
        newObj.objectKey = key;
        results.push(newObj);
      }
    }

    return results;
  };

function localize(t) {
  var d = new Date(t + " UTC");
  return d;
}

function parseDecimal(d, zeros, trunc) {
  if (d == null)
    return 0;

  if (typeof d == "number") {
    return parseFloat(d);
  }

  d = d.replace(/[a-zA-Z\!\@\#\$\%\^\&\*\(\)\_\+\-\=\{\}\|\[\]\\\:\"\;'\<\>\?\,\/\~\`]/g, "");
  while (d.indexOf(".") != d.lastIndexOf(".")) {
    d = d.replace(/\./, "");
  }

  if (typeof zeros == 'undefined' || zeros == "") {
    return parseFloat(d);
  }
  else {
    var mult = Math.pow(10, zeros);
    if (typeof trunc == 'undefined' || (trunc) == false)
      return parseFloat(Math.round(d * mult) / mult);
    else
      return parseFloat(Math.floor(d * mult) / mult);
    }
};

All of these functions are now attached to the global namespace, or what is called “polluting the global namespace”.  Most likely a utility class will have 20 – 30 functions, not the mere three listed above.  These functions will remain in memory until the user refreshes or navigates away from the page.  With a small app this will be no problem, but anything with a moderate amount of code will soon balloon and consume memory.  And in some cases, such as mobile friendly websites, you want to conserve as much memory as you can, only loading the objects that you need and then allowing them to be garbage collected.
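
One way to keep that utility file off the window object is to wrap it in a single namespace object; a minimal sketch (“myUtils” is just an illustrative name):

var myUtils = (function () {

    //  Same helpers as above, but now they are locals of the function expression
    var localize = function (t) {
        return new Date(t + " UTC");
    };

    var objectToArray = function (sourceObj) {
        var results = [];
        for (var key in sourceObj) {
            if (sourceObj.hasOwnProperty(key)) {
                var newObj = sourceObj[key];
                newObj.objectKey = key;
                results.push(newObj);
            }
        }
        return results;
    };

    //  Only this one object is attached to the global namespace
    return {
        localize: localize,
        objectToArray: objectToArray
    };
})();

myUtils.localize("2013-06-01 12:00:00");  // window only gains "myUtils"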

Wow – we’ve only covered “this” in a global context, and if your head isn’t a little numb, don’t worry, do another shot and see if it gets worse!

Function Context – Simple Call

Here we get into strange territory, as this is the rule:  “this” will depend on how the function is called.  Consider this code:


var myFunc = function(){
   return this;
 };

myFunc() == window;

Since we are not using strict mode “this” must refer to something so it will default to the global object.  Changing the code to:


var myFunc = function(){
  "use strict";
  return this;
 };

myFunc() == "undefined";

Function Context – As An Object Method

The rule here is “this” refers to the object that the method is called on.  For example:


var mathDoer = {
  someNumber: 13,

  addSomeNumber: function(addMe){
    return this.someNumber + addMe;
  }
};

mathDoer.addSomeNumber(7); // returns 20

You’ll note that both mathDoer.someNumber and mathDoer.addSomeNumber() are public in scope.  When first developing in Javascript it may be tempting to create your objects as depicted below, and while it is legitimate, it will lead to clutter very quickly:


var mathDoer = {};

mathDoer.someNumber = 13;

mathDoer.someArray = [];

mathDoer.addSomeNumber = function(){ ...};

Imagine this is a non-trivial object, and you will have a cluttered mess.  Also, all your methods and properties are public, which you may not want to expose.

Function Context – As A Constructor

When used in a constructor “this” will refer to the object that is being instantiated.  This is best illustrated with the following:


var mathDoer = function(){
  this.someNumber = 13;

  this.addSomeNumber = function(addMe){
    return this.someNumber + addMe;
  };
};

var mathIsFunObj = new mathDoer();

mathIsFunObj.addSomeNumber(7); // returns 20

Many of you may be coming to Javascript via KnockoutJS, a data binding library that provides two way binding between the DOM and Javascript objects.  Many of the KnockoutJS samples use this “style” for creating ViewModels:


var mathDoerViewModel = function(){
  this.justAnInt = 5;
  this.someNumber = ko.observable();

  this.addSomeNumber = function(addMe){
    return addMe + this.someNumber();
  };
};

var vm = new mathDoerViewModel();
ko.applyBindings(vm);

Again, our observables and functions are public in scope.  If you have a separate file for this ViewModel and simply include it with the HTML your “vm” variable is now added to the window object and part of the global memory space.  This can be considered good or bad.  From a diagnostics perspective, you can sit at the console and see what our ViewModel is doing by simply issuing commands like “vm.someNumber(45);”  Good for debugging.  This is also considered bad by others for two reasons: everything is public, and this can lead to bloated objects that are hard to use; and, you have added yet another item to the global memory space.  Imagine you have several ViewModels for a single page.  Things can get confusing very quickly.

So how can you make items private?  Remember that the scope of “this” when used in a constructor is a reference to that object – in our case that object is mathDoerViewModel.  If you were merely to declare a variable in the constructor, the scope of that variable would be local to the constructor.  This “hides” the variable from contexts that are outside the scope of the constructor.  That would look like this:


var mathDoerViewModel = function(){
  var _justAnInt = 5; //this makes the variable private

  //  function is local in scope
  var privateFunction = function(){

    //  our locals are available as well as methods and properties from the object ("this")
    return _justAnInt + this.someNumber();
  };

  this.someNumber = ko.observable();

  this.addSomeNumber = function(addMe){
    return addMe + this.someNumber();
  };
};

From the console you can still interact with the instance’s someNumber() and addSomeNumber(). We also have private functions and variables now: _justAnInt and privateFunction. For the .Net developer these conventions should have a more familiar feel.

When you scour the web for Javascript code, you’ll find that many of the Javascript heavies will use a different standard for variable / scope hiding. We touch on that briefly here before we move on.  In Javascript, your constructor returns a reference to the object you are creating.  In other words, your constructor will return a version of “this” that refers to the objects properties and functions.  However, your constructor can also return an object.  This gives you great flexibility, and makes the  “Module Reveal Pattern” possible.  First here is what our ViewModel would look like using the Module Reveal Pattern:


var mathDoerViewModel = function(ko){

  var _justAnInt = 0;
  var _someNumber = ko.observable();

  var privateFunction = function(){
    return _justAnInt + _someNumber();
  };

  var addSomeNumber = function(addMe){
    return addMe + _someNumber();
  };

  var setJustAnInt = function(value){
    _justAnInt = value;
  };

  var showMeTheInt = function(){
    return _justAnInt;
  };

  //  We return an object that refers to items we want to be public
  return {
    someNumber: _someNumber,
    addSomeNumber: addSomeNumber,
    setJustAnInt: setJustAnInt,
    showMeTheInt: showMeTheInt
  };
};

Looks a little different. In our constructor we are creating local variables and local functions, and in fact our observable – _someNumber – is now a local variable. But we “reveal” the items that we want to be public with the “return” statement. So in this case, we return an object whose someNumber property references _someNumber, along with addSomeNumber, setJustAnInt and showMeTheInt. As a result, we can create:

  var vm = new mathDoerViewModel(ko);

  // Set value of someNumber
  vm.someNumber(35);
  vm.addSomeNumber(5); // returns 40

  //  Let's use our setter
  vm.setJustAnInt(12);
  vm.showMeTheInt(); // returns 12

Notice that we are passing in a “ko” variable in the constructor call. This serves many purposes: firstly, it makes the ViewModel modular, and allows us to load the module asynchronously with AMD tools like requirejs or curljs; secondly, the constructor signature lets us know that we are depending on “ko” to make our ViewModel work; finally, it gives us a tiny speed boost now that “ko” is local, since Javascript does not have to search up the scope chain to find it.
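
As a sketch of that AMD idea (the module id “knockout” assumes you have mapped KnockoutJS in your requirejs config), the same ViewModel could be wrapped like this:

//  "knockout" here assumes a requirejs paths entry pointing at the ko library
define(["knockout"], function (ko) {

    //  The constructor receives "ko" from the loader instead of reaching into window
    var mathDoerViewModel = function () {
        var _someNumber = ko.observable();

        var addSomeNumber = function (addMe) {
            return addMe + _someNumber();
        };

        return {
            someNumber: _someNumber,
            addSomeNumber: addSomeNumber
        };
    };

    return mathDoerViewModel;
});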

You should avail yourself of the fantastic book “Learning JavaScript Design Patterns” by Addy Osmani.  He’s been kind enough to place a copy online.  This will help you understand some code that would otherwise seem very strange compared to what you are used to in C#.

Function Context – As A DOM Handler

Moving on to the DOM, “this” takes on a different context.  You’ll most likely be familiar with this common jQuery statement:


$(".apparea").on("mouseover", ".milestone-border", function (event) {
//   here this means the DOM element with a class of "milestone-border"

$(this).addClass("selected-children");
 });

The rule here is that “this” means the DOM object that fires the handler.  So in this case “this” is equal to event.currentTarget.
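
As a quick check, here is a small sketch reusing the handler above:

$(".apparea").on("mouseover", ".milestone-border", function (event) {
    //  For a delegated handler, "this" and event.currentTarget are the same element
    console.log(this === event.currentTarget);  // true

    $(event.currentTarget).addClass("selected-children");
});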

Go Forth And Create

By now your head should hurt a little; with all these nuances it’s a wonder that you don’t feel more beaten up.  That’s where the complexity of web development is under appreciated.  Hopefully we have covered enough here for you to save some time debugging, designing, or trying to understand some of the vast Javascript frameworks, utils and libraries that are out there.  If you are coming from .Net and are new to Javascript, you can avoid the trap that many have fallen into of creating monstrous Javascript objects of pure public functions and properties jammed into a single file.  That trap leads to a deep valley of frustration when it comes time to refactor your code after you realize you have a hard time maintaining things.  If you can keep your code modular and have things reside in separate files, you’ll be better off.

We covered “this” in order to reveal some deeper yet vital nuances of Javascript.  Mastering “this” and the related topics above is going to help.  A lot.  It brings Javascript from being a “toy” to a really productive platform, because you can avoid some pitfalls that slow you down.  In future posts we’ll continue to build on this foundation as we focus on modular design and asynchronous loading.

For those of you who may not know, DataTables.Net is a fantastic jQuery plug-in that creates a data grid presentation and offers support for filtering and server-side paging solutions. Yep, define an endpoint web service, pump down your data and you are good to go. But the cool kids these days are into the No-SQL thing, and one of the great entries into the document based database arena is RavenDB. With Raven, you define your domain objects and just store them in the database, as Raven will do its magic and persist your objects as documents. Have a List&lt;Customer&gt; to store? Give it to Raven and it will persist the customers as documents in JSON format.

This post will show you how to combine the front end goodness of DataTables with the back end magic of RavenDB. The goal is to provide:

  • The ability to define a single class that indexes the data.
  • Control what data is selected by defining the columns, sort order and paging size in Javascript. In other words, DataTables will tell the server what it wants to pull back.
  • Provide support for filtering properties with a single search field, ala Google style.
  • Above all, save you time, make you a hero in front of your fans. :)

For the Impatient, Here’s the End Product

There’s a lot of ground that we’ll cover, but for those who want to see the light at the end of the tunnel, here’s what the end solution will look like. You may want to download the solution first so you can follow along in the code. First, your web service or controller will have the following:


[HttpPost]
public JsonResult GetTenants(string jsonAOData)
{
   var tenantPager = new DataTablesPager<Tenant, Tenant_Search>(DocumentStore);
   var results = tenantPager.PrepAOData(jsonAOData)
                 .FilterFormattedList();

   return Json(results);
}

// The core method that is used to get the data is in DataTablesPager.cs
public List<T> Filter(int pageSize, int pageIndex, string[] terms)
{
   var targetList = new List<T>();
   RavenQueryStatistics stats;

   using(var session = docStore.OpenSession())
   {
      if (terms[0].Length > 0)
      {
         targetList = session.Query<DataTablesReduceResult, TIndexCreator>()
                               .Customize(x => x.WaitForNonStaleResults())
                               .SearchMultiple(x => x.QueryFields, string.Join(" ", terms), 
                                                options: SearchOptions.And)
                               .Statistics(out stats)
                               .Skip(pageIndex)
                               .Take(pageSize)
                               .As<T>()
                               .ToList();

         this.totalDisplayResults = stats.TotalResults;

         session.Query<DataTablesReduceResult, TIndexCreator>()
                 .Statistics(out stats)
                 .As<T>()
                 .ToList();
         this.totalResults = stats.TotalResults;
}
// Code reduced for reading purposes.

Take note of the Generics on the constructor – Tenant is your domain object, Tenant_Search is the class that Raven will use to create the index for retrieving the data, as well as defining what properties you can filter on the object. We’ll cover indexing shortly along with some RavenDB background.

Your Javascript will be the following:



var otable;

$(document).ready(function(){
   otable = $("#tenantTable").dataTable({
               "bProcessing": true,
               "bSort": true,
               "sPaginationType": "full_numbers",
               "aoColumnDefs": [
               { "sName": "Name", "aTargets": [0], "bSortable": true, "bSearchable": true },
               { "sName": "Agent", "aTargets": [1], "bSortable": true, "bSearchable": true },
               { "sName": "Center", "aTargets": [2], "bSortable": true, "bSearchable": true },
               { "sName": "StartDate", "aTargets": [3], "bSortable": true, "bSearchable": true },
               { "sName": "EndDate", "aTargets": [4], "bSortable": true, "bSearchable": true },
               { "sName": "DealAmount", "aTargets": [4], "bSortable": true, "bSearchable": true }
               ],
               "oLanguage": {
               "sSearch": "Search all columns:"
            },
               "aaSorting": [[1, "asc"]],
               "iDisplayLength": 7,
               "bServerSide": true,
               "sAjaxSource": "GetTenants",
               "fnServerData": function (sSource, aoData, fnCallback) {

                      var jsonAOData = JSON.stringify(aoData);

                     $.ajax({
                       //dataType: 'json',
                       contentType: "application/json; charset=utf-8",
                       type: "POST",
                       url: sSource,
                       data: "{jsonAOData : '" + jsonAOData + "'}",
                       success: function (msg) {
                         fnCallback(msg);
                       },
                       error: function (XMLHttpRequest, textStatus, errorThrown) {
                         alert(XMLHttpRequest.status);
                         alert(XMLHttpRequest.responseText);

                       }
                     });
         }
   });

   otable.fnSetFilteringDelay(1000);
});

It’s actually longer than the .Net stuff!!! We’ll cover what this means as well.

Getting Data and Providing Search with RavenDB

This post assumes that you have installed RavenDB, can connect to it, know how to store your objects, and that you can perform queries with LINQ. There are some good tutorials, such as Rob Ashton’s video introduction to Raven, as well as a brief overview by Sensei (me). We’re going to focus on Raven’s inherent full-text search capability and rely on Raven’s built in paging mechanism to help us achieve our goals. While there is great capability that Raven provides, it is not SQL, and much of what you know about LINQ and LINQ to SQL will help you as well as paint you into a corner at the same time. We’ll cover those aspects too.

First off, RavenDB is built on top of the search engine Lucene.Net. It is a schema-less database, so up front we will need to identify how we want to fetch data, as the indexes provide super fast data retrieval. Raven indexes reduce the need to devote huge cpu cycles to processing a query, as the index is built from the documents and processed as a background operation. This operation is asynchronous and is performed by Lucene. Without this approach, any query would force a complete scan of all documents. A miserably slow scan. With indexes defined up front, Raven will work quietly to keep the indexes up to date when new documents are created. So why is this important? Well, what you think you are doing in LINQ:


var search = "Bonus";
var steps = session.Query<Step>()
                    .Customize(x => x.WaitForNonStaleResults())
                    .Where(x => x.State.StartsWith(search) || x.WorkflowName.StartsWith(search))
                    .ToList();

 

is really translated to Lucene syntax. So what we get in the end is State:Bonus OR WorkflowName:Bonus. While it is simple to write a query that includes all the properties of an object, if you had an object with 15 properties would you really want to create a monster statement with a ton of ||’s? Hell no! If you look in the TestSuite project of the source code there are a few examples of using pure LINQ queries. Check out the method “CanFilterAccrossAllStringProperties” and you will see where things were headed.

We want to be like Fonzie, and what was Fonzie? Correctomundo – he’s cool. A good solution would be to know what properties a domain object has, and perform a filter against those properties. In other words, it would be really helpful if we could write some code that looks like this:


var propertyFilterSteps = session.Query<Step>()
                                 .Customize(x => x.WaitForNonStaleResults())
                                 .Where(AllStringPropertiesFilter(search,
                                        "Answer,AnsweredBy,Id,Participants,WorkflowName,WorkflowType,"))
                                 .ToList();

Here we are using an Expression<Func<T>> to pass in a delimited list of property names, and with a little LINQ query against the class Step, we can generate a Lambda to process our filter. This is in the test method “CanFilterAccrossAllStringProperties”. It worked great until we needed to include DateTime properties. The code is included in the project, so you can look at it there.

So how do we achieve the goal of having a single text box driven search that will query across all types of properties, and when you type “Spock 2010” will query the properties that you specify for both values of “Spock” and “2010”? Let’s go back to Raven, as you can specify an index query by mapping what properties you want to be included in the index and how you want Raven / Lucene to parse the text for matching values in that index. Raven provides a class called “AbstractIndexCreationTask” where you define a Map / Reduce function to create your index. In other words, you are picking which properties are included in the index. You can define the output of the Map to be anything that you wish. This is held in a class that we’ll name ReduceResult, and we will query against the properties of that class. We want to tell Raven to take the significant properties and index them in a format that we can match our terms against. So we will create the following index that allows us to filter for any term. This is found in Step_Search.cs in the Index folder:


public class Step_Search : AbstractIndexCreationTask<Step, Step_Search.ReduceResult>
{
	public class ReduceResult
	{
		public string[] QueryFields { get; set; }

	// ... code eliminated for reading purpose
	}

	public Step_Search()
	{
		Map = steps =>
		from step in steps
		select new
		{
			QueryFields = new [] { step.State, step.Answer, step.AnsweredBy, step.WorkflowName,
			step.Created.ToShortDateString(), step.Created.Year.ToString(),
			step.Created.Month.ToString() + "/" + step.Created.Year.ToString()},
			DateCreated = step.Created,
			WorkflowName = step.WorkflowName,
			State = step.State
		};

	Indexes.Add(x => x.QueryFields, FieldIndexing.Analyzed);

// ... more code eliminated for reading purposes

So what we have done is create an index that has an array of strings. This array holds the text of our properties that we will match against. Raven has a method called Search that will perform a “StartsWith” style match against each object in the array. The call is .Search(x => x.QueryFields, “string to be searched”). If you take a look at the index we have done some additional things with dates: for one, we create a string representation in ShortDate format. So when the user knows the exact date they can enter it and the Pager will match it. But we want to make things as easy as possible, so we have also created string representations in mm/yyyy format so it’s easy for the users to filter if they only know the month and year of the item they are looking for. “I think it was April last year …”. This is a big help for those users who don’t recall exact details, as it allows them to quickly discover what they are looking for.

One last thing before we move on to making this work with DataTables. Raven provides the Search method that works with the IRavenQueryable collection. Take a look at the DataTablesPager.Filter method and you will see a SearchMultiple method. This is put in place to perform a search for multiple terms. In other words it will search for “Spock” and then chain a search for “2010” against the IRavenQueryable. Phillip Haydon came up with this approach that works with partial matches, as it will give Lucene the right syntax. Otherwise you end up with weird results, because you feed Lucene “spoc 201” and because of the tokens Lucene creates with the text analyzer it will not pick up what you need. Phillip’s brilliant approach bridges this gap by using an extension method to perform the chaining of search terms. This is found in the class RavenExtensionMethods.cs, and it basically tokenizes the search string, creates an array and makes an individual call to the Search() method for each member of the array. It allows us to perform advanced filtering such as partial matches like “spoc 201”. Try this out in the Tenant.aspx page of the WebDemo solution and you’ll see how it works.

Outta Breath Yet? Let’s Talk DataTables.Net!!!

Breathing hard yet? Good! There’s more to do – how does this work with DataTables.Net? DataTables uses the following parameters when processing server-side data:

Sent to the server:

  • iDisplayStart (int): Display start point
  • iDisplayLength (int): Number of records to display
  • iColumns (int): Number of columns being displayed (useful for getting individual column search info)
  • sSearch (string): Global search field
  • bEscapeRegex (boolean): Global search is regex or not
  • bSortable_(int) (boolean): Indicator for if a column is flagged as sortable or not on the client-side
  • bSearchable_(int) (boolean): Indicator for if a column is flagged as searchable or not on the client-side
  • sSearch_(int) (string): Individual column filter
  • bEscapeRegex_(int) (boolean): Individual column filter is regex or not
  • iSortingCols (int): Number of columns to sort on
  • iSortCol_(int) (int): Column being sorted on (you will need to decode this number for your database)
  • sSortDir_(int) (string): Direction to be sorted – “desc” or “asc”. (Note that the prefix for this variable is wrong in 1.5.x, where iSortDir_(int) was used.)
  • sEcho (string): Information for DataTables to use for rendering

Reply from the server

In reply to each request for information that DataTables makes to the server, it expects to get a well formed JSON object with the following parameters.

  • iTotalRecords (int): Total records, before filtering (i.e. the total number of records in the database)
  • iTotalDisplayRecords (int): Total records, after filtering (i.e. the total number of records after filtering has been applied – not just the number of records being returned in this result set)
  • sEcho (string): An unaltered copy of sEcho sent from the client side. This parameter will change with each draw (it is basically a draw count) – so it is important that this is implemented. Note that it is strongly recommended for security reasons that you ‘cast’ this parameter to an integer in order to prevent Cross Site Scripting (XSS) attacks.
  • sColumns (string): Optional – this is a string of column names, comma separated (used in combination with sName), which will allow DataTables to reorder data on the client-side if required for display
  • aaData (array of arrays): The data in a 2D array

DataTables will POST an AOData object. The class DataTablesPager.cs will handle parsing this object with the method PrepAOData. It’s responsible for determining what properties we are querying, how the data will be sorted, paging size, as well as including any terms for filtering. Because we have used generics, PrepAOData doesn’t care what object in your domain you are using as it is designed to read the properties and match those properties against the list of data items that DataTables has sent up to our application on the server. Again, our goal is to let DataTables dictate what to look for, and as long as we did our work when we created the index we should have great flexibility.

Let’s look at the Javascript again:

"aoColumnDefs": [
{ "sName": "Name", "aTargets": [0], "bSortable": true, "bSearchable": true },
{ "sName": "Agent", "aTargets": [1], "bSortable": true, "bSearchable": true },
{ "sName": "Center", "aTargets": [2], "bSortable": true, "bSearchable": true },
{ "sName": "StartDate", "aTargets": [3], "bSortable": true, "bSearchable": true },
{ "sName": "EndDate", "aTargets": [4], "bSortable": true, "bSearchable": true },
{ "sName": "DealAmount", "aTargets": [4], "bSortable": true, "bSearchable": true }
],

In the server side application we have Tenant_Search.cs, which has created an index with the properties Name, Agent, Center, StartDate, EndDate and DealAmount. The Javascript above is DataTables’ way of saying “Hey, server, give me information back in the form of an array of value pairs and, by the way, here are the data columns to use when you get me my stuff.” On the server side we don’t care what the order of columns in the grid will be, as the server assumes that DataTables will take care of that. And indeed, DataTables is supplying the request in the sName value pair. The server fetches it, spits it back to the browser and DataTables munches it. You can change the Javascript and leave your server application alone as long as you stick to using the fields you included in your index. Just like Fonzie, be cool.

But even cooler is the fact that Raven will handle paging for us: it has a built-in limit of returning up to 128 documents at a time. Given that its retrieval speed is very fast this will work very well for us. If you look at the Raven console for each page that you retrieve you will see a very low time for the fetches. Remember, there is very little processing for the query, as the index has already performed the heavy lifting. For an example of this, the page Tenants.aspx in the WebDemo solution will page and filter 13,000+ documents. It is lightning fast.
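As a rough illustration (this is not the actual DataTablesPager internals), the paging itself boils down to a Skip/Take over the index, with the grid’s iDisplayStart and iDisplayLength driving the values. The Tenant class and the “Tenant/Search” index name below are assumptions based on the Tenant_Search index mentioned above:

using System.Collections.Generic;
using System.Linq;
using Raven.Client;

public static class PagingSketch
{
    public static List<Tenant> FetchPage(IDocumentStore documentStore, int iDisplayStart, int iDisplayLength)
    {
        using (var session = documentStore.OpenSession())
        {
            return session.Query<Tenant>("Tenant/Search")   //  assumes the index built by Tenant_Search
                          .Skip(iDisplayStart)               //  starting row requested by DataTables
                          .Take(iDisplayLength)              //  page size requested by DataTables
                          .ToList();                         //  Raven caps a single fetch at 128 documents
        }
    }
}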

Has Your Head Exploded Yet?

This is a lot to digest. Source code is here, along with the means to create 13,000 documents that you can use for testing. Please note that you will be required to pull down the Raven assemblies/packages via NuGet; otherwise the download would be about 36 MB. Work on responding to sorting requests has been started, and hopefully you’ll want to see how that’s solved in a future post. But what we have accomplished is a very robust and easy way to display data from our document database, with low effort in the back end application.

Play with the code. The only way we make this better is to critique constructively, adapt to better ideas and grow! Some of the failed experiments have been included in the tests so you can see how things have progressed. They are marked as failures so you can focus on testing the DataTablesPager class. These failures are interesting, though, and can provide insight into how the solution was arrived at. Also, the first time you fire up the web site, the Global.ascx page will look for the test records and create them. This takes some time, so if you don’t want the wait, those sections are marked for you so you can comment them out and do what you need to. Enjoy.

Some gifts just keep on giving, and many times things can take on a momentum that grows beyond your expectations. Bob Sherwood wrote to Sensei and pointed out that DataTables.net supports multiple column sorting. All you do is hold down the shift key and click on any second or third column and DataTables will add that column to the sort criteria. “Well, how come it doesn’t work with the server side solution?” Talk about the sound of one hand clapping. How about that for a flub! Sensei didn’t think of that! Then panic set in – would this introduce new complexity to the DataTablePager solution, making it too difficult to maintain a clean implementation? After some long thought it seemed that a solution could be neatly added. Before reading further, you should download the latest code to follow along.

How DataTables.Net Communicates Which Columns Are Involved in a Sort

If you recall, DataTables.Net uses a structure called aoData to communicate to the server what columns are needed, the page size, and whether a column is a data element or a client side custom column. We covered that in the last DataTablePager post. aoData also has a convention for sorting:

bSortColumn_X=ColumnPosition

In our example we are working with the following columns:

,Name,Agent,Center,,CenterId,DealAmount

where column 0 is a custom client side column, column 1 is Name (a mere data column), column 2 is Center (another data column), column 3 is a custom client side column, and the remaining columns are just data columns.

If we are sorting just by Name, then aoData will contain the following:

bSortColumn_0=1

When we wish to sort by Center, then by Name, we get the following in aoData:

bSortColumn_0=2

bSortColumn_1=1

In other words, the first column we want to sort by is in position 2 (Center) and the second column (Name) is in position 1. We’ll want to record this somewhere so that we can pass it to our order routine. aoData passes all column information to us on the server, but we’ll have to parse through the columns and check whether one or many of the columns are actually involved in a sort request, and as we do we’ll need to preserve the order each column takes in the sort.

SearchAndSortable Class to the Rescue

You’ll recall that we have a class called SearchAndSortable that defines how the column is used by the client. Since we iterate over all the columns in aoData it makes sense that we should take this opportunity to see if any column is involved in a sort and store that information in SearchAndSortable as well. The new code for the class looks like this:


public class SearchAndSortable
{
	public string Name { get; set; }
	public int ColumnIndex { get; set; }
	public bool IsSearchable { get; set; }
	public bool IsSortable { get; set; }
	public PropertyInfo Property{ get; set; }
	public int SortOrder { get; set; }
	public bool IsCurrentlySorted { get; set; }
	public string SortDirection { get; set; }

	public SearchAndSortable(string name, int columnIndex, bool isSearchable,
								bool isSortable)
	{
		this.Name = name;
		this.ColumnIndex = columnIndex;
		this.IsSearchable = isSearchable;
		this.IsSortable = isSortable;
	}

	public SearchAndSortable() : this(string.Empty, 0, true, true) { }
}


There are 3 new additions:

IsCurrentlySorted – is this column included in the sort request?

SortDirection – “asc” or “desc” for ascending and descending.

SortOrder – the order of the column in the sort request. Is it the first or second column in a multi-column sort?

As we walk through the column definitions, we’ll look to see if each column is involved in a sort and record what direction – ascending or descending – is required. From our previous post you’ll remember that the method PrepAOData is where we parse our column definitions. Here is the new code:


// Sort columns
this.sortKeyPrefix = aoDataList.Where(x => x.Name.StartsWith(INDIVIDUAL_SORT_KEY_PREFIX))
								.Select(x => x.Value)
								.ToList();

// Column list
var cols = aoDataList.Where(x => x.Name == "sColumns"
							& string.IsNullOrEmpty(x.Value) == false)
						.SingleOrDefault();

if(cols == null)
{
	this.columns = new List<string>();
}
else
{
	this.columns = cols.Value
						.Split(',')
						.ToList();
}

// What column is searchable and / or sortable
// What properties from T is identified by the columns
var properties = typeof(T).GetProperties();
int i = 0;

// Search and store all properties from T
this.columns.ForEach(col =>
{
	if (string.IsNullOrEmpty(col) == false)
	{
		var searchable = new SearchAndSortable(col, i, false, false);
		var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
									.ToList();
		searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
		searchable.Property = properties.Where(x => x.Name == col)
										.SingleOrDefault();

		searchAndSortables.Add(searchable);
	}

	i++;
});

// Sort
searchAndSortables.ForEach(sortable => {
	var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
							.ToList();
	sortable.IsSortable = (sort[0].Value == "False") ? false : true;
	sortable.SortOrder = -1;

	// Is this item amongst currently sorted columns?
	int order = 0;
	this.sortKeyPrefix.ForEach(keyPrefix => {
		if (sortable.ColumnIndex == Convert.ToInt32(keyPrefix))
		{
			sortable.IsCurrentlySorted = true;

			// Is this the primary sort column or secondary?
			sortable.SortOrder = order;

			// Ascending or Descending?
			var ascDesc = aoDataList.Where(x => x.Name == "sSortDir_" + order)
									.SingleOrDefault();
			if (ascDesc != null)
			{
				sortable.SortDirection = ascDesc.Value;
			}
		}

		order++;
	});
});

To sum up, we’ll traverse all of the columns listed in sColumns. For each column we’ll grab the PropertyInfo from our underlying object of type T. This gives us only those properties that will be displayed in the grid on the client. If the column is marked as searchable, we indicate that by setting the IsSearchable property on the SearchAndSortable class. This happens in the ForEach over this.columns in the listing above.

Next we need to determine what we can sort, so we traverse the new list of SearchAndSortables we created. DataTables will tell us whether a column can be sorted with the following convention:

bSortable_ColNumber = True

So if the column Center were to be “sortable” aoData would contain:

bSortable_1 = True

We record the sortable state when we set IsSortable on each SearchAndSortable in the code listing above.

Now that we know whether we can sort on this column, we have to look through the sort request and see if the column is actually involved in a sort. We do that by looking at what DataTables.Net sent to us from the client. Again, the convention is to send bSortColumn_0=1 to indicate that the first column for the sort is the second item listed in the sColumns property. aoData will contain many bSortColumn entries, so we’ll walk through each one and record the order that column should take in the sort. That occurs in the inner ForEach over sortKeyPrefix, where we match the column index with the bSortColumn_x value.

We’ll also determine what the sort direction – ascending or descending – should be. When we find the matching sSortDir_ entry we get the direction of the sort and record this value in the SearchAndSortable.

When the method PrepAOData is completed, we have a complete map of all columns and what columns are being sorted, as well as their respective sort direction. All of this was sent to us from the client and we are storing this configuration for later use.

Performing the Sort


If you can picture what we have so far: we’ve basically created a collection of column names with their respective PropertyInfos and recorded which of these properties are involved in a sort. At this stage we should be able to query this collection and get back those properties in the order that the sort applies.

You may already be aware that you can have a compound sort statement in LINQ with the following statement:


var sortedCustomers = customer.OrderBy(x => x.LastName)
                              .ThenBy(x => x.FirstName);

The trick is to run through all the properties and create that compound statement. Remember when we recorded the position of the sort as an integer? This makes it easy for us to handle the messy scenario where, say, the second column in the grid is the first column of the sort. SearchAndSortable.SortOrder takes care of this for us. Just order the columns by SortOrder and you’re good to go. So that code would look like the following:


var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                    .OrderBy(x => x.SortOrder)
                                    .ToList();

sorted.ForEach(sort => {
    records = records.OrderBy(sort.Name, sort.SortDirection,
                         (sort.SortOrder == 0) ? true : false);
});

In the ForEach above we are calling our extension method OrderBy from Extensions.cs. We pass the property name, the sort direction, and whether this is the first column of the sort. This last piece is important as it will create either the “OrderBy” or the “ThenBy” for us. When it’s the first column, you guessed it, we get “OrderBy”. Sensei found this magic in a StackOverflow post by Marc Gravell and others.
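If you’re curious what such an extension looks like, here is a trimmed-down sketch of the idea. The project’s Extensions.cs follows Marc Gravell’s approach; this simplified version is for illustration only and is not the code from the download:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class QueryableSortExtensions
{
    public static IQueryable<T> OrderBy<T>(this IQueryable<T> source, string propertyName,
                                           string direction, bool isFirstColumn)
    {
        //  Build the key selector x => x.PropertyName as an expression tree
        var parameter = Expression.Parameter(typeof(T), "x");
        var property = Expression.Property(parameter, propertyName);
        var lambda = Expression.Lambda(property, parameter);

        //  The first sort column gets OrderBy/OrderByDescending; the rest get ThenBy/ThenByDescending.
        //  ThenBy assumes the source was already ordered by a previous call with isFirstColumn == true.
        bool ascending = string.IsNullOrEmpty(direction) || direction.ToLower() == "asc";
        string methodName = isFirstColumn
            ? (ascending ? "OrderBy" : "OrderByDescending")
            : (ascending ? "ThenBy" : "ThenByDescending");

        var call = Expression.Call(typeof(Queryable), methodName,
                                   new[] { typeof(T), property.Type },
                                   source.Expression, Expression.Quote(lambda));

        return source.Provider.CreateQuery<T>(call);
    }
}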

Here is the entire method ApplySort from DataTablePager.cs, and note how we still check for the initial display of the data grid and default to the first column that is sortable.


private IQueryable<T> ApplySort(IQueryable<T> records)
{
	var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
										.OrderBy(x => x.SortOrder)
										.ToList();

	// Are we at initialization of grid with no column selected?
	if (sorted.Count == 0)
	{
		string firstSortColumn = this.sortKeyPrefix.First();
		int firstColumn = int.Parse(firstSortColumn);

		string sortDirection = "asc";
		sortDirection = this.aoDataList.Where(x => x.Name == INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + "0")
										.Single()
										.Value
										.ToLower();

		if (string.IsNullOrEmpty(sortDirection))
		{
			sortDirection = "asc";
		}

		// Initial display will set order to first column - column 0
		// When column 0 is not sortable, find first column that is
		var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
											.SingleOrDefault();
		if (sortable == null)
		{
			sortable = this.searchAndSortables.First(x => x.IsSortable);
		}

		return records.OrderBy(sortable.Name, sortDirection, true);
	}
	else
	{
		// Traverse all columns selected for sort
		sorted.ForEach(sort => {
			records = records.OrderBy(sort.Name, sort.SortDirection,
									  (sort.SortOrder == 0) ? true : false);
		});

		return records;
	}
}

It’s All in the Setup

Test it out. Hold down the shift key and select a second column and WHAMO – multiple column sorts! Hold down the shift key and click the same column twice and KAH-BLAMO multiple column sort with descending order on the second column!!!

The really cool thing is that our process on the server is being directed by DataTables.net on the client. And even awesomer is that you have zero configuration on the server. Most awesome-est is that this will work with all of your domain objects: because we have used generics we can apply this to any class in our domain. So what are you going to do with all that time you just got back?

Of late, Sensei needs to keep a clear head.  That has meant learning to segment ideas and really, really, really focus on streamlined features.  This is hard.  Not because Sensei has a plethora of great ideas.  That would be a nice problem to have.  Many times in software development you end up like this guy:

This is the state where you have things you want to accomplish, yet even when you pare things down to the “essential”, other essential, critical factors must be taken into consideration before you create a mess.  This is life calling, and that string which suspends the giant sword you noticed hovering over your head is about to snap.  There is a good chance that you need more discipline, better execution tactics, far better honed chops, you name the metaphor.  Sensei has been at this game for over 22 years, and still the speed that thought takes to become reality is way too slow.

With great sarcasm you can remind yourself that some of the best work lies ahead, but the reality is that you still need to fight to be fluent; you have to claw your way to a Zen state of mind / no-mind.  So choose: the art of bushido or the art of BS.  Or maybe work smarter and enjoy life.

Before Sensei leaves you, ponder this: does “being done” mean that you’ve dropped off a product but have to get on the phone in order to make changes? And now that you’re struggling, why couldn’t you have figured out how to take the time to be fluent with your productivity when it was more critical?

 

On the quest to provide a rich user interface experience on his current project, Sensei has been experimenting with KnockoutJS by Steve Sanderson.  If you haven’t reviewed its capabilities yet, it would be well worth your while.  Not only has Steve put together a great series of tutorials, but he has been dog fooding it with Knockout: the entire documentation and tutorial set is built using Knockout.  Another fine source is Knockmeout.net by Ryan Niemeyer.  Ryan is extremely active on StackOverflow answering questions regarding Knockout, and also has a fine blog that offers very important insight on developing with this framework.

KnockoutJS is a great way to re-organize your client side code.  The goal of this post is not to teach you KnockoutJS; rather, Sensei wants to point out other benefits – and a few pitfalls – to adopting its use.  In years past, it’s been difficult to avoid writing spaghetti code in Javascript.  Knockout forces you to adopt a new pattern of thought for organizing your UI implementation.  The result is a more maintainable code base.  In the past you may have written code similar to what Sensei used to write.  Take for example assigning a click event to a button or href in order to remove a record from a table:

<table>
  <thead></thead>
  <tbody>
    <tr>
      <td><a onclick="deleteRecord(1); return false;" href="#">Customer One</a></td>
      <td>1313 Galaxy Way</td>
    </tr>
    <tr>
      <td><a onclick="deleteRecord(2); return false;" href="#">Customer Two</a></td>
      <td>27 Mockingbird Lane</td>
    </tr>
</tbody>
</table>

<script type="text/javascript">
function deleteRecord(id){
  //  Do some delete activities ...
}
</script>

You might even have gone as far as to assign the onclick event like so:

$(document).ready(function(){
  $("tr a").on('click', function(){
    //  find the customer id and call the delete record
  });
});

The proposition offered by Knockout is much different.  Many others much more conversant in design patterns and development than Sensei can offer better technical reasons why you should use Knockout.  Sensei likes the fact that it makes thinking about your code much simpler.  As in:

 
<td><a data-bind="click: deleteRecord($data)" href="#">Customer One</a></td>

Yep, you have code mixed in with your markup, but so what?  You can hunt down what’s going on, then switch to your external js file to review what deleteRecord is supposed to do.  It’s as simple as that.  Speaking of js files, Knockout forces you to take a more disciplined approach to organizing your javascript.  Here is what the supporting javascript could look like:

var CustomerRecord = function(id, name){
  //  The items you want to appear in UI are wrapped with ko.observable
  this.id = ko.observable(id);
  this.name = ko.observable(name);
};

var ViewModel = function(){
  var self = this;
  //  For our demo let's create two customer records.  Normally you'll get Json from the server
  self.customers = ko.observableArray([
    new CustomerRecord(1, "Vandelay Industries"),
    new CustomerRecord(2, "Wiley Acme Associates")
  ]);

  self.deleteRecord = function(data){
    //  Simply remove the item that matches data from the array
    self.customers.remove(data);
  };
};

var vm = new ViewModel();
ko.applyBindings(vm);

That’s it.  Include this file with your markup and that’s all you have to do.  The html will change too.   Knockout will allow you to produce our table by employing the following syntax:

<tbody data-bind="foreach: customers">
  <tr>
    <td><a href="#" data-bind="click: deleteRecord($data)"><span data-bind="text: id"></span></a></td>
    <td><span data-bind="text: name"></span></td>
  </tr>
</tbody>

These Aren’t the Voids You’re Looking For

So we’re all touchy feely because we have organization to our Javascript, and that’s a good thing.  Here’s some distressing news – while Knockout is a great framework, getting the hang of it can be really hard.  Part of the reason is Javascript itself.  Because it’s a scripting language, you end up with strange scenarios where you have properties that appear to have the same name but different values.  You see, one of the first rules of using Knockout is that observables ARE METHODS.  You have to access them with (), as in customer.name(), and not customer.name.  In other words, in order to assign a value to an observable you must:


customer.name("Vandelay Industries");

//  Don't do this - you create another property!!

customer.name = "Vandelay Industries";

What? Actually, as you probably have surmised, you get .name() and .name, and this causes great confusion when you are debugging your application in Firebug.  Imagine you can see that customer.name has a value when you hit a breakpoint, but it’s not what you’re looking for.  Sensei developed a tactic to help verify that he’s not insane, and it works simply.  When in doubt, go to the console in Firebug and access your observable via the ViewModel; so in our case you could issue:

vm.customer.name();

When name() doesn’t match your expectation you’ve most likely added a property with a typo.  Check with

vm.customer.name;

It sounds silly, but you can easily spend a half hour insisting that you’re doing the right thing when you’re really confusing a property with a method.  Furthermore, observable arrays can also be a source of frustration:

// This is not the length of the observable array. It will always be zero!!!
vm.customers.length == 0;

// You get the length with this syntax
vm.customers().length;

Knock ‘em inta tamarra, Rocky

Had Sensei known these two tips before starting he would have saved a lot of time.  There are many others, and they are best described by Ryan Niemeyer in his post 10 things to know about Knockout from day one.  Read this post slowly.  It will save you a lot of headache.  You may be familiar with jQuery and Javascript, but Knockout introduces subtle differences that will catch you off guard.  That’s not a bad thing, it’s just different than what you may be used to.  Ryan also makes great use of JS Fiddle and answers most of his StackOverflow questions by using examples.  Those examples are in many cases easier to learn from than the tutorial, since the scope is narrower than the instruction that Steve Sanderson gives.  It really allows you to play along as you learn.

This post focuses on getting started with RavenDB, so we’ll set aside our focus on workflows for a bit.  It’s included in the ApprovaFlow series as it is an important part of the workflow framework we’re building.  To follow along you might want to get the source code.

RavenDB is a document database that provides a flexible means for storing object graphs.  As you’ll see a document database presents you with a different set of challenges than you are normally presented when using a traditional relational database.

The storage “unit” in RavenDB is a schema-less JSON document (a brief example follows below).  Because you are working with documents you now have the flexibility to define documents differently; that is, you can support variations to your data without having to re-craft your data model each time you want to add a new property to a class.  You can adopt a “star” pattern for SQL as depicted here, but querying can become difficult.  Raven excels in this situation, and one such sweet spot is:

Dynamic Entities, such as user-customizable entities, entities with a large number of optional fields, etc. – Raven’s schema free nature means that you don’t have to fight a relational model to implement it.
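
As a concrete illustration (the values here are made up), the Person documents used later in this post would be stored as plain JSON along these lines, with the document key kept alongside the document rather than inside it:

{
    "FirstName": "Bob",
    "LastName": "Smith",
    "DepartmentId": 139
}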

Installing and Running RavenDB

The compiled binaries are easy to install.  Download the latest build and extract the files to a share.  Note that in order to run the console you are required to install Silverlight.  To start the server, navigate to the folder where you extracted the files and double click “Start.cmd”.  The server window will show its status once it is up and running.

The console will launch itself.

How To Start Developing

In Visual Studio, reference Raven with Raven.Client.Lightweight.  For CRUD operations and querying this will be all that you will need.

First you will need to connect to the document store.  It is recommended that you do this once per application.  That is accomplished with


var documentStore = new DocumentStore {Url = "http://localhost:8080"};
documentStore.Initialize();

Operations are carried out using the Unit of Work pattern, and in general you will be using this type of block:


using(var session = documentStore.OpenSession())
{
   //... Do some work
}

RavenDB will work with Plain Old C# Objects and only requires an Id property of type string.  An identity key is generated for Id during the session.  If we were to create multiple steps we would have identities created in succession.  A full discussion of the alternatives to the Id property is here.

Creating a document from your POCOs’ object graphs is very straight forward:


public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Id { get; set; }
    public int DepartmentId { get; set; }
    // ...
}

var person = new Person();

using(var session = documentStore.OpenSession())
{
   session.Store(person);
   session.SaveChanges();
}

Fetching a document can be accomplished in two manners:  by Id or with a LINQ query.  Here’s how to get a document by id:


string person = "Person/1";  //  Raven will have auto-generated a value for us.
using(var session = documentStore.OpenSession())
{
   var fetchedPerson = session.Load<Person>(personId);
   //Do some more work
}

You’ll note that there is no casting or conversion required as Raven will determine the object type and populate the properties for you.

There are naturally cases where you want to query for documents based on attributes other than the Id. Best practice dictates that we should create static indexes on our documents, as these will offer the best performance. RavenDB also has a dynamic index feature that learns from queries fired at the server, and over time these dynamic indexes are memorialized.

For your first bout with RavenDB you can simply query the documents with LINQ.  The test code takes advantage of the dynamic feature.  Later you will want to create indexes based on how you will most likely retrieve the documents.  This is different from a traditional RDBMS solution, where the data is optimized for querying.  A document database is NOT.

Continuing with our example of Person documents we would use:


int departmentId = 139;

using(var session = documentStore.OpenSession())
{
   var people = session.Query<Person>()
                          .Where(x => x.DepartmentId == departmentId)
                          .ToList();
}

In the source code for this post there are more examples of querying.
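
When you are ready to graduate from dynamic indexes, a static index over the Person documents could look roughly like the following. This is a hedged sketch using Raven’s AbstractIndexCreationTask, not something taken from the post’s source code:

using System.Linq;
using Raven.Client.Indexes;

public class People_ByDepartment : AbstractIndexCreationTask<Person>
{
    public People_ByDepartment()
    {
        //  Tell Raven which fields to index so queries by department no longer rely on a dynamic index
        Map = people => from person in people
                        select new { person.DepartmentId };
    }
}

//  Register all indexes in the assembly once, right after documentStore.Initialize():
//  IndexCreation.CreateIndexes(typeof(People_ByDepartment).Assembly, documentStore);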

Debugging, Troubleshooting and Dealing with Frustration

Given that this is something new and an open source project, you may find yourself searching for help and more guidelines.  One thing to avail yourself of while troubleshooting is the fact that RavenDB has a REST interface, so you can validate your assumptions – or worse, confirm your errors – by using curl from the command line.  For example, to create a document via http you issue:

curl -X POST http://localhost:8080/docs -d "{ FirstName: 'Bob', LastName: 'Smith', Address: '5 Elm St' }"

Each action that takes place on the RavenDB server is displayed in a log on the server console app.  Sensei had to resort to this technique when troubleshooting some issues when he first started.  This StackOverflow question details the travails.

Another area that threw Sensei for a loop at first was the way RavenDB writes and maintains indexes.  In short, indexing is a background process, and Raven is designed to be “eventually consistent”.  That means there can be latency between when a change is submitted and saved and when it has been indexed in the repository so that it can be fetched via queries.  When running tests from NUnit this code did not operate as expected, even though the console reported that the document was created:


session.Store(teamMember);
session.SaveChanges();    //  the document is not sent to the server until SaveChanges is called

int posttestCount = session.Query<TeamMember>()
                .Count();

According to the documentation you can overcome this inconsistency by declaring that you are willing to wait until RavenDB has completed its current write operation.   This code will get you the expected results:


int posttestCount = session.Query<TeamMember>()
              .Customize(x => x.WaitForNonStaleResults())
              .Count();

Depending on the number of tests you write you may wish to run RavenDB in Embedded mode for faster results.  This might prove useful for automated testing and builds.  The source code provided in this post does NOT use embedded mode; rather, you need to have your server running, as this gives you the opportunity to inspect documents and acclimate yourself to the database.
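
For reference, here is a minimal sketch of what embedded, in-memory usage looks like with the Raven.Client.Embedded package. Again, the post’s sample code does not do this; it is only shown to illustrate the option:

using (var documentStore = new EmbeddableDocumentStore { RunInMemory = true })
{
    documentStore.Initialize();

    using (var session = documentStore.OpenSession())
    {
        session.Store(new Person { FirstName = "Bob", LastName = "Smith" });
        session.SaveChanges();   //  no server window to watch - everything stays in process
    }
}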

There is much more that you can do with RavenDB, such as creating indexes that span multiple documents, assigning security to individual documents, and more.  This primer should be enough to get you started.  In the next post we’ll see how RavenDB fits into the ApprovaFlow framework.  Grab the source, play around and get ready for the next exciting episode.

 


ActiveEngine Software by ActiveEngine, LLC.