Monday, September 07, 2009

An Introduction to Applied Evolutionary Metaheuristics

Jonathan Anderson

First delivered by me at "Selected Topics on Complex Systems Engineering", an international symposium held in Morelia, Mexico, in October 2008. It was subsequently published in the European Journal of Operational Research: Applications of metaheuristics.


Abstract

This paper introduces some of the main themes in modern evolutionary algorithm research while emphasising their application to problems that exhibit real-world complexity. Evolutionary metaheuristics represent the latest breed of biologically inspired computer algorithms that promise to usefully optimise models that display fuzzy, complex and often conflicting objectives. Until recently, evolutionary algorithms have circumvented much of this complexity by defining a single objective to be optimised. Unfortunately, nearly all real-world problems fail to compress neatly into a single optimisation objective, especially when the problem being modelled is non-linear. Recent research into multi-objective evolutionary metaheuristic algorithms has demonstrated that this single-objective constraint is no longer necessary, and so new opportunities have opened up in many fields, including environmental health and sustainability.

With their proven ability to simultaneously optimise multiple, conflicting objectives, evolutionary metaheuristics appear well suited to tackling ecological problems. Such algorithms deliver a range of optimal trade-off solutions that allow an appropriate profit/cost balance to be selected according to the decision maker's imperatives. This paper concludes with an examination of a powerful multi-objective evolutionary algorithm called IC-SPEA2 (Martínez-García & Anderson, 2007) and its application to a real-world problem, namely the maximisation of net revenue for a beef cattle farm running on temperate pastures and fodder crops in Chalco, Mexico State. Some counter-intuitive results and their impact on the farm's overall sustainability are discussed.

Sunday, August 09, 2009

The Broken Waterfall

The traditional predictive approach to project management is being rejected in favour of an adaptive or Agile approach.

This is not a matter of buzz-words or faddish management technologies; instead it is a genuine commitment to help clients get the software they actually want - on time and within budget.

The Problem

There is a problem with the delivery of software. The more complex a project, the greater the chance the project will be delivered over budget and behind schedule. As a project grows in complexity there comes a point where this potential for failure becomes almost a guarantee. Most experienced project managers understand this and strain their sinews to prevent it from happening, and most experienced programmers have lived through the intense disappointment of seeing their work fail to achieve its initial promise. Yet time and again, despite the best efforts of genuinely talented and motivated people, software projects are delivered late, cost too much and do not function as the client expected. Why is this?

For each failed software project the problem typically turns out to be the plan. Now that may seem trivially obvious. Looking back over a failed project it is easy to suggest that if only the plan had been more precise then the project could have been more controlled and so more successful.

This is not correct.

The problem does not lie in the quality of the planning, the problem lies in the type of plan, specifically the attempt to create an up-front plan that covers the entire project life-cycle. This is not so obvious - how can you run a project without deciding what you need up-front?

To understand why up-front planning impedes the successful delivery of quality software it is first necessary to understand what is meant by a plan in this traditional sense, and then see how this concept can be dispensed with and replaced with a new type of planning mechanism.

What's in a Plan

At the start of a traditional project there is the familiar requirements-capture phase. This typically involves the writing of various specifications: a user specification that outlines the requirements in the language of the client, a functional specification that outlines the requirements in the language of the programmer, and then perhaps a fully detailed technical specification that describes the requirements in a pseudo programming language.

Once complete, these detailed specifications provide the basis for all future work. They allow predictions to be made about the project's costs as well as its anticipated schedule. Specification documents also serve a secondary function. They give both the client and the engineers a form of 'contract' that, upon project delivery, allows everybody to compare what was promised with what was actually delivered.

This up-front planning process is often called the 'waterfall' model. It is a highly structured methodology that steps through requirements-capture, analysis, design, coding, and testing in a strict, pre-planned sequence. Progress is generally measured in terms of deliverable artifacts: requirement specifications, design documents, test plans and code reviews.

The Waterfall is Broken

There are good reasons why traditional, up-front planning fails. Unfortunately these reasons tend to make both clients and engineers feel uncomfortable, so they are rarely spoken out loud.

Firstly, up-front planning means that the specification documents are written before any software is built. Experts, using all their intellectual powers and experience, attempt to imagine the software and in doing so mentally traverse all of its myriad details. Since no software has yet been built, the hypothetical assertions contained within these documents cannot be tested experimentally. In science, a hypothesis that cannot be tested is called pseudo-science; by the same token, a specification whose assumptions cannot be tested should be considered pseudo-planning.

Secondly, at the start of any reasonably complex project there is always an inescapable knowledge gap. This gap exists between:


  • The business knowledge brought by the client
  • The technical knowledge brought by the engineers

To begin with, these two bodies of knowledge do not mix well: the clients do not really understand the language of software engineering and the engineers do not really understand the language of the client's specific business. This will change as time goes on, and eventually the distinct bodies of information will mix and become one shared information landscape. However, at the start of a project, when traditional up-front planning occurs, this inevitable knowledge gap leads to two critical and incorrect assumptions:

1. The client knows what they want their new software to do
Many clients come to a project with a good idea of what they want; perhaps they have spent time and effort working this out, or perhaps they have a legacy system that shows them much of what they want and what they do not want. However at the start of a project the client cannot know what they want in sufficient detail to create a complete and precise plan. They can provide a business vision and they can provide business constraints, but they cannot state in detail the processes required to deliver their vision because they have not yet absorbed the necessary details of the engineering environment. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are commissioning.

2. The engineers know how to implement the client's business vision
Many engineers come to a project with a good idea of how to build business systems. They will have spent considerable time and effort building other, perhaps similar systems. However at the start of a project engineers cannot know how to implement the precise details of a specific business application because they have not yet absorbed the detailed business knowledge brought by the client. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are being asked to deliver.

Predictive planning fails because an accurate plan requires a genuine, non-superficial understanding of both the client's business knowledge and the engineer's technical knowledge. Traditional specifications are created at the start of a project when both parties have not had enough time to come to such an understanding. It takes much effort to synthesize the two bodies of knowledge into a coherent whole, far more than can reasonably be assigned during the requirements-capture phase.

This means that plans created at the start of the project cannot be more than partially informed guesswork. Given that the nature of complex systems makes them particularly sensitive to changes in small details, a plan for a complex system created with incomplete knowledge must perforce be a recipe for failure by degrees.

Does this really make up-front planning redundant? Is there a way to make the synthesis of the client and technical knowledge more efficient, perhaps by using advanced planning software? If this could be achieved then perhaps the planners could write effective up-front specifications that lead to accurate long-term costings and schedules.

Unfortunately there is another, more fundamental reason why detailed specifications must fail - regardless of their precision.

A specification is a description that attempts to outline features and functions in a natural language such as English. Yet software is actually written in the very precise syntax of a programming language. Engineers know that only computer code can truly express the details of a software vision; a natural language specification cannot be logically accurate enough. This means that natural language specifications must leave many implementation details open to interpretation, forcing the engineer to skilfully choose from a set of implied options. Yet complex systems are sensitive to precisely these sorts of technical details: different choices will lead to different systems and, as often as not, unfulfilled client expectations.

Therefore, even where a specification guesses correctly, the natural language descriptions will contain subtle choices and hidden contradictions. It is only when the fuzzy language of the specification is transformed into the precise reality of the code that these choices and contradictions become apparent.

This leads to a profound truth about the nature of specifications: Greater precision does not lead to greater control. Instead the greater the precision the more varied and subtle the choices and contradictions become.

Planning For Success

Understanding these fundamental flaws at the heart of traditional software delivery, many forward-looking managers and engineers are now moving towards a new project control methodology. In contrast to up-front or predictive planning, this new methodology uses repeated bursts of short-term adaptive planning.

Agile Software Development throws out long-term planning and with it the traditional concept of a specification. Instead agile projects start with everybody discussing and sharing a simple vision of the end product. The vision is really no more than a mission statement that, at this early stage, explicitly removes the need for engineers to fully understand the business and for the client to fully understand the technology.

This means that an agile project can get started almost straight away, with the absolute minimum of requirements-capture. Instead of a long, costly and ultimately self-defeating planning phase, the engineers get to work building the first version (iteration) of what will become a rolling beta. Armed with a very short term plan covering just one or two weeks of work, the engineers build the first iteration and deliver it to the client for discussion and criticism. The rolling-beta is still only a sketch, an outline of the most important functions and how they might fit together. Mistakes and incorrect assumptions will have been made, indeed given the knowledge gap they cannot be avoided, but the mistakes are identified and quickly eliminated as the rolling-beta is regularly assessed by the client and engineers in close collaboration.

Once the first iteration is signed-off then the process begins again, a new short term plan is created and work begins on the second iteration. This iterative development continues and as the knowledge gap closes so the requirements and hence the software become ever more detailed and coherent.

Embracing Function Creep

As this hands-on process continues the client comes to properly understand the technical environment, what is expensive and what is possible, and as their knowledge grows so they begin to see new possibilities.

Clients changing their minds or adding new features during development is traditionally called function creep and remains the enemy of traditional planners. Yet to suppress this is to deny that clients can learn and modify their expectations as they see their software progressing. Rather than trying to ignore the client's input, the agile iterative process welcomes it as new and valuable knowledge.

Thus the client is encouraged to re-specify their product as it is being written. This is the ultimate guarantee that, in the end, the client will be satisfied. It is hard for a client to be surprised or disappointed with their software if they have played an active part in designing and deciding the goals at each iteration.

Equally, as the iterative process progresses the engineers will also come to a genuine understanding of the business. This allows the engineers to discuss the business processes with the client in a manner that allows a useful exchange of knowledge to take place. Questions to the client can be appropriately framed using the business terminology both the client and the engineers now share. Since the frequent iterations and short-term planning mean that any incorrect business assumptions are quickly discovered, such mistakes can be corrected with the minimum of effort.

Engineers too, once they come to a genuine understanding of the business, can start to usefully contribute to the re-specification of the rolling-beta. New ideas and inspirations, whatever their source, can be welcomed, discussed and possibly incorporated as the software adapts over time.

Job Satisfaction

In summary, an agile software system evolves under the twin constraints of the client's business vision and the engineering environment's technical limitations. As the client and engineers come to a mutual understanding so new ideas bubble up and are incorporated as bad old ideas are identified and discarded. Before starting each iteration everybody discusses, negotiates and quickly reaches an understanding of what is actually required to fulfil the next set of short-term goals.

Thus an agile system organically grows its natural complexity out of a fundamental simplicity. As a result there are fewer surprises, the project risks are minimised and the client is more likely to get software that works.


Tuesday, April 28, 2009

Domain Driven RIA: Managing Deep Object Graphs in Silverlight-3

Using RIA Services, can a simple n-tier application manage a deep object graph with eager fetching, lazy loading and Silverlight databinding?




Downloads

Note: If you have no experience with RIA Services then you may prefer to start with my previous demo, A Domain-Driven DB4O Silverlight-3 RIA, which has links to RIA Services documentation and Microsoft presentations to get you started.

Introduction
RIA Services is a Rich Internet Application (RIA) framework that promises to streamline n-tier Line of Business application development. Reading through the RIA documentation and listening to the RIA team's presentations I was struck by two things:
  • How potentially useful this framework was.
  • How skewed the material was in favour of a data-driven design approach.
In this post I want to investigate how RIA Services can be used in a Domain-Driven context with a special focus on how it can help with the eager and lazy loading of domain entities.

Where is the Database?
I have chosen not to use a relational database in this example. This is because I want to ensure that my domain instance data can be easily stored and retrieved in the most efficient and maintainable domain-centric manner. I have therefore elected to use an object datastore, in this case DB4O, which provides all the ease, speed and functionality I need. The short sketch below gives a flavour of how little code that takes.
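Here is a minimal, illustrative DB4O snippet (my sketch, not code taken from this example's solution) showing a plain CLR object being stored and queried back with no mapping layer at all:

using Db4objects.Db4o;

// Open (or create) a local datastore file, store a domain object,
// then query it back with an ordinary predicate over the domain type.
using (IObjectContainer db = Db4oFactory.OpenFile("DataStore.db4o"))
{
    db.Store(new User { Id = Guid.NewGuid(), Name = "biofractal", Password = "x" });

    // Native query: no SQL, no mapping, just the domain type itself.
    IList<User> users = db.Query<User>(u => u.Name == "biofractal");
}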
The Technology Stack
  • Silverlight 3
    Handles the client-side application logic and user interface
  • RIA Services
    Provides the client<->server interaction and client-side domain
  • DB4O
    A server-side datastore for domain entity de/serialization
The Software
Here is a sneak preview of the software in action.



The Objectives
Using a combination of RIA Services and DB4O I want to test the following:
  • Server - When I fetch an instance of the aggregate root class I expect its inner hierarchy to be eagerly fetched.

  • Client - I want certain collections to be lazy-loaded and so remain unloaded until they are requested.

  • I do not expect to write my own WCF Service nor do I want to write any Data Transfer Objects (DTOs).

  • I want to databind my domain entities to Silverlight controls. I expect the controls to correctly display my eagerly fetched data as well as handle lazy-loaded data.

  • Finally, I want to prove that new domain entities can be created on the client and efficiently serialized to the server-side data-store as a batched unit-of-work.

The Domain


I have a small hierarchical domain consisting of a single User aggregate root that bounds a one-to-many inner collection of Holding entities, each of which contains a further collection of Transaction entities.

The test domain therefore consists of the following hierarchy:
  • User.Holdings[n].Transactions[n]
Here is the code for the domain hierarchy.

public abstract class Entity
{
    [Key]
    public Guid Id { get; set; }
}

public partial class User : Entity, IAggregateRoot
{
    public string Name { get; set; }
    public string Password { get; set; }
    private List<Holding> _holdings = new List<Holding>();
    public List<Holding> Holdings
    {
        get { return this._holdings; }
        set { this._holdings = value; }
    }
}

public partial class Holding : Entity
{
    public Guid UserId { get; set; }
    public string Symbol { get; set; }
    private List<Transaction> _transactions = new List<Transaction>();
    public List<Transaction> Transactions
    {
        get { return this._transactions; }
        set { this._transactions = value; }
    }
}

public class Transaction : Entity
{
    public Guid HoldingId { get; set; }
    public TransactionType Type { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

Domain Loading Strategy
  • Server
    Fetching a User should eagerly fetch all of its dependent Holdings. Each Holding should eagerly fetch all its dependent Transactions.

  • Client
    Fetching a User should eagerly fetch all of its dependent Holdings. However, due to the potential for large numbers of Transactions, each Holding should not fetch any Transactions; instead, the Transactions collection must be lazy-loaded.

The Datastore Setup
Before plunging into the RIA Services code I want to show you just how easy it is to use the DB4O object database.

In the web.config there are two application settings (shown below). 
  1. DataFile.Name
    Specifies the name of the DB4O datastore file held in the App_Data folder

  2. DataFile.GenerateSampleData
    Determines whether the datastore is reset with newly generated sample data whenever the Cassini web application is re-started (useful for testing). 
Important: Ensure the DataFile.GenerateSampleData setting is false if you want to retain any changes between application runs.

<appSettings>
    <add key="DataFile.Name" value="DataStore.db4o"/>
    <add key="DataFile.GenerateSampleData" value="true"/>
</appSettings>

public static void ServerOpen()
{
    if (db4oServer != null)
    {
        return;
    }

    var filename = Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), ConfigFileName);

    var generateSampleData = bool.Parse(GenerateSampleData);
    if (generateSampleData && File.Exists(filename))
    {
        File.Delete(filename);
    }
    db4oServer = Db4oFactory.OpenServer(GetConfig(), filename, 0);
    if (generateSampleData)
    {
        SampleData.Generate();
    }
}

In order to create the server-side eager fetch strategy outlined above, the DB4O datastore requires some configuration. The following GetConfig() method shows the Domain being scanned for types that implement IAggregateRoot, with DB4O instructed to automatically fetch, save and delete the inner dependencies for those types.

private static IConfiguration GetConfig()
{
    var config = Db4oFactory.NewConfiguration();
    config.UpdateDepth(2);
    var types = Assembly.GetExecutingAssembly().GetTypes();
    for (var i = 0; i < types.Length; i++)
    {
        var type = types[i];
        if (type.GetInterface(typeof(IAggregateRoot).Name) == null)
        {
            continue;
        }
        var objectClass = config.ObjectClass(type);
        objectClass.CascadeOnUpdate(true);
        objectClass.CascadeOnActivate(true);
        objectClass.CascadeOnDelete(true);
        objectClass.Indexed(true);
    }
    return config;
}

RIA Services
N-Tier applications are defined by the machine boundary that exists between the client and the server. Getting to grips with RIA Services begins by understanding how it tries to help you write applications that span that machine boundary. 

As you write your server-side domain code RIA Services tries to discover the way you intend to use this domain on the client. As it does so it generates a client-side version of your domain that fulfils those intentions. This means that you do not need to write a client-side version of your domain in order to use its features on the client, nor do you need to write any explicit mechanism for transferring domain instance data across the machine boundary (no WCF, no DTOs).

RIA Services discovers your intentions via a combination of Convention and Metadata. For example, I intend to utilize my User class on the client and so I need to be able to fetch User instances from the data store. This implies that somewhere I must write a server-side service method to perform the User fetch.

RIA Services simply asks that I put that User fetch service method in a class that derives from the RIA DomainService class and that I follow some simple naming rules for the method signature. For more information on these conventions see the .NET RIA Services Overview for the Mix 2009 Preview.

If I follow the prescribed conventions then RIA will be able to determine that I intend to utilize the User class on the client and so generate a client-side version of it. This generated version is not the same class as my 'real' server-side User class; it only has as much or as little functionality as I decide to share (see later), but it does allow the client code to operate as if it had access to the User class, so I can use it in my Silverlight code.

This is what the conventional User fetch method looks like.

[EnableClientAccess]
public class DataStore : DomainService
{
    public IQueryable<User> GetUser(string name, string password)
    {
        using (var db = DataService.GetSession<User>())
        {
            return db.GetList(x => x.Name.Equals(name) && x.Password.Equals(password)).AsQueryable();
        }
    }

    // ... other code
}

The presence of this method stimulates RIA into generating a client-side version of my User class; however, it will only carry over simple properties such as User.Name and User.Password. So what happens if I want to make client-side use of a more complex property such as the User.Holdings collection?

This is a new intention so I must tell RIA about it. Only then can RIA generate the appropriate client-side code to fulfil the new intention.

This is achieved in two steps.
  1. The Holding class must define a UserId property. When a new Holding is instantiated this property must be set to the Id of its parent User.

  2. The User.Holdings collection must be decorated with the appropriate attributes.
To decorate a server-side domain entity with attributes targeted at client-side behaviour seems impure, but fortunately RIA provides a pattern that brushes it all under the carpet and allows you to retain your domain-driven dignity.

First you must ensure the main User class is partial. This allows you to create a new partial User segment in a separate code file called User.meta.cs. You can then add the following code to that file. In this way you can keep all the RIA metadata tucked away in its own partial file segments.

[MetadataType(typeof(UserMetadata))]
public partial class User
{
    internal sealed class UserMetadata
    {
        [Include]
        [Association("User_Holdings", "Id", "UserId")]
        public List<Holding> Holdings { get; set; }
    }
}

You will note there are two attributes being used here. What are they doing? 
  • [Association]
    This attribute is informing RIA that the Holdings collection can be reconstructed on the client by comparing the User.Id to the Holding.UserId. When these match, the Holding belongs to the collection.

  • [Include]
    This attribute is more mysterious. Perhaps, like me, you might assume it means "Include this property in the generated code". This is not correct. In fact it means "Automatically recreate this collection on the client"; in other words, the client-side collection will be eagerly fetched and made available without any further intervention on your part. This is the behaviour we want for the User.Holdings collection, and it gives us our first clue about how we might set up the lazy loading for the Holding.Transactions collection.

RIA allows us to define the shape of our hierarchy on the client using a combination of convention (the fetch method signatures) and metadata (the [Include] and [Association] attributes). But a class must also define functionality, or it is just a DTO.

Can I pick and choose the functions I want to appear in the client-side versions of my domain entities?

Sharing Domain Functions
On the client I want to add a new Holding to my User.Holdings collection. Being a conscientious domain-driven coder I want to ensure that my code follows the Law of Demeter, which means I cannot reach into the Holdings collection directly like this:

User.Holdings.Add(...)

Instead I need to write a method to do this for me:

User.AddHolding(...)

This is easy to write for my server-side domain, but if I intend the same features to be available on the client I must tell RIA Services about those intentions and so allow it to generate the appropriate client-side code.
  1. Ensure the class with shared features is partial

  2. Put the shared code in a partial segment stored in a code file called MyClass.shared.cs

  3. Decorate the shared methods with the [Shared] attribute

Here is the code for the shared AddHolding method held in the User.shared.cs file.

public partial class User
{
    [Shared]
    public Holding AddHolding(Holding holding)
    {
        this.Holdings.Add(holding);
        return holding;
    }
}

More Shared Code

When I create a new Holding I would prefer to use a factory method found in my DomainFactory class. This is a useful method so I want it to be available on the client as well as the server. As it happens, the DomainFactory class also contains a number of methods I would like to share, so instead of creating a partial class and sharing out individual methods as before, I can just share the entire class.

The following code is held in a file called DomainFactory.shared.cs.

[Shared]
public class DomainFactory
{
    [Shared]
    public static User User(string name, string password)
    {
        return new User
        {
            Id = Guid.NewGuid(),
            Name = name,
            Password = password,
        };
    }

    [Shared]
    public static Holding Holding(User user, string symbol)
    {
        return new Holding
        {
            Id = Guid.NewGuid(),
            UserId = user.Id,
            Symbol = symbol
        };
    }

    [Shared]
    public static Transaction Transaction(Holding holding, TransactionType type, int quantity, decimal price)
    {
        return new Transaction
        {
            Id = Guid.NewGuid(),
            HoldingId = holding.Id,
            Type = type,
            Quantity = quantity,
            Price = price
        };
    }
}

Some Client-Side Code

Now that we have informed RIA of our intentions, it is time to see some client-side code that shows the resulting RIA-generated client domain in use. This code is taken from the Silverlight application that accompanies the web application.

First of all, here is the code that does some setup and then the initial fetch for the User.

public HomePage()
{
    this.InitializeComponent();
    this._dataStore.Submitted += this.DataStoreSubmitted;
    this._dataStore.Loaded += this.DataStoreLoaded;
    this._dataStore.LoadUser("biofractal", "x", null, "LoadUser");
    this.Holdings.SelectionChanged += this.Holdings_SelectionChanged;
}

// ...

private void DataStoreLoaded(object sender, LoadedDataEventArgs e)
{
    var userState = e.UserState;
    if (userState == null)
    {
        return;
    }
    switch (userState.ToString())
    {
        case "LoadUser":
            var user = e.LoadedEntities.First() as User;
            this.User.DataContext = user;
            this.Holdings.ItemsSource = user.Holdings;
            break;
    }
}

The _dataStore variable references an instance of the DataStore class, which is derived from the RIA client-side DomainContext class. This class is auto-generated by RIA Services and is the primary RIA-generated artefact.

The DataStore.LoadUser() method calls the GetUser() service method on the server. This is an asynchronous service call, so the return must be caught in the DataStore.Loaded() event handler. Here the Silverlight controls can be data-bound to their data sources and, because the User.Holdings collection was decorated with the [Include] attribute, RIA will ensure that it is automatically fetched. Using the Holdings collection as a binding data source will therefore display the correct list of Holdings for the current User without requiring an explicit fetch.

Lazy Loading the Transactions
In contrast to the User.Holdings collection, the Holding.Transactions collection is not automatically loaded when the User is initially fetched. Instead the client-side domain behaviour requires that the Transactions collection is lazy-loaded on demand. How is this achieved using RIA Services?

As before, metadata is used to inform RIA of our intentions. The [Association] attribute is again used to decorate the collection definition in a partial class segment held in a distinct code file (Holding.meta.cs). However this time there is no [Include] attribute.

[MetadataType(typeof(HoldingMetadata))]
public partial class Holding
{
    internal sealed class HoldingMetadata
    {
        [Association("Holding_Transactions", "Id", "HoldingId")]
        public List<Transaction> Transactions { get; set; }
    }
}

As a result, RIA Services will generate the appropriate client-side code for the manipulation of Transactions; however, as there is no [Include] attribute, RIA will not automatically fetch the members of a Transactions collection when its parent Holding is instantiated.

To manually load a list of Transactions it is necessary to write a parameterized server-side service method to perform the datastore lookup.

[EnableClientAccess]
public class DataStore : DomainService
{
    // ... other code

    public IQueryable<Transaction> GetTransactionsForHolding(Guid holdingId)
    {
        using (var db = DataService.GetSession<Transaction>())
        {
            return db.GetList(x => x.HoldingId.Equals(holdingId)).AsQueryable();
        }
    }
}

The GetTransactionsForHolding(...) method is scanned by RIA Services, causing it to generate a client-side equivalent method on the DataStore class. This can then be used in client-side code to fetch the set of Transactions belonging to a specified Holding. The code below shows this happening. The call is made within the SelectionChanged event of the Accordion control.

private void Holdings_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (e.AddedItems.Count == 0)
    {
        return;
    }
    var holding = e.AddedItems[0] as Holding;
    if (holding == null || holding.Transactions.Count > 0)
    {
        return;
    }
    this._dataStore.LoadTransactionsForHolding(holding.Id);
}

When an Accordion item is opened by a user click, it fires the SelectionChanged event above. The newly selected Holding is extracted from the Accordion and its Holding.Id is passed into the RIA-generated LoadTransactionsForHolding(...) method. This automatically calls the GetTransactionsForHolding(...) service method, which returns the appropriate list of Transactions for the specified Holding.Id.

Where do these Transactions go? How is it that simply calling this method automatically fills the correct Holding.Transactions collection and displays that collection in the data-bound Accordion?

The list of Transactions is loaded into a flat list of Transactions generated and maintained by RIA Services. When a Holding.Transactions collection is requested, RIA will dynamically create and return the correct list of Transactions as a consequence of the information specified in the [Association] attribute. This is why each Transaction needs a HoldingId and each Holding a UserId. Finally, because RIA-generated collections are ObservableCollections, changes automatically stimulate any data-bound containers to refresh themselves.
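To make that concrete, here is a conceptual sketch of the association behaviour (my illustration, not the actual RIA-generated code; the method name is invented):

// Conceptual sketch only - not the actual RIA-generated code.
// Given the flat list of Transactions maintained by the DomainContext, the
// [Association("Holding_Transactions", "Id", "HoldingId")] metadata means a
// Holding.Transactions collection is effectively rebuilt like this:
IEnumerable<Transaction> TransactionsFor(Holding holding, IEnumerable<Transaction> allLoadedTransactions)
{
    // Match the parent key (Holding.Id) against the foreign key (Transaction.HoldingId).
    return allLoadedTransactions.Where(t => t.HoldingId == holding.Id);
}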

This means that a call to the LoadTransactionsForHolding() method will set off a chain of events that results in the lazy-loading of the selected list of Holding.Transactions and its subsequent display in the newly expanded Accordion item.

Creating and Saving Domain Instances
RIA Services makes creating and saving new domain instances particularly easy. Once again the process begins with a statement of intention. This time RIA must be informed of our intention to add new Holdings to the User.Holdings collection and new Transactions to the Holding.Transactions collection. This is achieved via convention, by adding service methods whose signatures follow the convention shown below.


[EnableClientAccess]
public class DataStore : DomainService
{
    // ... other code

    public void CreateHolding(Holding holding)
    {
        using (var db = DataService.GetSession<User>())
        {
            var user = db.GetFirst(x => x.Id.Equals(holding.UserId));
            user.AddHolding(holding);
            db.Save(user);
        }
    }

    public void CreateTransaction(Transaction transaction)
    {
        using (var db = DataService.GetSession<Holding>())
        {
            var holding = db.GetFirst(x => x.Id.Equals(transaction.HoldingId));
            holding.AddTransaction(transaction);
            db.Save(holding);
        }
    }
}

Adding these service methods tells RIA that we intend to add new Holdings and Transactions via client-side code. Without these methods any attempt to add an item will result in a runtime error. For example, if the CreateHolding() method above is commented out and a new Holding is added to the User.Holdings collection via client-side code, the following error is displayed.




Serializing New Entities
Domain entities added on the client are not automatically serialized to the server-side data-store. Instead RIA Services keeps track of the changes you have made so that when a save is requested only the changes are submitted for server-side serialization.

This is a good example of the Unit of Work pattern. In this way RIA helps to minimise the traffic over the wire as well as giving you much more flexibility with respect to rolling back or cancelling changes, saving on demand or saving at automatic timed intervals.

The following code shows how to add and save new domain items.


public partial class HomePage : Page
{
    private readonly DataStore _dataStore = new DataStore();
    private ProgressDialog _progressDialog;

    public HomePage()
    {
        this.InitializeComponent();
        this._dataStore.Submitted += this.DataStoreSubmitted;

        // ... other code
    }

    private void ShowProgressDialog(string message)
    {
        this._progressDialog = new ProgressDialog(message);
        this._progressDialog.Show();
    }

    private void DataStoreSubmitted(object sender, SubmittedChangesEventArgs e)
    {
        if (e.EntitiesInError.Count() != 0)
        {
            this._progressDialog.ShowError();
        }
        else
        {
            this._progressDialog.Close();
        }
    }

    private void SubmitChanges_Click(object sender, RoutedEventArgs e)
    {
        this.ShowProgressDialog("Saving Changes...");
        this._dataStore.SubmitChanges();
    }

    private void NewHolding_Click(object sender, RoutedEventArgs e)
    {
        var user = (User)this.User.DataContext;
        if (user == null)
        {
            return;
        }
        user.Holdings.Add(DomainFactory.Holding(user, NewHoldingSymbol.Text));
        this.Holdings.SelectedItem = this.Holdings.Items[this.Holdings.Items.Count - 1];
    }

    private void Buy_Click(object sender, RoutedEventArgs e)
    {
        var holding = ((Button)e.OriginalSource).DataContext as Holding;
        if (holding == null)
        {
            return;
        }
        holding.AddTransaction(DomainFactory.Transaction(holding, TransactionType.Buy, 42, 0.42m));
    }

    private void Sell_Click(object sender, RoutedEventArgs e)
    {
        var holding = ((Button)e.OriginalSource).DataContext as Holding;
        if (holding == null)
        {
            return;
        }
        holding.AddTransaction(DomainFactory.Transaction(holding, TransactionType.Sell, 42, 0.42m));
    }

    // ... other code
}

This code shows how to add new domain items to their correct location in the domain hierarchy using the shared DomainFactory class discussed earlier. These changes are then asynchronously submitted as a batched unit of work to the server, displaying a progress dialog to keep the user informed. The return is trapped so that the progress dialog can be dismissed and any errors displayed.

The Verdict
How did RIA Services and DB4O manage?
  • Server - When I fetch an instance of the aggregate root class I expect its inner hierarchy to be eagerly fetched.
    The server-side domain de/serialisation behaviour was handled by DB4O. Being an object database, it is simple to create this behaviour using a few lines of initialisation code.

  • Client - I want certain collections to be lazy-loaded and so remain unloaded until they are requested.
    RIA Services provides a set of attributes that allow both eager and lazy loading to be specified as client-side behaviour and wired up with minimal code.

  • I do not expect to write my own WCF Service nor do I want to write any Data Transfer Objects (DTOs).
    RIA Services replaces the explicit WCF layer with an implicit data transfer layer via its DomainService class and the data manipulation methods you write to extend it.

  • I want to databind my domain entities to Silverlight controls. I expect the controls to correctly display my eagerly fetched data as well as handle lazy-loaded data.
    Because RIA Services generates its own observable collections, the Silverlight databinding flows smoothly with little intervention. The lazy loading of new data stimulates the bound Silverlight controls to refresh and so display changes as they occur.

  • Finally, I want to prove that new domain entities can be created on the client and efficiently serialized to the server-side data-store as a batched unit-of-work.
    RIA Services implements a Unit of Work pattern that allows only those items that have been changed to be batched and serialized to the server-side data-store when required.

I think that RIA Services plus DB4O performed well in handling the demands of my simple Line of Business Rich Internet Application. I would certainly recommend you try it out for yourself to see what you think. Good Luck.

Tuesday, March 31, 2009

A Domain-Driven, DB4O Silverlight RIA

Build a Domain Driven Rich Internet Application using Silverlight, RIA Services and DB4O
I was recently ranting about Silverlight-2 and my annoyance with the WCF layer needed to serialize object instance data to the server. Well, Microsoft must have had their mind-reading machines turned up high, because the newly announced Mix09 Silverlight 3 & RIA Services preview has solved nearly all the problems I was having.

It's the RIA (Rich Internet Application) Services that has the real wow-factor. This is Silverlight finally growing up. Instead of creating Frankenstein client-server apps crudely stitched together with WCF, RIA Services allows you to treat your client and server as one, almost seamless, application. You share domain design intentions between the client and the server so that your domain acts as you intended regardless of which side of the machine boundary you are on. This is how RIA development was always meant to be.

You can download all the software kit you need from the main Silverlight-3 site. I found the best way to get started was to watch the following Mix09 presentations and then read the RIA Services Overview.
  • Building Amazing Business Centric Applications with Microsoft Silverlight 3
    "Come hear how simple it is to build end-to-end data-intensive Silverlight applications with the new set of features in Silverlight 3 and .NET RIA Services. Explore Silverlight improvements that help to enable rapid development for business applications and to make your development process more productive"

  • Building Data-Driven Applications in ASP.NET and Silverlight
    "Learn how Microsoft is simplifying the traditional n-tier application pattern by bringing together ASP.NET and Silverlight. Learn about patterns for working with data, implementing reusable and independently testable application logic, and application services that readily scale with growing requirements"
Of course Microsoft just could not help plugging their dystopian Data-Driven-Design vision. Look at the title of that second presentation, for goodness' sake.

To counter this, and to provide a solid Domain-Driven template for future Silverlight RIA apps, I have created an example that does away with the monolithic SqlServer and Igor, the Entity Framework, in favour of a light, fast object database repository that promotes good Code Cohesion, Separation of Concerns and Inversion of Control.

The tech I use in the example is Silverlight 3, RIA Services and DB4O.

An Overview of the Domain Driven RIA Example
This example does just enough to highlight the primary functions of RIA Services as I see them.
  1. You can define and use your domain objects on the server and then effectively re-use domain logic on the client. 

  2. You can transmit domain objects between the client and server without polluting your domain or requiring an additional DTO transformation layer.
The example is a stripped-down version of an app I am currently writing, so I have left in all the domain structure even where it is not actually used in the example. This makes the application ready to go if you want to start fleshing it out with your own code.

The functionality is quite simple. You are presented with a Silverlight 3 navigation application. There is a sign-in link that takes you to a view containing username and password fields plus sign-in and register buttons.

The code works as you would expect, with the added extra that typing into the Name field triggers a 1-second timer that checks (on the server) that the name is unique. Clicking the Register button creates a new user in the server-side DB4O database. Clicking the Sign In button checks the database for the supplied credentials and, if they exist, returns the appropriate User instance and cleverly navigates you back to the home view.

If you have watched the videos you might at this point think I have cheated and used the built-in Authentication Domain Service that comes with RIA. Not so. That service relies on having a fat SQLServer file squatting in your solution, and that is no good for a Domain-Driven purist. Instead I have implemented a very simple custom identity authentication that could easily be linked up to either Forms or Windows authentication in the usual way; a minimal sketch of the idea is shown below.
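For the curious, here is a sketch of the kind of custom credential check I mean. It follows the same DomainService conventions as my other service code, but treat it as illustrative rather than the example's actual source; the class name and the uniqueness check are simplified for this post:

[EnableClientAccess]
public class AuthenticationStore : DomainService
{
    // Sign-in: return the User matching the supplied credentials,
    // or an empty result if they are not in the DB4O datastore.
    public IQueryable<User> GetUser(string name, string password)
    {
        using (var db = DataService.GetSession<User>())
        {
            return db.GetList(x => x.Name.Equals(name) && x.Password.Equals(password)).AsQueryable();
        }
    }

    // Uniqueness check driven by the 1-second timer on the Name field.
    public bool IsNameUnique(string name)
    {
        using (var db = DataService.GetSession<User>())
        {
            return !db.GetList(x => x.Name.Equals(name)).Any();
        }
    }
}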



NB: If you get an error about the URI Prefix then you need to reset the startup application to DomainDrivenRIA.Web and the startup page to DomainDrivenRIA.SilverlightTestPage.aspx

Tuesday, March 17, 2009

Convert dotnet assemblies to Silverlight

The SilverLighter
I have recently been trying to convert object databases (DB4O and NeoDatis) for use in a Silverlight application. Along the way I discovered a few interesting things. For example, I found that although Visual Studio will not allow you to reference dotnet assemblies from a Silverlight application or class library, this restriction is actually a bit heavy-handed. Sometimes it is very useful to use a known Silverlight-safe dotnet assembly, as long as you take responsibility for your own actions.

To enable the re-use of dotnet assemblies in Silverlight I wrote a handy WPF application called "The Silverlighter". This tool allows you to convert any dotnet assembly into a Silverlight assembly ready for use in your Silverlight applications or class libraries.

The knowledge I needed to write The Silverlighter was gleaned primarily from the excellent article by David Betz called Reusing .NET Assemblies in Silverlight. This article clearly explains how the dotnet->Silverlight conversion process works and why it is not a hack. I found it a fascinating insight into the similarities between dotnet and Silverlight assemblies, and it is well worth a read.

But before you get carried away (I know I did) and imagine that you are just a click away from re-using your favourite 3rd party assemblies, a word or two of caution. 

Unfortunately just because you can reference an assembly from Silverlight does not mean it will work with Silverlight. If you have read David Betz' article you will know that Silverlight uses a distinct set of System.* assemblies (v 2.0.5.0). These assemblies do not contain all the features of their equivalent dotnet assemblies. For example the following collection types are missing in Silverlight:
  • ArrayList
  • Hashtable
  • SortedList
  • NameValueCollection
Instead, Silverlight only allows generic collections to be used, which is a good thing unless your referenced ex-dotnet assembly happens to use one or more of the missing types, in which case your code will blow up with an error similar to "cannot load type ArrayList from assembly System 2.0.5.0".

Replacing these missing collections with their generic equivalents (you can decompile most assembly code using Reflector) is actually quite trivial, as the sketch below shows. However, more serious problems are lurking: Silverlight has also removed or redesigned a number of core features, typically those that pose security risks in the browser sandbox. Again that is a good thing, until you try to re-use assemblies that depend on these features - then boom.
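For example, a typical mechanical fix looks like this (my illustration, not code from David Betz's article):

// Before: dotnet code using a non-generic collection that does not
// exist in Silverlight's System assemblies (v 2.0.5.0).
ArrayList oldItems = new ArrayList();
oldItems.Add("AAPL");
string first = (string)oldItems[0];    // cast required

// After: the Silverlight-safe generic equivalent.
List<string> newItems = new List<string>();
newItems.Add("AAPL");
string firstAgain = newItems[0];       // strongly typed, no cast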

Anyway, The Silverlighter app does some cool IL trickery, so if you have dotnet DLLs you know are fine and just want to use them in Silverlight without any nonsense from Visual Studio, then it might be just the thing you need.

Additional Notes

There are a few options available to help tweak the functionality.


You can choose to convert only selected System assemblies, although it is recommended that you just leave them all selected unless you have a good reason not to.

The "Recursively process dependent assemblies" feature will, when checked, pick out references from the IL to non System assemblies (your own or 3rd party references) and recursively convert these to Silverlight compliant assemblies as well. The entire dependency tree will be processed in this way.

Finally, the path to ILdasm.exe is exposed just in case you have it at a different location on your system. If you don't have ILdasm.exe anywhere, you can get it by installing the .NET Framework 2.0 Software Development Kit (SDK).

If you notice any bugs or want to add new features then please feel free to check out the source code and make the updates. Just paste the SVN URL into your TortoiseSVN repo-browser, check out the source code under Subversion and you are off and running. Good luck.

SVN URL = http://subversion.assembla.com/svn/biofractal/trunk/Blog/Silverlighter


Monday, March 16, 2009

Silverlight and Object Databases

Stop Press: Most of the issues below have now been resolved with the release of Silverlight-3 & RIA Services. See the following post for more details - A Domain-Driven, DB4O Silverlight-3 RIA

I recently decided to write myself a quick Silverlight application for a bit of fun. I wanted an app that could grab stock quotes, do a calculation of my current losses and maybe have some nice blue-gel buttons and what-not. A few evenings of light coding pleasure and a good break from all that architectural stuff. Naturally it did not quite work out that way.

I got a wireframe working pretty quickly but that was not good enough. I wanted to serialize my data so I could start to get some fancy Silverlight graphs with bouncy bars. So I found myself grabbing multiple quotes and using the data to instantiate domain objects in the Silverlight client. Now what? I needed to serialize these instances on the web server. So how do I do that?

I had two options

1. Use isolated storage on the client and then use Mesh to sync the data. The only problem with this approach is that the last sentence is the total extent of my knowledge. I want a slope, not a cliff face.

2. Use a good old database on the server, why not, and since I am about it let's make that an object database so we don't have to concern ourselves with all that old-fashioned NHibernate ORM tosh (ah, how quickly I become intolerant).

I chose option #2. And right there it started to get annoying. What I really wanted to do was just use my ODB in the Silverlight client code as if I was coding normally. But of course the Silverlight sandbox had lots to say about that.

To get around this you must write a WCF service layer. Call this an API and you will feel much better, because that makes it sound quite cool and computery. You need this comfort blanket because you are about to climb into a time machine and go back 10 years to when you were a script kiddie banging out ASP code.

This WCF 'API' service is going to be the only means you have to serialize data and transmit it between the Silverlight client and the web server. The [ServiceContract] will require [OperationContract] methods to cover every possible action, just like writing all those CRUD sprocs for a relational DB; the sketch below shows the shape of it. Of course you could break this jumbled, untestable mess into smaller WCF contracts, but that is just more fantasy to cover the poor design, like putting prefixes on your sproc names so they group together to simulate a business layer.
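To make the comparison concrete, here is a sketch of the sort of contract I mean (the service and DTO names are invented for illustration):

// A sketch of the WCF 'API' layer being complained about: one operation
// per CRUD action, each shuttling DTOs across the wire by hand.
[ServiceContract]
public interface IQuoteApi
{
    [OperationContract] QuoteDto GetQuote(Guid id);
    [OperationContract] void SaveQuote(QuoteDto quote);
    [OperationContract] void DeleteQuote(Guid id);
    // ...and so on, for every action the client might ever need.
}

[DataContract]
public class QuoteDto
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public string Symbol { get; set; }
    [DataMember] public decimal Price { get; set; }
}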

No, the real solution is to do away with the WCF data-layer before you even start. Now don't get me wrong, WCF is fine and very useful, but I think it is an abuse to use it as a domain serialization interface. We just got rid of RDB DALs, so why create another DAL for SaaS?

What is needed is a grown-up client/server relationship. That would be much better. Then, instead of writing a sprawling WCF thunking layer replete with DTO auto-mappers and all that junk paraphernalia, you get a nice generic client-centric solution with code like this:

myODB.Store(myObject);
myODB.Commit();

This focuses coding attention away from the boilerplate WCF data transformation layer and back on to the business-useful world of the Domain - where it belongs.

By the way, you can use dotnet assemblies in a Silverlight project. It needs a few quick (but legitimate) 'fixes' to the IL to get them past the Visual Studio fascist guards. I am finishing off a tool that automates the IL work for you and makes it nice and easy to convert dotnet assemblies into Silverlight reference-ready assemblies. I will post it here soon.

Stop Press: You can now download the IL Tool mentioned above. See the following blog post for more details - Convert dotnet assemblies to Silverlight

Friday, February 27, 2009

Generic Lists of Anonymous Type

[cross-posted to StormId blog]

Anonymous types can be very useful when you need a few transient classes for use in the middle of a process.

Of course you could just write a class in the usual way, but this can quickly clutter up your domain with class definitions that have little meaning beyond the scope of their transient use as part of another process.

For example, I often use anonymous types when I am generating reports from my domain. The snippet below shows me using an anonymous type to store data values that I have collected from my domain.

for (var k = 0; k < optionCount; k++)
{
    var option = options[k];
    var optionTotal = results[option.Id];
    var percent = (questionTotal > 0) ? ((optionTotal/(float)questionTotal) * 100): 0;
    reportList.Add(new
        {
            Diagnostic = diagnostic.Name, 
            Question = question.Text, 
            Option = option.Text, 
            Count = optionTotal, 
            Percent = percent
        });
}

Here I am generating a report on the use of diagnostics (a type of survey). It shows how often each option of each question in each diagnostic has been selected by a user, as both a count and a percentage.

You can see that the new anonymous type instance is being added to a list called reportList. This list is strongly typed, as can be seen in this next bit of code where I order the list using LINQ.

reportList = reportList
    .OrderBy(x => x.Diagnostic)
    .ThenBy (x => x.Question)
    .ThenBy (x => x.Percent)
    .ToList();

This is where the problem comes in: how is it possible to create a strongly typed (generic) list for an anonymous type? The answer is to use a generics trick, as the following code snippet shows.

public static List<T> MakeList<T>(T example)
{
    return new List<T>();
}

The MakeList method takes a parameter of type T and returns a generic list of that type. The example parameter is never used; it exists purely so the compiler can infer T. Since this method will accept any type, we can pass in an anonymous type instance with no problems. The next snippet shows this happening.

var exampleReportItem = new
    {
        Diagnostic = string.Empty, 
        Question = string.Empty, 
        Option = string.Empty, 
        Count = 0, 
        Percent = 0f
    };
var reportList = MakeList(exampleReportItem);

So here is the context for all these snippets. The following code gathers my report data and stores it in a strongly typed list containing a transient anonymous type.

var exampleReportItem = new
    {
        Diagnostic = string.Empty, 
        Question = string.Empty, 
        Option = string.Empty, 
        Count = 0, 
        Percent = 0f
    };
var reportList = MakeList(exampleReportItem);
for (var i = 0; i < count; i++)
{
    var diagnostic = diagnostics[i];
    var questionCount = diagnostic.Questions.Count;
    for (var j = 0; j < questionCount; j++)
    {
        var question = diagnostic.Questions[j];
        var questionTotal = results[question.Id];
        var options = question.Options;
        var optionCount = options.Count;
        for (var k = 0; k < optionCount; k++)
        {
            var option = options[k];
            var optionTotal = results[option.Id];
            var percent = (questionTotal > 0) ? ((optionTotal/(float)questionTotal) * 100): 0;
            reportList.Add(new
                {
                    Diagnostic = diagnostic.Name, 
                    Question = question.Text, 
                    Option = option.Text, 
                    Count = optionTotal, 
                    Percent = percent
                });
        }
    }
}

Perhaps you are wondering why the type of the anonymous exampleReportItem is the same as the type of the anonymous objects I add to the reportList.

This works because of the way type identities are assigned for anonymous types. If two anonymous types share the same public signature, that is, if their property names and types are the same and declared in the same order (you can't have methods on anonymous types), then the compiler treats them as the same type within the assembly.

This is how the MakeList method can do its job. The exampleReportItem instance sent to the MakeList function has exactly the same properties as the anonymous type added to the generic reportList. Because they have the same signatures, they are recognised as the same anonymous type and all is well.
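You can verify this behaviour for yourself with a couple of lines (my example, separate from the report code):

var a = new { Name = "first", Count = 1 };
var b = new { Name = "second", Count = 2 };

// Same property names, types and declaration order, in the same assembly,
// so the compiler reuses a single anonymous type for both instances.
Console.WriteLine(a.GetType() == b.GetType());   // prints True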