Wednesday, July 03, 2013

Electro-Flock: Modelling Flocks using Simple Electro-Magnetism


Flocks are easy to model if you use physics instead of biology

Look - It's all done with magnets!
Electro-Flock is a simple flocking algorithm that relies only on Coulomb's Law, the single equation that describes the electrostatic force between charged particles. The project uses CoffeeScript, jQuery, HTML5 and Processing.js.
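The demo's actual source is only a click away, but the core interaction can be sketched in a few lines of CoffeeScript. Everything here is illustrative rather than the demo's real code - the boid fields, the scaled constant and the integration step are all assumptions:

```coffeescript
# Coulomb's Law: F = k * q1 * q2 / r^2, acting along the line between
# the two charges. Like charges repel, opposite charges attract.
k = 8.99   # illustrative scaled constant (the real k is 8.99e9 N m^2/C^2)

coulombForce = (a, b) ->
  dx = b.x - a.x
  dy = b.y - a.y
  r2 = dx * dx + dy * dy
  return {x: 0, y: 0} if r2 < 0.0001   # guard against division by zero
  r = Math.sqrt r2
  # f > 0 pulls 'a' towards 'b' (opposite charges); f < 0 pushes it away
  f = -(k * a.charge * b.charge) / r2
  {x: f * dx / r, y: f * dy / r}

# One simulation step: sum the force on each boid from every other
# boid, then integrate velocity and position.
step = (boids, dt) ->
  for a in boids
    fx = 0
    fy = 0
    for b in boids when b isnt a
      {x, y} = coulombForce a, b
      fx += x
      fy += y
    a.vx += fx * dt / a.mass
    a.vy += fy * dt / a.mass
  for a in boids
    a.x += a.vx * dt
    a.y += a.vy * dt

# e.g. a tiny flock: like charges keep the boids apart while a heavy,
# oppositely charged attractor holds the group together
boids = ({x: Math.random() * 100, y: Math.random() * 100, vx: 0, vy: 0, charge: 1, mass: 1} for i in [1..5])
boids.push {x: 50, y: 50, vx: 0, vy: 0, charge: -5, mass: 1000}
step boids, 0.1
```

Everything else - the clustering, the swirling, the flock-like cohesion - emerges from nothing more than the choice of charges and masses.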

I started this project with the feeling that flocking algorithms were too specific. I wanted to see if I could write a really simple algorithm that didn't know it was meant to be a flocking algorithm. I had a feeling that electro-magnetism might just be the key to a really fundamental, self-organising flock. Was this feeling justified? Judge for yourself: Play with the Electro-Flock Demo

Saturday, May 04, 2013

Self Organising Fun: A Force Directed Graph in CoffeeScript


Emergent Self Organising Behaviour using CoffeeScript, jQuery and Processing.js

What is a Force Directed Graph and why bother spending time coding one in CoffeeScript? 
A Force Directed Graph is a collection of nodes and links that self-organises until its nodes are as far apart as possible and its links do not cross. Such graphs appeal because self-organisation is just so intrinsically fascinating and, more professionally, because the project allowed me to bring together a whole set of exciting web technologies including CoffeeScript, jQuery, HTML5 and Processing.js.
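The demo's own source is linked below, but the classic recipe for this kind of layout is easy to sketch: every node repels every other node, while every link pulls its endpoints together like a spring. A minimal CoffeeScript version of one layout step (the constants and field names are illustrative, not the demo's actual code):

```coffeescript
REPULSION = 1000    # strength of node-node repulsion
STIFFNESS = 0.05    # spring constant for links
DAMPING   = 0.85    # bleed off velocity so the graph settles

layoutStep = (nodes, links) ->
  # every pair of nodes pushes apart (inverse-square repulsion)
  for a in nodes
    for b in nodes when b isnt a
      dx = a.x - b.x
      dy = a.y - b.y
      d2 = Math.max dx * dx + dy * dy, 0.01
      f  = REPULSION / d2
      d  = Math.sqrt d2
      a.vx += f * dx / d
      a.vy += f * dy / d
  # every link pulls its endpoints together (Hooke's law)
  for {source, target} in links
    dx = target.x - source.x
    dy = target.y - source.y
    source.vx += dx * STIFFNESS
    source.vy += dy * STIFFNESS
    target.vx -= dx * STIFFNESS
    target.vy -= dy * STIFFNESS
  # integrate and damp
  for n in nodes
    n.vx *= DAMPING
    n.vy *= DAMPING
    n.x  += n.vx
    n.y  += n.vy
```

Run once per animation frame, the repulsion spreads the nodes apart while the springs keep linked nodes close, and the graph settles into the untangled layout described above.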

Before reading any further, why not Play with the Force Directed Graph Demo?

Or, if you really want to blow your mind, take a look at the innovative and beautiful flocking system based solely on the principles of electromagnetism: Electro-Flock: Modelling Flocks using Simple Electro-Magnetism.

They did it all by themselves
How did it turn out? 
I started this project wondering if CoffeeScript would be worth learning and finished it vowing never again to write another line of naked JavaScript. I also found the combination of CoffeeScript and jQuery to be a powerful and elegant solution to the problem of coding against the browser. Finally, using the Processing.js library coupled to an HTML5 canvas tag meant I could tap the client's GPU and so achieve the smooth visualisation of forces and vectors I wanted.
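By way of illustration, here is roughly how such a sketch is wired together - a hypothetical, cut-down setup reusing the layoutStep sketch above. The canvas id, the node data and the drawing code are assumptions, not the demo's real source:

```coffeescript
# Hypothetical wiring: attach a Processing.js sketch to an HTML5
# canvas once the DOM is ready, using jQuery.
nodes = ({x: Math.random() * 400, y: Math.random() * 400, vx: 0, vy: 0} for i in [1..8])
links = ({source: nodes[i], target: nodes[i + 1]} for i in [0...7])

$ ->
  canvas = $('#graph-canvas')[0]   # assumes <canvas id="graph-canvas"> on the page
  sketch = (p) ->
    p.setup = ->
      p.size 400, 400
      p.frameRate 30
    p.draw = ->
      p.background 255
      layoutStep nodes, links      # one physics step per frame (see sketch above)
      for {source, target} in links
        p.line source.x, source.y, target.x, target.y
      for n in nodes
        p.ellipse n.x, n.y, 8, 8
  new Processing canvas, sketch
```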

Monday, September 10, 2012

Goodbye OpenRasta - Hello Nancy


This is a painful post to write as I still feel a strong residual loyalty to OpenRasta (OR); however, I have now moved over to Nancy and so I would like to give you my reasons why. These are not technical reasons, far from it; instead I give you nothing more than a sorrowful account of feelings hurt, a comparison between the brutal, distant and dominant love provided by OR and the beautiful, seductive and submissive framework that is sweet Nancy.

Wednesday, May 09, 2012

RavenDB or SQL Server? Which one should I use?

SQL Server is not a database for storing anything except ad-hoc reporting data. It is splendid for that: ad-hoc reports, data-mining, real-time relationship discovery - and it's optimal for these uses because, way back when Codd designed the rules, that is what it was designed for (see The Data Driven Conspiracy for more details).

Of course you can store other types of data in a relational database. For example, you can serialise your domain state to a SQL Server database and retrieve that state at a later date, but this is a terrible use of a relational system. So bad, in fact, that whole layers of code (so-called 'data layers', or DALs), many thousands of lines, are required to make this kind of task remotely doable, testable and maintainable.

The great, decades-long confidence trick has been for the SQL vendors to convince us that there is no better way to store hierarchical / document data than real-time conversion between radically different data structures. DALs and the latest ORMs have all been trumpeted but, in the end, these are nothing more than codecs that thunk your bits back and forth between different data organisations - different patterns on the disk.

Madness!
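To make the 'different patterns on the disk' point concrete, here is the same hypothetical aggregate in both shapes, written as CoffeeScript literals for brevity. A document store such as RavenDB persists something like the first form as-is; a DAL / ORM must shred it into something like the second and join it back together on every read:

```coffeescript
# One aggregate, stored exactly as the domain sees it.
order =
  id: 'orders/42'
  customer:
    name: 'Ada'
    email: 'ada@example.com'
  lines: [
    {sku: 'BOOK-1', qty: 2, price: 9.99}
    {sku: 'PEN-7',  qty: 1, price: 1.50}
  ]

# The same state after the DAL has thunked it into relational rows:
# three tables stitched together with foreign keys, plus the reverse
# joins needed to reassemble the object on every read.
ordersTable     = [{order_id: 42}]
customersTable  = [{order_id: 42, name: 'Ada', email: 'ada@example.com'}]
orderLinesTable = [
  {order_id: 42, sku: 'BOOK-1', qty: 2, price: 9.99}
  {order_id: 42, sku: 'PEN-7',  qty: 1, price: 1.50}
]
```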

Agile and Cost Breakdowns


At the beginning of an Agile project it is often very tricky to provide a cost breakdown for individual items of work. This is because central to the Agile methodology is the demand that, at the start of a project, everybody admits they do not really know what details to expect! This is a brave thing to do. It is brave of a developer to admit that the implementation details of a proposed project are not clear. It is brave of a client to accept that they do not really know, in real-life detail, what it is they want. The payback for all this honesty is a much better chance that the project will be useful, delivered on time and delivered within budget.

Friday, April 02, 2010

Predicting complex, dynamic systems or why Stalin, Hitler and stock market analysts have failed us

The ability to predict a future event lies at the heart of what it is to be human. A hunter predicting the presence of animals at a waterhole can dramatically improve the chances of a successful kill. Empathetically predicting a rival's or ally's actions can determine the outcome of hierarchical social competition. Predicting in advance the consequences of leaving a small fire unquenched can save the lives of your family. The need to predict the consequences of complex, dynamic systems has been a primary driver of human evolution, both biological and technical.

What does it mean to predict something? It is a simple question with a relatively straightforward answer, but its misapplication can have profound and disturbing consequences. Mistaking which systems can and cannot be usefully predicted is a core political and financial flaw of the last 100 years, one that has led to spectacularly negative consequences, from the rise of totalitarianism, both right and left, to financial bubbles whose froth still fizzes through our financial institutions today.

Monday, September 07, 2009

An Introduction to Applied Evolutionary Metaheuristics

Jonathan Anderson

First delivered by me at "Selected Topics on Complex Systems Engineering", an international symposium held in Morelia, Mexico, in October 2008. It was subsequently published in the European Journal of Operational Research: Applications of metaheuristics

View slide show

Abstract

This paper introduces some of the main themes in modern evolutionary algorithm research while emphasising their application to problems that exhibit real-world complexity. Evolutionary metaheuristics represent the latest breed of biologically inspired computer algorithms that promise to usefully optimise models that display fuzzy, complex and often conflicting objectives. Until recently, evolutionary algorithms have circumvented much of this complexity by defining a single objective to be optimised. Unfortunately, nearly all real-world problems do not compress neatly to a single optimisation objective, especially when the problem being modelled is non-linear. Recent research into multi-objective evolutionary metaheuristic algorithms has demonstrated that this single-objective constraint is no longer necessary, and so new opportunities have opened up in many fields including environmental health and sustainability.

With their proven ability to simultaneously optimise multiple, conflicting objectives, evolutionary metaheuristics appear well suited to tackling ecological problems. Such algorithms deliver a range of optimal trade-off solutions that allow an appropriate profit / cost balance to be selected according to the decision maker's imperatives. This paper concludes with an examination of a powerful multi-objective evolutionary algorithm called IC-SPEA2 (Martínez-García & Anderson, 2007) and its application to a real-world problem, namely the maximisation of net revenue for a beef cattle farm running on temperate pastures and fodder crops in Chalco, Mexico State. Some counter-intuitive results and their impact on the farm's overall sustainability are discussed.
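The paper itself is available through the links above, but the kernel of the multi-objective idea is small enough to sketch. A solution dominates another when it is at least as good on every objective and strictly better on at least one; the optimiser's raw output is the set of solutions that nothing dominates. A minimal CoffeeScript illustration - not IC-SPEA2 itself, which adds fitness assignment, archiving and variation operators:

```coffeescript
# Pareto dominance for maximisation: 'a' dominates 'b' when a is at
# least as good on every objective and strictly better on at least one.
dominates = (a, b) ->
  strictlyBetter = false
  for ai, i in a
    return false if ai < b[i]
    strictlyBetter = true if ai > b[i]
  strictlyBetter

# The non-dominated (Pareto) front of a population of objective
# vectors, e.g. [netRevenue, sustainabilityScore] per farm plan.
paretoFront = (population) ->
  population.filter (p) ->
    not population.some (q) -> dominates q, p

# e.g. three hypothetical farm plans: the first dominates the third,
# while the first and second survive as genuine trade-offs
console.log paretoFront [[10, 3], [8, 7], [9, 2]]
# => [[10, 3], [8, 7]]
```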

Sunday, August 09, 2009

The Broken Waterfall

The traditional predictive approach to project management is being rejected in favour of an adaptive or Agile approach.

This is not a matter of buzz-words or faddish management technologies, instead it is a genuine commitment to help clients get the software they actually want - on time and within budget.

The Problem

There is a problem with the delivery of software. The more complex a project, the greater the chance the project will be delivered over budget and behind schedule. As a project grows in complexity there comes a point where this potential for failure becomes almost a guarantee. Most experienced project managers understand this and strain every sinew to prevent it from happening, and most experienced programmers have lived through the intense disappointment of seeing their work fail to achieve its initial promise. Yet time and again, despite the best efforts of genuinely talented and motivated people, software projects are delivered late, cost too much and do not function as the client expected. Why is this?

For each failed software project the problem typically turns out to be the plan. Now that may seem trivially obvious. Looking back over a failed project it is easy to suggest that if only the plan had been more precise then the project could have been more controlled and so more successful.

This is not correct.

The problem does not lie in the quality of the planning, the problem lies in the type of plan, specifically the attempt to create an up-front plan that covers the entire project life-cycle. This is not so obvious - how can you run a project without deciding what you need up-front?

To understand why up-front planning impedes the successful delivery of quality software it is first necessary to understand what is meant by a plan in this traditional sense, and then see how this concept can be dispensed with and replaced with a new type of planning mechanism.

What's in a Plan

At the start of a traditional project there is the familiar requirements-capture phase. This typically involves the writing of various specifications: a user specification that outlines the requirements in the language of the client, a functional specification that outlines them in the language of the programmer and then perhaps a fully detailed technical specification that describes them in a pseudo programming language.

Once complete, these detailed specifications provide the basis for all future work. They allow predictions to be made about the project's costs as well as its anticipated schedule. Specification documents also serve a secondary function. They give both the client and the engineers a form of 'contract' that, upon project delivery, allows everybody to compare what was promised with what was actually delivered.

This up-front planning process is often called the 'waterfall' model: it is a highly structured methodology that steps through requirements-capture, analysis, design, coding and testing in a strict, pre-planned sequence. Progress is generally measured in terms of deliverable artefacts: requirement specifications, design documents, test plans and code reviews.

The Waterfall is Broken

There are good reasons why traditional, up-front planning fails. Unfortunately these reasons tend to make both clients and engineers feel uncomfortable so they are rarely spoken out loud.

Firstly, up-front planning means that the specification documents are written before any software is built. Experts, using all their intellectual powers and experience, attempt to imagine the software and in doing so mentally traverse all of its myriad details. Since no software has yet been built, the hypothetical assertions contained within these documents cannot be tested experimentally. In science, a hypothesis that cannot be tested is called pseudo-science and, by the same token, a specification whose assumptions cannot be tested should be considered pseudo-planning.

Secondly, at the start of any reasonably complex project there is always an inescapable knowledge gap. This gap exists between:


  • The business knowledge brought by the client
  • The technical knowledge brought by the engineers

To begin with, these two bodies of knowledge do not mix well as the clients do not really understand the language of software engineering and the engineers do not really understand the language of the client's specific business. This will change as time goes on and eventually the distinct bodies of information will mix and become one shared information landscape. However, at the start of a project, when traditional up-front planning occurs, this inevitable knowledge gap leads to two critical and incorrect assumptions:

1. The client knows what they want their new software to do
Many clients come to a project with a good idea of what they want; perhaps they have spent time and effort working this out, perhaps they have a legacy system that shows them much of what they want and what they do not want. However, at the start of a project, the client cannot know what they want in sufficient detail to create a complete and precise plan. They can provide a business vision and they can provide business constraints, but they cannot state in detail the processes required to deliver their vision because they have not yet absorbed the necessary details of the engineering environment. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are commissioning.

2. The engineers know how to implement the client's business vision
Many engineers come to a project with a good idea of how to build business systems. They will have spent considerable time and effort building other, perhaps similar, systems. However, at the start of a project, engineers cannot know how to implement the precise details of a specific business application because they have not yet absorbed the detailed business knowledge brought by the client. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are being asked to deliver.

Predictive planning fails because an accurate plan requires a genuine, non-superficial understanding of both the client's business knowledge and the engineers' technical knowledge. Traditional specifications are created at the start of a project when both parties have not had enough time to come to such an understanding. It takes much effort to synthesise the two bodies of knowledge into a coherent whole, far more than can reasonably be assigned during the requirements-capture phase.

This means that plans created at the start of the project cannot be more than partially informed guesswork. Given that the nature of complex systems makes them particularly sensitive to changes in small details, a plan for a complex system created with incomplete knowledge must perforce be a recipe for failure by degrees.

Does this really make up-front planning redundant? Is there a way to make the synthesis of business and technical knowledge more efficient, perhaps by using advanced planning software? If this could be achieved then perhaps the planners could write effective up-front specifications that lead to accurate long-term costings and schedules.

Unfortunately there is another, more fundamental reason why detailed specifications must fail - regardless of their precision.

A specification is a description that attempts to outline features and functions in a natural language such as English. Yet software is actually written in the very precise syntax of a machine language. Engineers know that only computer code can truly express the details of a software vision; a natural language specification cannot be logically accurate enough. This means that natural language specifications must leave many implementation details open to interpretation, forcing the engineer to skilfully choose from a set of implied options. Yet complex systems are sensitive to precisely these sorts of technical details: different choices will lead to different systems and, as often as not, unfulfilled client expectations.

Therefore, even where a specification guesses correctly, the natural language descriptions will contain subtle choices and hidden contradictions. It is only when the fuzzy language of the specification is transformed into the precise reality of the code that these choices and contradictions become apparent.

This leads to a profound truth about the nature of specifications: greater precision does not lead to greater control. Instead, the greater the precision, the more varied and subtle the choices and contradictions become.

Planning For Success

Understanding these fundamental flaws at the heart of traditional software delivery, many forward-looking managers and engineers are now moving towards a new project control methodology. In contrast to up-front or predictive planning, this new methodology uses repeated bursts of short-term adaptive planning.

Agile Software Development throws out long-term planning and with it the traditional concept of a specification. Instead agile projects start with everybody discussing and sharing a simple vision of the end product. The vision is really no more than a mission statement that, at this early stage, explicitly removes the need for engineers to fully understand the business and for the client to fully understand the technology.

This means that an agile project can get started almost straight away, with the absolute minimum of requirements-capture. Instead of a long, costly and ultimately self-defeating planning phase, the engineers get to work building the first version (iteration) of what will become a rolling-beta. Armed with a very short-term plan covering just one or two weeks of work, the engineers build the first iteration and deliver it to the client for discussion and criticism. The rolling-beta is still only a sketch, an outline of the most important functions and how they might fit together. Mistakes and incorrect assumptions will have been made - indeed, given the knowledge gap, they cannot be avoided - but the mistakes are identified and quickly eliminated as the rolling-beta is regularly assessed by the client and engineers in close collaboration.

Once the first iteration is signed off, the process begins again: a new short-term plan is created and work begins on the second iteration. This iterative development continues and, as the knowledge gap closes, so the requirements, and hence the software, become ever more detailed and coherent.

Embracing Function Creep

As this hands-on process continues the client comes to properly understand the technical environment, what is expensive and what is possible, and as their knowledge grows so they begin to see new possibilities.

Clients changing their minds or adding new features during development is known as function creep and remains the enemy of traditional planners. Yet to suppress it is to deny that clients can learn and modify their expectations as they see their software progressing. Rather than trying to ignore the client's input, the agile iterative process welcomes it as new and valuable knowledge.

Thus the client is encouraged to re-specify their product as it is being written. This is the ultimate guarantee that, in the end, the client will be satisfied. It is hard for a client to be surprised or disappointed with their software if they have played an active part in designing and deciding the goals at each iteration.

Equally, as the iterative process progresses, the engineers will also come to a genuine understanding of the business. This lets the engineers discuss the business processes with the client in a manner that allows a useful exchange of knowledge to take place. Questions to the client can be appropriately framed using the business terminology both the client and the engineers now share. Since the frequent iterations and short-term planning mean that any incorrect business assumptions are quickly discovered, such mistakes can be corrected with the minimum of effort.

Engineers too, once they come to a genuine understanding of the business, can start to usefully contribute to the re-specification of the rolling-beta. New ideas and inspirations, whatever their source, can be welcomed, discussed and possibly incorporated as the software adapts over time.

Job Satisfaction

In summary, an agile software system evolves under the twin constraints of the client's business vision and the engineering environment's technical limitations. As the client and engineers come to a mutual understanding so new ideas bubble up and are incorporated as bad old ideas are identified and discarded. Before starting each iteration everybody discusses, negotiates and quickly reaches an understanding of what is actually required to fulfil the next set of short-term goals.

Thus an agile system organically grows its natural complexity out of a fundamental simplicity. As a result there are fewer surprises, the project risks are minimised and the client is more likely to get software that works.