Friday, December 09, 2011

Metaphors in software

Many Extreme Programming practices are now considered industry standard or at the very least 'worthy'.
The poor cousin of these is perhaps the idea of the System Metaphor. The intention behind XP's System Metaphor practice is to have a common "story that everyone – customers, programmers, and managers - can tell about how the system works." Often this story is told in the language of the problem domain and there is no metaphor at all (in XP terms this is known as a naïve metaphor).

It was my guess that most systems are developed without a system metaphor (or with only a naïve one). So I thought I'd do an in-depth study.

Less than 20% of devs use the system metaphor. The evidence is damning!

So do we even care about the system metaphor as a practice? If 80% of developers haven't used a system metaphor, why would you even bother? Are developers missing a trick?
A world without metaphors.
Metaphors are everywhere, and in software land it is impossible to exist without them. Why? Because the mechanics of what is actually happening are so far removed from what the typical user can communicate that we need to abstract to understand anything. As a software dev, I don't want the overhead of thinking about how electrons fly around computers, copper wires and through the air, and I probably don't need to know too much about how data gets from my HD into memory, for example. Developers think at one level of abstraction, their clients work at another. The divide is there... too often stakeholders turn around and complain about the 'jargon' the developers use.

So to allow the computer scientists to communicate with the devs they cooked up data structures. Think about the humble stack.
The stack datatype.

A tangible stack is a set of objects placed on top of each other. To get to an item you need to remove the items above it. The last item you put on the stack is the first item you need to deal with. Now the CS boffins who were dreaming up data structures came up with a collection of memory locations that was also last one in, first out. They could have called it anything (like the particle physicists did with up, down, strange, charm, anyone?) and I bet in CS world there were some scientists that just didn't get the Stack straight away. So what's the best way of explaining a concept? Describe it in terms that the explainee understands.
Towers of Hanoi in the cooker


A Stack data type is not vertically aligned. It has no physical dimensions. There are holes in the metaphor. As far as I know most "real" stacks don't pop either, but the metaphor is good enough to convey how the data structure is supposed to function.
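And the metaphor maps straight onto code. Here's a minimal sketch of the last-in-first-out idea using .NET's built-in Stack&lt;T&gt; (the plates are just an illustration):

var plates = new Stack<string>();
plates.Push("dinner plate");      // first one in sits at the bottom
plates.Push("side plate");
plates.Push("saucer");            // last one in sits on top

Console.WriteLine(plates.Pop());  // "saucer" - last on, first off
Console.WriteLine(plates.Peek()); // "side plate" - look at the top without removing it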

The stack is just one example and, to be fair, a stack is a very simple concept. But look around the computer/software world and metaphors are just flippin' everywhere.

In Design Patterns
  • repository
  • bridge
  • adapter
  • factory
  • decorator
  • flyweight
  • mediator
  • command
In Software Concepts
  • error
  • bug
  • loop
  • concrete class
  • file
  • queue
  • peek
  • poke
  • pointer
  • engine
  • layer
  • core
  • service
  • bus


In the OS paradigm
  • file system
  • folders
  • recycle bin
  • window
  • lock
  • desktop
  • mail box
  • firewall
and that's just the first few. Humans are naturally disposed to use metaphor and simile to explain concepts. The relatively dry topic of software engineering is an ideal breeding ground for metaphors. Most concepts are abstract and have no physical tangibility - what choice do we have?

These aren't the sort of systems I build though. I build systems for businesses that help them streamline their business processes. My systems have complexity. How can metaphors help us to construct these systems?

A Common Understanding
Imagine you have two sets of people. The first set are a crack programming unit, the second are a group of domain experts. Both are knowledge specialists in their field. The project cannot fail. But when the two sets talk it's like one group is from Mars and the other from Venus.

You have the option of getting the Martians to speak Venusian, or the Venusians could learn Martian ways. The problem you may come across here is that it takes a lot to learn Martian well. And boy, those Venusians lead complicated lives. They are completely alien to those Martians.

There's a third option: imagine if both Martians and Venusians have been observing Earthlings for a while, then perhaps they could describe their problems using Earthling ideas and Earthling idioms.
(Earthlings, as we know, are simple creatures with well defined funny-little habits).

But metaphors are alien to me?
I suppose it's quite disconcerting not to plump for the naïve metaphor. It seems easy... but is it?
Naïve metaphors are a one-way street. The dev team may know nothing about the domain it is going to work with. The stakeholders hold all the cards.

In the short term it is probably easier for the delivery team to learn about the problem domain.
What can happen though is that there may be an incomplete transferral of knowledge to the delivery team. I suppose this is unlikely to be too much of a problem in an iterative project. Agile accepts that not everything will be right first time, but that doesn't mean we shouldn't aim to close the iteration loops.

By introducing a metaphor the knowledge holders have to 'think' about how their domain relates
to the metaphor. Processes that are 'second nature' need to be deliberately revisited and described not only in the terms of the business stakeholders but in a shared, discovered paradigm.

This joint learning I feel has some value. It seems strange that this practice is not more popular.

Thursday, September 15, 2011

Thoughts on retrofitting tests.

Had an interesting talk with a colleague of mine some time back on the topic of retrofitting tests to a system. I'm not a fan of retrofitting for the sake of it. The, now hopefully well documented, problem with TDD is that the 'test' bit is a side effect. It's practically a distraction from the design aspect of TDD. So when I get told to write 'unit' tests against an existing code base, I accept they are not going to be the same sort of unit tests as those that I would write when I'm doing test-first.

So why do I have an issue?
When we retrofit unit tests, the best we can do is make observations of the current state of the system. This system may be subtly wrong; how can we know, if we have no specification to verify against? At the unit level such detail may not be understood by the business. The scenario that the code was written under is most likely forgotten, or more likely misunderstood. So when we write tests we merely observe and report, we cannot say that we understand.

I came across this issue yesterday. My pair and I were refactoring some code and saw a redundant switch statement. Looking at the system holistically, it was obvious that the code didn't belong so we whipped it out, ran the tests and got red.

The production code was something like this:


        public string GetContent(int fooId)
        {
            var content = string.Empty;

            switch (fooId)
            {
                case 4:
                    return _contentModule.GetContent();
            }

            return content;
        }

We knew that fooId would only ever be 4 (cough, magic number) and so could simplify this method down, but when we ran the tests we failed on:

public void Should_return_stringEmpty_when_fooId_not_equal_to_4()
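For illustration, the retrofitted test probably looked something along these lines (a reconstruction rather than the actual test; ContentService and StubContentModule are names I've made up):

[Test]
public void Should_return_stringEmpty_when_fooId_not_equal_to_4()
{
    // Observes what the code currently does for an input that,
    // in reality, can never occur.
    var sut = new ContentService(new StubContentModule());

    var result = sut.GetContent(99);

    Assert.AreEqual(string.Empty, result);
}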

When I looked at the log in source control it revealed that the tests had been committed 3 months after the production code had been. I'm guessing then that the test author had merely looked at the code and written a test based on what the code actually does rather than the intent.

This retrofitting of tests can therefore be dangerous. If we trust the tests (and we should) then retrofitting can be misleading, and it is against the TDD ethos. Remember kids, you should value specifying over verifying.


Saturday, June 18, 2011

Jack and Jill rewrite a system and then go on a dig

A software system is like a bucket of water.
Photo by Tobias Schlitt

But other than just the code and the persistence of data there's knowledge too. Knowledge about the system that is sometimes stored in documents, sometimes stored in tests but always stored with people.

I've just been watching Scott Bellware's "Beyond Agile" talk and he talks about software development as simply being knowledge transfer. A product sponsor transfers the knowledge to an analyst, who transfers it to the business analyst, who transfers it to the developer, to the tester and into a product.

When we rewrite systems we will leak this knowledge. Exactly as if you tried to pour a full bucket of water into another bucket. Chances are you'll spill a bit. Chances are the documentation is not comprehensive. Chances are the tests don't convey the meaning of the application well enough. Chances are the developers have forgotten what they were doing when they wrote that feature. Chances are the product sponsor has forgotten exactly why they wanted it 'that' way.

When system stakeholders leave the project the bucket springs a leak. You have lost some of that knowledge, even with comprehensive documentation, tests and a full handover. And your bucket is probably not a bucket any more. What once was a bucket is now probably a kettle that does credit card payments... and it's rusty, but only on a Thursday.

So how do I summarise this leaky abstraction? (go on groan)

A software system is more than just code. It is the users, the stakeholders, the delivery team and the state of the system. When rewriting the system it is going to be impossible to relive the journey of knowledge accumulation that the team went through to get to the system you are rewriting. Even though there will be a set of artefacts you will have discovered to reconstruct the system, this will not be the full picture. That's ok, because you only need to do what your stakeholders need now. Reconstructing 100% of the old system when no one knows what 100% is, is wasteful (and impossible).

Rewriting software is like an archaeological dig.
Photo by Ben Garney

Except rather than rebuilding the Roman fort exactly, we extract what we want from it and make it better.

Monday, June 13, 2011

DDDSW 3 - RE: the ReWrite session.

I went to DDDSW on Saturday (a conference based around Microsoft technologies) specifically to see one session.

"Rewriting software is the single worst mistake you can make - apparently" by Phil Collins.


This subject is dear to my mind as I've been involved in many rewrite projects over the years with varying degrees of success. It's also a big part of my studies at the minute.

Phil is a confident speaker and this session is an experience report rather than a "How to" session. Some of the choices that Phil has made I disagree with. I want to share the reasons why I disagree, not to have a pop at Phil but just to get clear in my mind why I do.
 

Language choice
Phil was moving from an old tightly coupled legacy system and he had a free choice of languages. He quite rightly had an R & D session on a few languages. However, the basis of the choice given in the talk was problems with case insensitivity. There are many factors in choosing a language; tooling support and flexibility are two that spring to mind. The issues that you have in the first few weeks you will soon forget, so case insensitivity is a minor distraction in my opinion. Choose a language and platform that is right for the application, not just the rewrite project.

Screen by screen rewrite.
Why? Sure you get all the features, but this is an excellent time to find out exactly what you do use and, more importantly, what you don't. I know this from first-hand experience. I joined a rewrite project late that had done exactly this. There were dozens, and I mean dozens, of text boxes and fields in the application that were not used any more. Whole screens were devoted to parts of the business that just didn't exist any more.

Take the time to talk with your users about each screen. Try and come up with the story behind each feature. If your users don't need it, don't rewrite it. This saves you time and gives you the opportunity to remove cruft from your application.

To be fair Phil did share a story where he added more features based on user interaction. I'm assuming that he may not have blindly rewritten all features.

What Phil has got right:
 

Continuous integration.
Phil lauds this and has a very funky set of build radiators (BVCs of the build status), and I agree. CI is instant feedback and build radiators are key to that feedback.

Slowly, carefully.
Phil tells his devs to go slowly. No project should be done at breakneck speed, otherwise you are under pressure to cut corners. If you are just in a race to rewrite a function you will make mistakes. A rewrite is an opportunity to cut tech debt. Don't make the mistake of adding more.

More Whiteboards
... I love just scribbling down stuff on whiteboards too. It is simply a great way to share ideas and concepts.

What I would do - ascertain the vision
Assuming that the rewrite has to be done, I would try and extract the vision for the project. Why is it that we have to do this rewrite? What are we trying to get out of the rewrite? It may be to improve usability or performance. It may be for legal reasons. It may be that the cost of change is high in the legacy application or that the technology is just not supported.


Observe system usage
Given we now have a vision, a set of woolly statements that we are heading towards, we can now look at the current application and its features. Look at each of the features and see if they still align with how the users are using them. A good idea here is to sit with the users and observe their daily activity. It's an eye-opener to see how your application is actually being used, and it's often not as designed. Of course you will not see all of the usage in just one session, but you will certainly get the core.

Speak with your users
The next step is to interview your users and find out what they think their processes are. At this point you should pick up things they do that you didn't catch on the initial sweep. This is a great opportunity to find clunky procedures that you can improve upon. Many times I've done this and found hidden manual processes too.


Collaborate to write automated specifications
Now that you have a vision, an idea of how the users are using the system and the processes they are trying to follow, you can begin to look at the features the system needs to achieve those processes. These features can be broken down into user stories and then you can provide examples to specify the features. A tool like Cucumber (or SpecFlow) can work well in situations like this. Now that your specs are automated you can get that cosy feedback loop when you check in with your CI system.
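As a rough idea of what that looks like with SpecFlow, each line of a scenario ends up bound to a small C# method. The feature and the domain classes here (OrderBook, Order) are invented for illustration:

[Binding]
public class ArchiveOldOrdersSteps
{
    // OrderBook and Order are illustrative domain classes, not from the talk.
    OrderBook _orders;
    int _archivedCount;

    [Given(@"an order completed more than (\d+) days ago")]
    public void GivenAnOldCompletedOrder(int days)
    {
        _orders = new OrderBook();
        _orders.Add(Order.CompletedDaysAgo(days + 1));
    }

    [When(@"the nightly archive job runs")]
    public void WhenTheNightlyArchiveJobRuns()
    {
        _archivedCount = _orders.ArchiveCompletedOrders();
    }

    [Then(@"that order should be archived")]
    public void ThenThatOrderShouldBeArchived()
    {
        Assert.AreEqual(1, _archivedCount);
    }
}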


Iterate
Split your work up into chunks. Stop, then reflect. Are we missing something? If so, add it. Show your work to your users regularly. Roll out to them as soon as possible and as much as possible.

Summary
Phil's got some good ideas but his focus is different from mine. Check out his blog http://soyouthinkyouneedtorewrite.com/ for some views on rewrites and if he does his talk near you go see it. But also don't rewrite feature for feature. Take the opportunity to prune dead features and improve existing ones.

Sunday, June 05, 2011

Actually all we care about.

Mary Poppendieck is a legend. She's got a fantastic paper about team rewards, but I've just been reading her paper about lean thinking. "Wow John, lean's so old now it's in a nursing home." But hey, I've only just read Mary's principles of lean thinking. It's nothing new to anyone on the agile slug trail, but I kinda missed it.

So for everyone else:

here's what you are doing.

YOU ARE WASTING YOUR TIME
Mary and some guy from Japan have noticed that:


The Seven Wastes of Software Development
Overproduction = Extra Features
Inventory = Requirements
Extra Processing Steps = Extra Steps
Motion = Finding Information
Defects = Defects Not Caught by Tests
Waiting = Waiting, Including Customers
Transportation = Handoffs

Can you see this in your organisation?


Nobody cares about agile but you

A poem:
I remember when I was concerned when agile was something that 'hippies did' (& me). I remember when people thought that agile meant no docs. I vaguely remember the acceptance. i don't remember the corporate adoption. I remember the scrum certification. I remember that now that the literati accept agile as a given. I remember moving jobs to an agile organisation. I remember how nothing had changed. I remember how this was not agile. I remember how we can do better. I remember the alt.net implosion. I remember the bright days of dev div. I remember the betrayal of people's hard work.  I remember getting my team to think about objects. I remember to get them to think about tests. I remember just writing code. I remember loving results. I remember loving the problem I remember  loving quality. I remember evolving design. I remember just wanting to make things better. I remember the stand ups, I remember the cynicism. I remember the hope. I remember the break through. I  try. We improve. we improve.

Saturday, June 04, 2011

ORMs

There has been a backlash against ORMs. I missed it. While you were all messing around with NHibernate, I avoided it. Why? Because I can always write database queries much more easily in SQL.

Always.

My mappings were much better when I mapped in with NH. I've always worked in smaller teams and much of the code has been written procedurally.

There I said it.

But this is why I like ORMs... because several years into .NET, people are still writing procedural code. For the first time people were forced to think about domain objects.

And now we have EF. EF is now getting people to think code-first. (Wow, welcome to the 1980s.)

And now we have stuff like Simple.Data.

ORMs were supposed to remove the impedance mismatch between tables and code. This has only been solved if you forward generate.

For those of us with legacy systems all this backwards mapping is too much of a ball ache.
Minimalist web frameworks don't help, and this is why I'm writing crap like SQL.data.
We don't need to worry about what to call our class because it's the name of the entity. We don't need to worry about joins because it's the best SQL we can write.

Let's not worry about the mismatch because we can do what the devs did in the 80s. We write the best DSL to query data evar. S Q L.

Tuesday, May 24, 2011

Research in re-writing systems.

I'm doing some research into re-writing systems with automated tests and I'm trying to get a feel for what the status quo is in this area.

If you've got a spare couple of minutes to fill in this short survey, I'd very much appreciate it.

My survey

If you don't want to do it.... well baby jesus will kill a kitten or something.

Saturday, April 23, 2011

Test coverage or specification

Whilst writing up some notes I keep changing between tests that cover something (as in test coverage) and tests that specify behaviour. It strikes me that these two concepts are orthogonal to one another. The former is an afterthought, something that aids regression; the latter a prerequisite to writing the code.

The funny thing is that I hear about test coverage but we never talk about specification coverage.
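To make the distinction concrete, here's a rough sketch of the same behaviour written both ways (Account and InsufficientFundsException are invented for illustration):

// Coverage-style: written after the fact, named after the method,
// asserting whatever the code happens to do today.
[Test]
public void TestWithdraw()
{
    var account = new Account(balance: 50);
    account.Withdraw(100);
    Assert.AreEqual(50, account.Balance);
}

// Specification-style: written first, named after the behaviour we want.
[Test]
public void Should_reject_a_withdrawal_that_exceeds_the_balance()
{
    var account = new Account(balance: 50);
    Assert.Throws<InsufficientFundsException>(() => account.Withdraw(100));
}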

TDD for abstract classes

I don't seem to use abstract classes much any more. The tendency is for me to prefer interfaces over abstract classes, as I've been inculcated with the mantra 'composition over inheritance'. Where common behaviour is observed though, it just feels natural to shove that behaviour in, I dunno, something like a superclass.
I like the template pattern too. So I do use abstract classes occasionally.

Concerning testing, I have tended to duplicate the testing of this behaviour in the past. This is something that has never sat well with me. Duplication in testing may be slightly less repugnant than duplication in production code, but there must be a valid reason for it.

How should we tackle testing abstract methods then?
Here are a couple of strategies.

Firstly here is my class
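Something along these lines (an illustrative stand-in rather than the actual class from the post; MessageSender and SmsSender are made-up names):

public abstract class MessageSender
{
    // The common behaviour lives in the superclass...
    public string Send(string body)
    {
        if (string.IsNullOrEmpty(body))
            throw new ArgumentException("A message needs a body");

        return Format(body);
    }

    // ...and the variation is pushed down to the subclasses (template method).
    protected abstract string Format(string body);
}

public class SmsSender : MessageSender
{
    protected override string Format(string body)
    {
        return body.Length > 160 ? body.Substring(0, 160) : body;
    }
}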

An abstract test class
We can put the tests for the common behaviour into an abstract test class using a protected but uninitialised instance member. In our concrete subclass test class we inherit from the abstract test class and instantiate the subclass variable. When we run the tests, the superclass tests run against the subclass.
If the subclass has more methods that need to be tested then a clumsy cast of the concrete type over the instance variable has to be performed each time we test. Alternatively a separate member of the subclass type needs to be set up as well as the abstract one.
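A rough sketch of that first strategy, reusing the illustrative MessageSender class from above (NUnit syntax assumed):

public abstract class MessageSenderSpecs
{
    // Left uninitialised; each concrete test class supplies the instance.
    protected MessageSender sender;

    [Test]
    public void Should_reject_an_empty_body()
    {
        Assert.Throws<ArgumentException>(() => sender.Send(""));
    }
}

[TestFixture]
public class SmsSenderSpecs : MessageSenderSpecs
{
    [SetUp]
    public void CreateSender()
    {
        sender = new SmsSender();
    }

    // Any SmsSender-only members would need a cast of 'sender'
    // (or a second, typed field) in tests added here.
}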

A mock of the abstract class
Alternatively we can use a mocking framework to mock the abstract class and run tests on that.
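With Moq, for example, the abstract class can be mocked directly and told to call the real base implementations (again using the illustrative MessageSender):

[Test]
public void Should_reject_an_empty_body()
{
    // CallBase = true makes the mock use MessageSender's real Send method,
    // while the abstract Format falls back to a harmless default.
    var mock = new Mock<MessageSender> { CallBase = true };

    Assert.Throws<ArgumentException>(() => mock.Object.Send(""));
}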

A fake concrete class
If the feeling of mocking an abstract class doesn't sit well, a slight variation would be to create a fake concrete subclass and perform the tests on that.
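Or hand-roll the fake: a trivial subclass that lives only in the test project (still using the made-up MessageSender):

class FakeMessageSender : MessageSender
{
    // Just enough implementation to let the base behaviour be exercised.
    protected override string Format(string body)
    {
        return body;
    }
}

[Test]
public void Should_reject_an_empty_body()
{
    Assert.Throws<ArgumentException>(() => new FakeMessageSender().Send(""));
}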

If I was using MSpec today I could have used its behaviours functionality, which may have been simpler, but I wanted to see what options I had from a TDD point of view.

Do you use any other strategies for testing abstract classes?

All the code if you want it

Friday, April 22, 2011

TDD Strategies

Just scribbling some notes thought this looked bloggable:


Test-driven development can be approached using at least two strategies: a mock-based approach or a state-based approach (Martin Fowler calls this a classicist approach http://martinfowler.com/articles/mocksArentStubs.html). When presented with the initial idea, TDD seems simple; write the test to verify the behaviour of the code you are about to write. A common example of this is a calculator.

Say we have a calculator class that has the following method:

public int Add(int integer1, int integer2)

We may write a test to verify this works first;

public void Should_Sum_inputs_together()
{
     var result = new Calculator().Add(1,4);
     Assert.AreEqual(5,result);
}

This is probably the de facto example of a state-based testing approach. An action is performed on a class, and then its state is interrogated or a return value is verified. In many introductions to TDD you will find trivial examples such as this one. When test-driving code ‘in the wild’ you soon find that certain things become more complex very quickly.
Mock based TDD
Continuing on with the calculator example we can see where state based testing alone may not be enough.

Say we move a little closer to the GUI and have a ButtonInterface class that implements the interface,

public interface IButtonInterface
{
     Button Zero { get; }
     Button One { get; }

     Button Nine { get; }
     Button Add { get; }
     Button Equals { get; }
     void PressButton(Button button);
}

Our test may be:
public void Should_Sum_inputs_together()
{
     // mock a calculator -- this is still pseudo code, loosely based on Moq
     var mock = new Mock<ICalculator>();
     ICalculator calc = mock.Object;
     IButtonInterface calcUI = new ButtonInterface(calc);
     calcUI.PressButton(calcUI.One);
     calcUI.PressButton(calcUI.Add);
     calcUI.PressButton(calcUI.Four);
     calcUI.PressButton(calcUI.Equals);
     mock.Verify(c => c.Add(1, 4));
}

When testing the button interface we do not need to know how the class that does the addition manages the task. It is not the responsibility of the button interface to do so. However it is its responsibility to take the button inputs and send the appropriate messages to the calculator.
This is where mock-based testing comes in. Perhaps the implementation of ICalculator hasn't been written yet, or perhaps there is a cost in setting up an ICalculator; for example, it may be dependent on a filesystem or a database. In those cases tests may run slowly due to getting the dependency into the correct state and then resetting it afterwards. In these examples, and given the more striking fact that the ButtonInterface doesn't care, all we need to verify is that the correct message has been sent to the correct interface.

Saturday, March 19, 2011

Getting Better Steadily

Tim Ottinger asked me to write a few words for him about the Agile in a Flash cards he co-authored. I bought these recently to help us reboot the team into something more agile. The cards are great, concise pieces of information that we reflect on each morning. They started off on my desk but it was fantastic to see them missing from my desk and being talked about by the other developers. This hasn't happened with books or blog posts because the information in a book is usually buried 100s of pages in and 19" monitors are, well, just not mobile enough. The cards invite being passed around and collaborated with. It's wonderful to see that happening.

Anyway if you want to, read my story on the Agile in a Flash blog

Monday, February 28, 2011

Design contradictions between test levels

I thought I'd spend tonight finishing off the BDD Chessboard Kata exercise I started with Jim McDonald at an XP Manchester coding dojo. Jim is a fan of emergent design, whereas I advocate a little design up front.
For those who don't know the kata: there are a number of predefined specifications written in the Gherkin DSL, based around the movement of 2 chess pieces on a chessboard.
This is the feature we were implementing.

Feature: Boundaries of the board.
In order to obey the rules of Chess
As a player
I want to be prevented from entering moves outside the boundary of the board.

Scenario: Pawn at top.
Given I have a White Pawn at A8
And I have a Black Knight at A1
When I move the Pawn to A9
Then I should be warned of an illegal move message

Scenario: Knight heads off board
Given I have a Black Knight at G8
And I have a White Pawn at A1
And I move the Pawn to A2
When I move the Knight to I7
Then I should be warned of an illegal move message
We've completed most of the steps in each of the scenarios using a Location class implementing


    public interface ILocation
    {
        void MoveTo(string location);
    }

Each time we touched a class we've covered it with MSpec tests.

    [Subject(typeof(Location))]
    public class When_creating_a_location_with_an_invalid_position
    {

        Because of = () => exception = Catch.Exception(() => new Location("Z22"));

        Behaves_like<AnInvalidLocation> its_in_the_wrong_location;

        protected static Exception exception;
    }

    [Subject(typeof(Location))]
    public class When_moving_a_piece_to_invalid_location
    {
        Establish that = () => location = new Location("A1");
        Because of = () => exception = Catch.Exception(() => location.MoveTo("X2"));
        Behaves_like<AnInvalidLocation> an_invalid_move;

        protected static Exception exception;
        static ILocation location;
    }
I've DRYed out (a little) the invalid move behaviour in my Location.

Now that I've come to implement my final step, "Then I should be warned of an illegal move message",
I suddenly need to implement the invalid move behaviour. (Even though the WHEN is where the invalid move occurs, I haven't needed to write that behaviour until now.)

So this is where the emergent design has led me. If I am to follow the YAGNI or LRM principles I don't want to do anything more than display a message. But if I change the behaviour of my Location class's MoveTo method to return a message, then the good design principles that my class-level specifications have led me to are at odds with getting the specification to pass.

I am trying not to introduce another class to handle game messaging but by not doing so I undo my hard work at the class level. Is this the last responsible moment?
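To make the tension concrete, the step I'm stuck on would look something like this (a SpecFlow-style binding, with the field and message invented for illustration):

[Then(@"I should be warned of an illegal move message")]
public void ThenIShouldBeWarnedOfAnIllegalMoveMessage()
{
    // Option 1: MoveTo returns the warning - easy here, but it muddies
    //           the Location class with messaging concerns.
    // Option 2: a separate game-messaging collaborator owns the warning -
    //           cleaner at the class level, but it's another class I
    //           arguably don't need yet.
    Assert.AreEqual("Illegal move", _lastMessage); // _lastMessage is hypothetical
}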



Sunday, February 20, 2011

Thoughts on Xp Manchester BDD session

Last week I ran a session on BDD at XP Manchester. I'm writing up some notes on it so I thought I'd share them with the wider world.

A lot of the reading I've done on BDD has mainly focused on the tooling. This to a certain extent is unsurprising, as the inception of BDD was about getting people out of the 'test' mindset. But as Dan North says, "Don't lose sight of the bigger picture or you'll miss all the good stuff". What is the bigger picture? Well, that's a good question. Even though there is a Wikipedia entry (more on this later), and a seemingly abandoned wiki-driven website, it isn't completely clear what exactly BDD is. I thought that without a clear definition it would be a useful exercise to find out what other practitioners thought.

Cue the XP Manchester session. I opened up with a very brief summary of what I thought people knew about BDD; the emphasis was on the tooling. (For the latecomers, those were those magical 2 minutes.)
Following the brief synopsis I wanted people's definitions of BDD. I got the group to write down their thoughts in brief.

Attributes of BDD
In case you can't read my scrawl, the board read:

  • It helps requirements
  • It is automating acceptance criteria
  • It is about communication
  • It is a DSL
  • It is a process
  • It is a specification
  • It is a pull based system approach
  • It links features and functionality
The group seemed to focus on the automated acceptance testing side of it. Even though BDD as a process was mentioned the majority of the group seemed to lean towards BDD as an acceptance framework.

What I wanted to do was for the group to define BDD, the methodology. 




In doing the research for the session, I asked a question on the BDD Google group. It's a good thread, well worth a read. Liz Keogh pointed me in the direction of a BDD definition (which can be found on Wikipedia).

BDD is a second-generation, outside-in, pull-based, multiple-stakeholder, multiple-scale, high-automation, agile methodology. It describes a cycle of interactions with well-defined outputs, resulting in the delivery of working, tested software that matters.  
Discussion on definition
There's a lot in that. We spent most of the rest of the session discussing the different parts of those two sentences. The trouble with interactive sessions is that the discussion can drift in other directions or falter, and this part of the session wasn't quite how I'd imagined it would go. Because I had shown the group the definition, the discussion led down that path rather than creating something new.

So one of the main discussion points was around outside-in. The group consensus was that we work out what the user wants, then implement that starting from the UI. I touched on the analysis aspects of outside-in: when we do analysis on the domain, we start from what we know and then we delve down into the problem space until we have an adequate understanding.

The other point which I feel got some interest was that BDD encompasses all stakeholders. Not just the project sponsors or the users, but developers, infrastructure, everyone. We postulated that this would throw up a lot more scenarios. Following on from this we discussed how classes are microsystems, and that unit tests (or context specifications in our case) are just acceptance tests at a micro level.

Following the discussion I wanted people to code, so I devised a simple BDD exercise, supplying Gherkin scenarios. I'm doing an MSc at the moment so I intend to analyse the results in my thesis.

If anybody fancies doing the exercise I'll post a follow-up blog on where it is and how you can help me.

Slides are below:

Sunday, January 16, 2011

Help I'm a Windows developer get me out of here! (Or The Escape from Vim)

Git is a great product. Just when we thought the problem of source control had been solved, Linus Torvalds came along, looked at the problem and made a superior product to the tools I had begun to take for granted. However, Linus didn't design Git with Windows developers in mind. We are used to rich user interfaces, whereas Git's command line interface is where its strength lies. In fact Gitk (the GUI client for Git) is an abomination and, I suspect, a major reason why Git's adoption rate is not much higher. I use msysgit through the Bash shell and everything is rosy, normally. Sometimes I get myself into a pickle.

Git is fussy; you have to commit with a message. I usually use the message flag (git commit -m "commit message") so I don't have to drop into the frankly alien Vim. On occasion I forget, so I thought I'd document how to get out of it.

tl;dr - just get me out of here!

You start off in a screen with lots of garish colours and it seems that none of your keystrokes do anything. Even typing sensible stuff like 'quit' or 'exit' just doesn't work! No doubt you are completely exasperated. What's happened is that Git has started the Vim editor and you are in COMMAND mode. Yes, Vim doesn't let you edit straight away. You have to tell it to. What you want is to put Vim into INSERT mode. This will allow you to start editing.

Press i to enter INSERT mode. You can tell you are in INSERT mode by two things. Firstly you will see -- INSERT -- in the bottom left of the console window, and secondly when you press an alphanumeric key the value will actually appear in the editor! You can now use the keyboard to type the commit message.

When you have finished the message you will then want to either save the message or abandon your changes. To do this you need to re-enter COMMAND mode. Press ESC to do this.


To save changes the Unix people have made it simple for us Windows types. On a QWERTY keyboard you simply press the key above S, for save. It's obvious! But before you do that, just press the colon key. So that's :w to save. In reality you will just want to save and exit at the same time. Again, to exit you simply use the key just to the left of that, the q key. So you can either chain the Save and Exit :wq or just Exit :q.


Summary


So there you have it. To escape Vim and commit your changes do the following.
  1. press i to enter insert mode
  2. write your commit message
  3. press ESC to return to command mode
  4. type :wq to save and quit (think write and quit) OR abandon the message with :q!
Hope this helps someone.

Saturday, January 15, 2011

Xp Man XL

I think it was probably 5 years ago that I read Kent Beck's XP Explained. The book had a profound effect on me, not because there was anything radical but more because the ideas it contained just seemed to resonate with me. 5 years have passed and Extreme Programming kind of went off the radar. When people thought of Agile they generally meant Scrum, and many of the practices I associated with XP were deemed superfluous or perhaps too extreme.


However, over the last 7 months or so Manchester has started its own XP group and I believe that interest is growing again. As well as monthly meetings at the ever accommodating MadLab, the group organises short guerrilla coding dojos and today we had a code retreat, an extra large XP Manchester if you will.

The day was split into 2 halves. In the morning we had 3 sessions of Conway's Game of Life and in the afternoon we tackled 3 sessions of the CheckOut kata. Time passed surprisingly quickly and both halves were interesting. The group was mainly .NET developers, then Python, a Ruby dev and a Java dev.


Conway's Game of Life is a simple simulation of cellular generation. Cell lifetimes are governed by simple rules and so the problem lends itself easily to any language. The group was split into pairs and tasked with finishing the problem. We then swapped pairs and started the problem again, encouraged to approach the problem in a different way. I worked with C# & MsTest, then C#/MSpec, then Python and its unit test framework.
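For anyone who hasn't met the kata, the rules boil down to something like this (a minimal C# sketch, one of many possible shapes for it):

// Decides whether a cell is alive in the next generation,
// given whether it is alive now and how many live neighbours it has.
public static bool AliveNextGeneration(bool aliveNow, int liveNeighbours)
{
    if (aliveNow)
        return liveNeighbours == 2 || liveNeighbours == 3; // survival
    return liveNeighbours == 3;                            // reproduction
}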

What I took from the retrospectives was the diversity of approaches to a relatively simple problem. Whereas other groups kept thinking in terms of grids and cells, my pairs concentrated on the behaviour. Some groups gave cells a tolerance; there were different state models. It just goes to show how we interpret language differently.


The afternoon CheckOut kata sessions were done in a different way. We had one session to finish the kata, one to refuctor it and we were supposedly going to have a third to refactor it back. Fortunately we didn't have the chance to do the third session, and there were some great examples of terrible code that I certainly didn't want to touch with a barge pole.
Refuctoring is the process of taking a well-designed piece of code and, through a series of small, reversible changes, making it completely unmaintainable by anybody except yourself. Comprehensive regression testing guarantees that nobody will be any the wiser.   
Surprisingly, it was difficult to deliberately write bad code intentionally. (Yes, I said intentionally.) However, once you got going it was a lot of fun. I worked with Mark Kirschstein and we had fun with XML, huge switch statements, dead code and much more. The stand-out worst involved an ASCII Dr. Spock and some brilliant LINQ abuse.

The day was lots of fun and it was great to see some Python written by a Python developer. The group is full of people who love their trade and it's a pleasure to write code with them. These events just give so much. Unless you are lucky it's rare that you get to work with developers who are willing to improve and share their skills. If you are interested in attending or contributing a session in the future please join the Google group or follow the blog.

Thanks to everyone involved today!