Pivotaltracker plugin: Importance of Integration Tests

When I developed the pivotaltracker plugin to extract our backlogs (here is the post), instead of creating a mock application to use it, I gave the library to my colleague so he could integrate it into the real application that my frontend application would have used to retrieve all the backlogs.

Everything went well until the application tried to parse the stories from the final contract; then the application crashed miserably.
My colleague and I worked side by side:
The first thing I did was run my tests: all passed.
Then I checked the response of the PivotalTracker API, in case they had changed something: nope, the response of the API was exactly what I expected to receive.
So what happened?
Why didn’t my test represent the real situation correctly?

[Test]
public void It_should_be_possible_deserialize_stories_from_a_xml()
{
	var xml = "<stories type=\"array\">" +
                      "<story>" +
                         "<project_id type=\"integer\">000000</project_id>" +
                         "<name>A Name</name>" +
                      "</story>" +
                      "<story>...</story>" +
                    "</stories>";

	var target = new PivotalDeserializer("xml");
	var result = target.DeserializeStories(xml);

	Assert.IsTrue(result.Count() == 2);
	Assert.AreEqual("000000", result.FirstOrDefault().project_id);
	Assert.AreEqual("A Name", result.FirstOrDefault().name);
}

I had mocked the data in the wrong way, or more precisely, I had used only the subset I was looking for instead of the complete response.
A sample of a complete response is below:

 <iterations type="array">
      <iteration>
        ...
        <stories type="array">
          <story>...</story>
        </stories>
      </iteration>
 </iterations>

The principle of unit testing is to make the test pass in the simplest, most direct way possible, and so, because of the subset I had copied, in my production code I passed the XML directly to the XmlSerializer instead of searching inside it for the “<stories>” node.
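
The code we wrote on my colleague’s machine is not reproduced here, but a minimal sketch of the idea could look like this (class and method names are my illustration, not necessarily the plugin’s): find the “<stories>” node wherever it sits in the document and hand only that fragment to the XmlSerializer.

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using System.Xml.Serialization;

public class Story
{
	[XmlElement("project_id")]
	public string project_id { get; set; }

	[XmlElement("name")]
	public string name { get; set; }
}

[XmlRoot("stories")]
public class StoryList
{
	[XmlElement("story")]
	public List<Story> Stories { get; set; }
}

public static class StoriesParser
{
	// Search the whole document for the <stories> node (it may be nested
	// inside <iterations><iteration>) and deserialize only that fragment.
	public static IEnumerable<Story> DeserializeStories(string xml)
	{
		var storiesNode = XDocument.Parse(xml).Descendants("stories").FirstOrDefault();
		if (storiesNode == null)
			return Enumerable.Empty<Story>();

		var serializer = new XmlSerializer(typeof(StoryList));
		using (var reader = storiesNode.CreateReader())
			return ((StoryList)serializer.Deserialize(reader)).Stories;
	}
}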

The problem with trusting only your unit tests is that if you make a bad assumption or, worse, make mistakes while mocking data, you risk getting into trouble.

Integration tests and end-to-end tests are perfect for verifying the consistency of your application, or for testing it in a more realistic scenario, and even if they cost more than unit tests, sometimes it is better to invest the time to create and maintain them.
They will repay you in the future.

Also, if you work with legacy code, they are the only way to secure your application before starting a hard refactoring, but I want to write a specific post about that topic.

In general, as a personal rule, I write integration tests (the happy path at minimum) for every part of my application that touches or depends on an external source (a sketch of such a test follows the list below).

In this way:

  1. Every time I run the integration tests, if the external source has changed I notice it immediately.
  2. I am sure of at least the happy path, and if I have made a mistake mocking data in the unit tests I notice it immediately.
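
Sticking to this post’s example, a minimal happy-path sketch could look like the test below; the PivotalTrackerClient wrapper, its GetBacklogStories method and the environment variables are assumptions of mine, not the plugin’s real API.

using System;
using System.Linq;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class PivotalTrackerIntegrationTests
{
	[Test]
	public void It_should_retrieve_at_least_one_story_from_the_real_api()
	{
		// Hypothetical configuration: a real token and project id taken from
		// the environment, so the test talks to the live service instead of mocked data.
		var token = Environment.GetEnvironmentVariable("TRACKER_TOKEN");
		var projectId = Environment.GetEnvironmentVariable("TRACKER_PROJECT_ID");

		var client = new PivotalTrackerClient(token, projectId); // hypothetical wrapper
		var stories = client.GetBacklogStories();                // hypothetical method

		// Happy path only: if the contract of the real response changes,
		// this test fails immediately and points at the external source.
		Assert.IsNotNull(stories);
		Assert.IsTrue(stories.Any());
	}
}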

In that specific case we worked directly on the final application, but when I came back to my PC and to my master solution, I replaced the mocked data with a real response instead of a subset.

I ran the tests: failed.

I ported into my solution the code written with my colleague in his solution.

I ran the tests: passed.

Just to double check, I put back the old mocked data to be sure that the new code correctly searched for the “<stories>” node even in a different XML structure.

I ran the tests: passed.

To avoid problems in the future, I planned to create the integration tests that I should have created from the beginning.
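
For completeness, this is roughly how the updated unit test looks once the mocked data mirrors the real response shape (stories nested inside <iterations><iteration>) instead of the bare “<stories>” subset; it is my reconstruction, not the exact test from the plugin, and it lives in the same fixture as the test shown earlier.

[Test]
public void It_should_deserialize_stories_nested_inside_iterations()
{
	// Mocked data shaped like the real response, not just the subset I care about.
	var xml = "<iterations type=\"array\">" +
	            "<iteration>" +
	              "<stories type=\"array\">" +
	                "<story>" +
	                  "<project_id type=\"integer\">000000</project_id>" +
	                  "<name>A Name</name>" +
	                "</story>" +
	              "</stories>" +
	            "</iteration>" +
	          "</iterations>";

	var target = new PivotalDeserializer("xml");
	var result = target.DeserializeStories(xml);

	Assert.IsTrue(result.Count() == 1);
	Assert.AreEqual("000000", result.FirstOrDefault().project_id);
	Assert.AreEqual("A Name", result.FirstOrDefault().name);
}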

5 thoughts on “Pivotaltracker plugin: Importance of Integration Tests”

  1. As we discussed this evening… I’m writing an interpreter for a programming language as a pet project and I went with a sort of end-to-end TDD approach: I write a program in the guest language which doesn’t work on the interpreter yet, turn it into a test, and then I solve it in the host language.

    This approach has been fantastic so far and especially suitable for my biggest problem: the fact that I don’t know where I want to go :). I reached a stage where 90% of the language constructs were supported and tested, and performance sucked. Profiling showed the culprit was the scope implementation, done through maps/dictionaries instead of a classic stack; I rewrote the whole big thing and was able to reach 100% passing tests again without touching a line of test code. I’m now considering switching the interpretation approach completely (from executing the ASTs directly to a VM approach) and yet again I won’t need to change a line in the tests. Once my mind is a little clearer on the technology choices and the implementation is a little more stable, I will write real unit tests to verify isolated components in depth. But so far this approach has been perfect.

    Sadly, I don’t know how much it’s applicable to other problems without considerable mocking (which, basically, defeats the purpose).

    • A very short answer:
      I think that here we have two different topics: the importance of testing, and TDD.
      You can write tests without performing TDD.
      If you don’t perform TDD, it doesn’t mean that you are wasting your time writing tests.
      End-to-end tests and integration tests are very powerful things; in some way any kind of test can drive your development, but unit tests are certainly what make TDD possible, and only if you start from the test: if you write your unit tests after your code, your development can’t be driven by tests 🙂

      In your specific case it is possible that end-to-end tests drive your development and are enough to give you a safety net.
      The problem, in the standard case, is that you can’t be sure you have covered every permutation in your end-to-end tests, simply because the complexity of your application makes it impossible to be sure you have described and solved every case.

      Your application is very specific and its domain is very bounded, so I think it is possible to be safe with end-to-end tests only, and starting with this kind of test it is possible that you are very close to pure TDD.

      • I disagree that TDD implies unit tests – it implies a cycle of failing test – satisfy test – refactoring, not that those tests must be “unit” ones.

        I totally agree on the fact that the application type is very very specific and uncommon (and fun!).

      • I understand what you mean, but it implies that every part of your application should be driven; in an end-to-end test you can have something similar to:

        var target = new AwesomeSolution();
        var result = target.SolveTheProblem();

        Assert.IsTrue(result.IsSolved);

        Assuming that your AwesomeSolution uses other objects to solve the problem, passing through many layers, something like:

        AwesomeFinder
        AwesomeRepository
        AwesomeResolver
        AwesomeConverter

        For sure you’ll have the complete solution driven by tests, but every specific component won’t be, simply because you didn’t write specific tests for them, and you can’t really be sure you have had a specific “failing test – satisfy test – refactoring” cycle for every single component while trusting only the end-to-end test.

        In your specific case it could be that all your components went through the cycle just by writing the end-to-end tests.
        But in a common case I’m a bit skeptical.
