Sunday, October 27, 2013

D is for Design ... and T for Verification

That is, if we talk about software development and the acronym TDD.
Usually expanded as Test-Driven Development, the acronym has been around since the late 1990s, and Kent Beck's book in particular – rightfully so at the time, in my opinion – made people think of TDD as a development technique.
Now, when you ask different people what "development" is, a lot of them might argue that it is about coding only, while others take a much broader view. This leads many people (including Kent Beck, if I recall correctly) to point out that the second D in TDD is very much about the design aspect of development.
Since I came into closer contact with some proponents of Exploratory Testing (ET) a couple of years back, I can't help but wonder if the whole term is misleading. Of course it is about a certain aspect of testing, but the engineers who "really" do testing in the hardware sector (with cars, planes, elevators, etc.) would consider such tests mere checks or verifications, which don't require a testing specialist to perform. After all, all these "tests" do is check whether a certain assumption made by the developer is met by the system. (Whether the axle distance really is what the designer specified, whether the elevator cable really can hold the specified weight, or... you get the picture.)

The (hardware) testers I know, on the other hand, do something different – and usually leave behind only a pile of scrap metal when they are through with their tests. They test how much weight it takes to break the elevator cable, or at which speed or lateral acceleration torsion changes the distance between the axles (which usually doesn't sit too well with the car), and so on.

And TDD simply doesn't give us that kind of test – the tests that look for the unexpected or the un-specified. And that is why we still need a lot of non-automated (e.g. exploratory) testing.
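To make the distinction concrete, here is a minimal sketch in Python. The `elevator_cable_ok` function and its 1000 kg rating are hypothetical, invented just for this illustration: the assertions at the top are TDD-style checks that verify exactly the stated assumption, while the loop below is a (very crude) exploratory probe that feeds in inputs nobody specified, just to see what actually happens.

```python
def elevator_cable_ok(load_kg):
    """Toy spec (an assumption for this sketch): cable rated up to 1000 kg."""
    return 0 <= load_kg <= 1000

# TDD-style checks: each one verifies a stated assumption, nothing more.
assert elevator_cable_ok(1000)        # at the rated limit: holds
assert not elevator_cable_ok(1001)    # just above the limit: rejected

# Exploratory-style probing: try inputs the spec never mentioned
# and record how the system actually behaves.
surprises = []
for odd_input in [-5, float("nan"), float("inf"), "1000"]:
    try:
        result = elevator_cable_ok(odd_input)
    except Exception as exc:
        result = type(exc).__name__
    surprises.append((odd_input, result))

# For instance, passing the string "1000" raises a TypeError –
# behaviour that none of the checks above ever asked about.
```

The checks will pass forever once the specified behaviour is implemented; the probing is where you discover that, say, a string input blows up, and then decide whether that matters.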

So please bear in mind that in a truly cross-functional team, TDD has its place in development – but so does true testing know-how.

Until next time
  Michael Mahlberg

Sunday, October 13, 2013

From the customer's perspective...

... a story starts and ends with the customer experience. There are no technical stories.

The technical temptation

From a technical point of view, two stories A and B that share some basic functionality might look like an opportunity to extract the technical commonalities into a story C. What tends to happen is that you wind up with three stories A, B, and C that can be easily "estimated" and have an obvious order of implementation: you start by implementing the technical commonality C, and then, once that is stable, you can build A and B on a tried and trusted foundation.
Nice, isn't it?
No - it is not!
Now you have three stories, and none of them conforms to the I (Independent) in the INVEST criteria for good stories. What's worse: you end up with a story that is not testable by the user – story C. To add insult to injury, it is even your first story, further delaying the deployment of code that is useful to end-users.

A solution?

For me the solution to this scenario is to really plan, estimate, and analyze in four dimensions – take time into account. If you implement A first, then A is much bigger than B, because once A is finished, B gets all of A's development effort for free – or at least the part that turns out to really be usable for both. So B is actually bigger on the inside – unless you travel along a different timeline where B gets developed first; in that case A would be the story that reaps the benefits.
In either case, the important message is:

Don't burden the customer with technical stories to get better story-sizing; instead, look at every story in the context of the moment when it gets implemented.

Value gets delivered only when new capabilities are available to end-users.

Until next time
  Michael Mahlberg