Sunday, July 27, 2014

After the fact - a new role for function points?

Just the other day I was chatting with a friend about the place of function points in software development.

While they are traditionally used as an approach to estimate the effort required to build a system, from my point of view this role has changed with the current prevalence of lean and agile methods.

Using function points to estimate effort in new project work

... is (IMHO) a difficult feat, because it would require a serious amount of functional decomposition, which in turn would require extensive analysis – which in itself would be a serious step towards BDUF (big design up front). Furthermore it would require so much effort that a separate project would be necessary just to get the funding for the work.

And this approach is neither very agile nor very lean. It does not address the knowledge gain – both about what the project is about and on how to go about the solution – during the project.

Making work between projects comparable with function points

... on the other hand seems quite feasible to me. Usually, after we have finished the work (and of course in an agile environment we have finished, really finished, at least some work after the first iteration) we have tangible building blocks that can easily be measured and counted – in function points.

Using function points to plan big projects

... is not such a good idea from my point of view – even when it is considered viable because epics seem too hard to plan with planning poker.
In my opinion, using function point analysis for up-front planning is almost dangerous – for the aforementioned reasons of extensive up-front work (and the implicit commitment to solutions).
If estimating epics seems too hard, there are probably other reasons involved that would still be valid if function point analysis were used. But with the kind of up-front analysis that often seems appropriate for function point analysis, these reasons might become hidden behind too much detail. The problem with planning poker is of course that the "consensus amongst experts" it inherited from Wideband Delphi depends on a certain level of detail and on a sufficient number of available experts from the different areas of expertise.

In the end, all that planning poker does is condense the formal approach of Wideband Delphi into a seemingly more informal approach based on verbal communication. Establishing a basis for estimation and installing a cross-functional group of experts is still necessary – even if the process that can take weeks in Wideband Delphi is condensed into a relatively short interactive meeting. In a software development setting such a group could consist of, for example, marketing, software architects, database engineers, UX specialists, testers, quality assurance, technical writers, and so on.

If the requirements can't be estimated well enough, that problem is often rooted in too little experience in the domain, or in a missing decomposition into manageable – and understandable – units, for example stories on the next (more concrete) level of abstraction.
While function point analysis also enforces the decomposition of the requirements, it tends to drive the analysis towards a mindset of "What can be counted in function point analysis?" instead of a mindset of "What is a capability of the system that can actually be leveraged by an end-user and that I can estimate?" Therefore there is a genuine risk of starting to operate in the solution space before the problem space has even been explored well enough.

So, instead of opting for function point analysis when epics seem un-estimatable, I would rather suggest breaking the epics down into a form that allows a solid comparison with things that have been done before. One approach might be to at least name the stories on the next, less abstract, level – and additionally to walk through a couple of user journeys.

Using function point analysis to plan small increments of existing software

... on the other hand is a surprisingly good idea in my book.

The questions that have to be answered to get to the function point count revolve around things like:

  • How many (already existing!) screens have to be modified and how complex are they?
  • How many tables are involved?
    (The data model and its physical representation usually also exist with existing, running software)
  • How many interfaces have to be touched? Are they inbound or outbound?
    Remember: The system is running already, so the interfaces are either already in place or an explicit part of the requirement.
  • How many functional blocks of what complexity are affected?

All of these issues are cleanly cut when adding small, well-defined requirements to an already existing system and thus can be counted quite easily. When implementing completely new epics, trying to put numbers to these issues requires at least the creation (a.k.a. design) of a conceptual data model and a functional decomposition of the requirements – things you would rather do during the discovery of the system, during and alongside the implementation.
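To make the counting idea concrete, here is a minimal sketch in Python. The component types and weights are purely illustrative (they only loosely resemble the official IFPUG weight tables), and the whole thing is just a sum over the answers to the questions above:

```python
# Illustrative sketch of an unadjusted function point count for a small
# change to an existing system. Component types and weights are made up
# for this example - real counts follow the IFPUG counting practices.

WEIGHTS = {
    # (component type, complexity) -> weight
    ("screen", "simple"): 3, ("screen", "complex"): 6,
    ("table", "simple"): 7, ("table", "complex"): 15,
    ("interface", "simple"): 5, ("interface", "complex"): 10,
}

def count_function_points(components):
    """components: list of (type, complexity) tuples describing the change."""
    return sum(WEIGHTS[c] for c in components)

# A small increment: two existing screens modified, one table touched,
# one outbound interface changed.
change = [("screen", "simple"), ("screen", "complex"),
          ("table", "simple"), ("interface", "simple")]
print(count_function_points(change))  # -> 21
```

The point is not the exact numbers but that, for a running system, every input to this sum can be read off the existing software instead of being designed up front.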

My conclusion:
Function points can be really ugly critters – but used to the right ends they can be a tremendously efficient means.

'til next time
  Michael Mahlberg

Sunday, July 13, 2014

Testing: How to get the data into the system

Even though the correct term for a lot of the "testing" going on would be verification, let's just stick with "testing" in the titles for the time being...

General verification workflow

The general way to verify that a piece of software does what it is meant to do seems quite simple:

  • Formulate the desired outcome for a defined series of actions
  • Put the system in a known state (or the sub-system or the “unit” – depending on your testing goal)
  • Execute the aforementioned defined actions
  • Verify that the desired outcome is actually achieved
  • [Optional] Clean up the system [1]

While this process sounds simple enough, there are enough pitfalls hidden in these few steps to have spawned a whole industry and filled dozens of books.
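In xUnit-style frameworks the steps above map almost one to one onto the test lifecycle. A minimal sketch with Python's unittest, using a made-up ShoppingCart as a stand-in for the system under test:

```python
import unittest

class ShoppingCart:
    # Tiny stand-in "system under test" so the example is self-contained.
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def total_count(self):
        return len(self.items)

class CartTest(unittest.TestCase):
    def setUp(self):
        # Put the system in a known state: a cart with one item.
        self.cart = ShoppingCart()
        self.cart.add("book")

    def test_adding_an_item(self):
        # Execute the defined actions ...
        self.cart.add("pen")
        # ... and verify that the desired outcome is actually achieved.
        self.assertEqual(self.cart.total_count(), 2)

    def tearDown(self):
        # [Optional] clean up - trivial here, but this is where it belongs.
        self.cart = None

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
result = unittest.TextTestRunner().run(suite)
```

Note that the known state lives in setUp, not in tearDown – exactly the point made in the footnote below.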

In this post I want to tackle a very specific aspect – the part where the system is put into a “known state”.

Putting the system into a known state might involve several – more or less complex – actions. Nowadays, when it is possible to automate and orchestrate the whole creation and setup of machines with tools like Vagrant and Puppet, even the entire environment can be set up programmatically.

You might not want to do that for each unit test, which brings us to the question of when to set up what – which I will try to address in a future post.

The problem with the data

However big or small the test-setup is, one thing that is very hard to avoid is providing data.

The state of the system (including its data) is often called a fixture, and having those fixtures – known states of the system with reliable, known data – is a fundamental prerequisite for any kind of serious testing, be it manual or automated.

For any system of significant size, if there are no fixtures there is no way to tell whether the system behaves as desired.

Getting the data into the system: Some options

In general there are three ways to get the data into the system:

  • Save a known state of the data and import it into the system before the tests are run.
    In this scenario the important question is "which part of the data do I load at which time", because the tests might of course interfere with each other and probably mess up the data – especially if they fail. Consider using this approach only in conjunction with proper setups before each test, amended by assertions and backed up by "on the fly" data-generation where necessary.
  • Create the data on the fly via the means of the system.
    Typically for acceptance tests this means UI-interaction – probably not the way you want to go if you have to run hundreds of tests. Consider implementing an interface, that can be accessed programmatically from outside the system, that uses the same internal mechanisms for data creation as the rest of the software.
  • Create the data on the fly directly (via the datastore layer).
    This approach has the tempting property that it can be extremely fast and can be implemented without designing the system under test specifically for testability. The huge problem with this approach is that it duplicates knowledge (or assumptions) about the system's internal structures and concepts – a thing that we usually try to avoid. Consider just not using this approach!
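The second option is the one I would lean towards, so here is a hedged sketch of what such a programmatic "backdoor" for test data could look like. All the names (CustomerService, TestDataBuilder) are invented for this illustration – the point is that the fixture helper goes through the same internal mechanisms, including validation, that the rest of the software uses:

```python
class CustomerService:
    # Stand-in for the system's real data-creation logic.
    def __init__(self):
        self._customers = {}

    def register(self, name, email):
        if "@" not in email:  # the same validation the UI path would trigger
            raise ValueError("invalid email")
        cid = len(self._customers) + 1
        self._customers[cid] = {"name": name, "email": email}
        return cid

    def get(self, cid):
        return self._customers[cid]

class TestDataBuilder:
    """Fixture helper exposed to tests only - not part of the production API."""
    def __init__(self, service):
        self.service = service

    def customer(self, name="Jane Doe", email="jane@example.com"):
        # Creates data via the system's own means, bypassing only the UI.
        return self.service.register(name, email)

# Usage in a test setup:
service = CustomerService()
fixture = TestDataBuilder(service)
cid = fixture.customer(name="John")
assert service.get(cid)["name"] == "John"
```

Because the builder delegates to the real service, invalid fixture data fails in exactly the same way it would in production – unlike data written directly into the datastore.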

So, do you actually have fixtures? And how do you get to your data?

’til next time
  Michael Mahlberg


[1]

(One can either put the effort in after the test or in the setup of the test – or split the effort between the two places – but the effort to make sure that the system is in the correct state always has to go into the setup. Cleaning up after the test can help a lot in terms of performance and ramp-up time, but it cannot serve as a substitute for a thorough setup.)

Friday, July 04, 2014

How to get to the value(s)?

Values! That's what the Agile Manifesto – and hence the whole agile software development movement – is all about! Or is it?

Waterfall

Do you know how waterfall first came into being? There are many stories, but a lot of them start with an article by Dr. Winston W. Royce, presented at the 1970 WESCON.

There the classical waterfall approach is laid out in the first few sentences and pictures.

And for many a reader that was enough!

And so they missed that he continued by stating that, while he believed in the principal idea [of doing analysis and design prior to programming], he thought the implementation to be "risky and inviting failure". He used the remainder of the paper to lay out a more iterative approach, which he recommended to his readers. If only they had read that far...

So the straw man that Royce set up just to knock it down has become the foundation for the waterfall model as we know it, because (some? most?) people didn't bother reading far enough.

Same in agile

The funny thing I see nowadays is that the same starts to happen with the agile manifesto.

In a lot of conversations the agile manifesto seems to have been reduced to the underlying values, which are handily presented on the first page of the manifesto. It is funny how often the room falls silent when I start to ask about the second page of the manifesto – the one with the principles...

Seems like a lot of people don't look further than the first page.

How to get the values across

To me, the fact that there is (way) more to agile than only the four value statements has always been a relief – after all, up until now, nobody has found a way to install values directly into someone's brain. At least not to my knowledge.

From what I have understood from the behavioral psychologists with whom I have talked about the matter, the accepted way to transport values is to let the target audience experience the values through practices.

Children learn about values from the way other people act – not from what others say is right. (Claiming "chocolate is bad for you" while munching away on a mousse au chocolat usually doesn't work too well with children.)

And – as Uncle Bob pointed out – we also infer the values a culture holds high from the behavior we can observe in that culture.

A culture in this case can be as local as a single software development team.

Thus, when everybody on the team claims “We believe in high quality software” but they cut corners every time they have to deliver, one might infer that they don't really see value in high quality software. (Which would be a pity, since – in my opinion – Quick and Dirty is very non-Agile!)

Or, when the whole team claims to love tests but no tests ever get written, one might infer that testing is in fact not really in their value set.

The opposite is not quite as simple – if we observe a team that consistently writes tests we would probably infer that they hold testing in high regard, while in fact they might just be scared of their QA department.

Nonetheless, as long as there is no way to ‘inject’ values directly, just following the practices for a while still seems to be a very good way to get at least closer to the values.

While I have seen many a project fail where every member could quote all the values from the Agile Manifesto I have not yet seen a project that adhered to all the principles and still failed.

Although I have to admit that it is a lot harder to actually follow the concrete principles than to quote the values.

Try giving the second page of the Agile Manifesto a chance – it might be worth it!

‘til next time
  
Michael Mahlberg