Sunday, December 27, 2015

Keep your product in ship shape

Originally, “ship shape” meant adhering to the standards of tidiness and alertness necessary on seagoing vessels in the old days, where things tended to get turbulent from time to time.

Nowadays, with paradigms like “ship it” or the famous “potentially shippable product increment,” another meaning emerges – at least in my perception.

Don't strip the headlights to get the tail lights working

When I was much younger than today I used to repair my cars myself. Partially out of interest, partially out of distrust towards the garages, and partially because of financial restrictions. And of course because in those days it was still possible to work on one's own car.

One of the things we used to do when some part didn't work was to exchange it with something that did. (duh!)

For example, one time when the taillights were not working, a friend and I tried a variety of things – after exchanging the bulbs and some tweaking of the wires we finally concentrated on the fuse, which looked fine. But to make sure, we exchanged it with a fuse we knew to be in working order – the one that was wired into the headlight circuit.

And sure enough the taillights lit up. So we changed everything else back to the original state – bulbs and wires and all. Or so we thought.

The one thing we did not change back was the fuse. Silly – we simply forgot.

Happy to have working taillights again (the bug was fixed) I drove home. Almost. At least as far as the next police station, where a friendly officer motioned me to stop and explained that I could not drive with a car with only one headlight.

Today I see this happening with software all the time – people fixing just the problem at hand, without realizing what havoc they bring upon the rest of the system.

So please – whenever you “fix” a piece of software: Make sure that you can still ship the whole system.

My car was definitely not shippable at that time – make your software craftsmanship better than the car-mechanic handiwork of an untrained teenage boy and keep your product in ship shape: in a state where it can be shipped to the real customer at any time.

till next time
  Michael Mahlberg

Sunday, December 13, 2015

The next big thing...

... probably won't save your project.

If you have seen the Gartner hype cycle model you are probably familiar with the terms “Peak of Inflated Expectations” and “Trough of Disillusionment.”

[Figure: Gartner Hype Cycle Model]

While the hype cycle model itself is heavily debated and criticized, I for one see that a lot of “new things” follow this kind of adoption curve.

But why is that? And why is it so common around things as diverse as programming languages (e.g. Java), infrastructure (e.g. build servers), methods (e.g. eXtreme Programming) and techniques (e.g. test-driven development, TDD), to name a few?

What works for Early Adopters doesn't have to work for everybody

One reason I have seen for this is the simple fact that tools in the hands of experts can become menaces in the hands of laymen. This is not only true for things like the beautiful Katana (a Japanese sword) with which a friend of mine inadvertently cut an artery in his arm. No worries: he was treated quickly and has no lasting injuries. And the tatami (floor mats) have been cleaned by now.
But back to the question at hand – other tools are also often quite dangerous in the hands of the “non-expert.” While it is hard to injure yourself on the sharp edge of the Java programming language, it is still quite possible to use it in a harmful way – for example by over-using mechanisms like reflection, a powerful feature that makes it possible to change a program from within while it is running, or to change the accessibility of parts of the software.
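To make the danger concrete, here is a minimal sketch using Python's built-in reflection (`getattr`/`setattr`); Java's `java.lang.reflect` permits the same kind of bypass of private members via `setAccessible(true)`. The `Account` class and its rule are invented for illustration.

```python
# A sketch of why reflection is risky in untrained hands: reflection
# walks right past the checks the public interface enforces.

class Account:
    def __init__(self, balance):
        self._balance = balance  # "private" by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


acct = Account(100)

# The front door enforces the business rule...
try:
    acct.deposit(-50)
except ValueError:
    pass  # rejected, as intended

# ...but reflection silently corrupts the state anyway:
setattr(acct, "_balance", getattr(acct, "_balance") - 50)
print(acct._balance)  # 50 – the invariant "deposits are positive" is gone
```

In expert hands the same mechanism powers frameworks and test tools; in untrained hands it quietly breaks the invariants the rest of the system relies on.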

In my experience this is what happens when a tool reaches the “Trough of Disillusionment” – badly trained people start cutting themselves too often, and the effectiveness of the tool for the masses becomes a subject of discussion.

In Iaido (an art of moving the Japanese sword) there are several ways to avoid self-injury of the novice swordsman. They either use a Bokken – a wooden sword look-alike – or a Katana-like thing with a blunt edge, the so-called Iaito.

This, to me, seems to be what happens when these tools reach the “Plateau of Productivity” – they get blunted down to make it harder to injure oneself.

And of course it works. In a way. With their blunted down instruments, it is much easier to let more people with less training do jobs similar to the ones that the early adopters did. Those early adopters whose efficiency with these tools made the tools so compelling.

But there is a drawback – with a Katana you can easily slice through a rolled tatami mat (for example). You just cannot do that with an Iaito.

So please – if you use the blunted down version of a tool – don't expect it to still work like the real thing.

Adjust your expectations and always take into account who is wielding the tool.

till next time
  Michael Mahlberg

Sunday, November 29, 2015

In Lean "Value to the Customer" actually trumps ... "Eliminate Waste"


A lot of people starting with lean think that the topmost goal is to eliminate waste.
Maybe it is just because the Poppendiecks started their first book on Lean Software Development with the principle "eliminate waste", or maybe it is because "Eliminate Waste" makes such an impressive battle cry.
The so-called lean decision filter (described nicely by David J Anderson in an article on providing value with lean) makes it rather clear that in most cases waste elimination is just a ‘minor’ priority, easily trumped by value to the customer.

The whole chain of trumps in that decision filter is listed as

Value to the customer
Waste elimination

This list makes it way easier to decide what to work on next, as Yuval Yeret explained in an article at the leankit blog.

So, next time you ask yourself what to do next you might consider applying this filter instead of going by simple "always do x" rules.

till next time
  Michael Mahlberg

Sunday, November 15, 2015

Breaking down the task-board

Have you ever experienced one of those dull task breakdown sessions where you started with a couple of concepts (let's say street, phone, and country) and ended up with a list of three or four tasks that somehow recurred several times?

In this case perhaps:

  • modify database schema to accommodate street
  • modify ui to display street
  • modify validators to validate street
  • modify database schema to accommodate phone
  • modify ui to display phone
  • modify validators to validate phone
  • modify database schema to accommodate country
  • modify ui to display country
  • modify validators to validate country

In these cases it might help to think differently about your work – perhaps those repeating categories are in fact stations that each of the concepts to be implemented has to go through.

Some people would model this as three different stations

  • Database
  • UI
  • Validation

If you put them on the board as separate columns, each item could be reduced to its conceptual core and just ‘flow’ through the system.

Database | UI | Validation | Done

So, perhaps it’s time to rethink task boards, put in some of the ideas from lean and kanban and add a couple of columns.

(of course not with the exact names from above ;-)

And no, I don't recommend building db-, ui-, and logic-silos. But if you have them, be honest and acknowledge the fact. Change it afterwards. “Start with what you do now,” as they say in Kanban.
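The station-based board described above can be sketched as a tiny simulation – the station and concept names are purely illustrative, taken from the example:

```python
# A minimal sketch of concepts 'flowing' through board stations instead
# of expanding into a cross-product of repetitive tasks.

STATIONS = ["Database", "UI", "Validation", "Done"]

def advance(board, item):
    """Move one item from its current station to the next one."""
    station = next(s for s, items in board.items() if item in items)
    board[station].remove(item)
    board[STATIONS[STATIONS.index(station) + 1]].append(item)

board = {s: [] for s in STATIONS}
board["Database"] = ["street", "phone", "country"]

advance(board, "street")   # street moves Database -> UI
advance(board, "street")   # street moves UI -> Validation
advance(board, "street")   # street moves Validation -> Done

print(board)
# {'Database': ['phone', 'country'], 'UI': [], 'Validation': [], 'Done': ['street']}
```

The nine repetitive tasks from the original breakdown collapse into three items and a handful of column moves – which is exactly what the extra columns buy you.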

till next time
  Michael Mahlberg

Sunday, November 01, 2015

How to plan for iteration 35 in iteration 1

"But how can we make sure that we can incorporate all those things that we will only discover well into the future if we don't design the system with an architecture that takes all those things into account?"


You can't. But neither can you (or anybody else for that matter) look into the future with certainty.

We all learn only from things in the past. Even if we think about the future and ‘learn’ from those thoughts, the thinking itself is already in the past by the time we learn from it.

We just can not predict the future.

But we can learn from the past.

We can learn to build systems in such a way, that they are comparatively easy to adapt to new situations and requirements.

That’s what patterns are for

Software Design Patterns are proven approaches to common problems that have worked in the past. That is how the GoF-Book, which made the whole pattern movement available to a broader audience, states the idea of patterns. You've got a problem you want to solve? Perhaps there already is a proven approach. Look it up. Chances are that the approach encompasses sound design principles and makes later changes attainable.

That's what refactoring is for

Software is more easily changed and extended if it is well structured. Refactoring, as it was originally defined, ensures that the system stays well structured and easily modifiable. Use it. Follow ‘red – green – refactor’ if you have the chance, or at least ‘refactor mercilessly.’

That's what SOLID is for

The SOLID design principles (SRP, OCP, LSP, ISP, DIP) provide guidance on how to build actually change-friendly and robust software.
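As a small taste of how these principles keep software change-friendly, here is a sketch of the ‘D’ – the Dependency Inversion Principle. The notifier classes are invented for illustration:

```python
# Dependency Inversion sketch: high-level code depends on an abstraction,
# not on a concrete implementation, so new variants can be added later
# without touching the code that uses them.
from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):  # added in a later iteration - notify_all is untouched
    def send(self, message: str) -> str:
        return f"sms: {message}"

def notify_all(notifiers: list[Notifier], message: str) -> list[str]:
    # Depends only on the Notifier abstraction.
    return [n.send(message) for n in notifiers]

print(notify_all([EmailNotifier(), SmsNotifier()], "build is green"))
# ['email: build is green', 'sms: build is green']
```

The point for iteration 35: when a new requirement arrives, it becomes a new subclass rather than a modification of working code.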

That's what Clean Code is for

Robert C. Martin also inspired the whole clean code movement which is not only about writing good code, but also provides insights on how to keep your whole development ecosystem in such a way that change stays easy.


You don't need to think about the details of iteration 35 in iteration 1 – if you ‘just’ invest a small portion of the effort that up-front planning for iteration 35 would have taken into refactoring and restructuring in each iteration, and follow sound software design principles, you should be fine. After all, iteration 35 may turn out to be something completely different from what you now think it will be.

till next time
  Michael Mahlberg

Sunday, October 18, 2015

Add some PDCA-Spice to your “Inspect and Adapt”

My beef with “inspect and adapt”

According to legend, agile software development was almost called “adaptive” software development. And for a big part that is what agile approaches are about.
It is all about adapting the process, the tools, the interactions to the situation at hand. And most approaches out there have very well defined ways to handle such changes.
But – and it is a huge but – nowadays some people justify everything they would like to change by just invoking “inspect and adapt.” But in some cases (and those are the cases I'm so agitated about) it is only adapt. Or not. Actually sometimes it is only change. But without inspecting first, there is nothing to adapt towards!

What adaptation is, ...

... is, for example, put nicely by Wikipedia:

Adaptation, in biology, is the process whereby a population becomes better suited to its habitat. (emphasis added)

It is not some random change. It is based upon circumstances – actual facts and requirements. And intentionally adapting to something is just not possible without knowing those circumstances. In other words: it is necessary to inspect the circumstances.

If you gotta do it, do it right

So yes, you should follow “inspect and adapt” – just don’t call it a method or try to imply that every time you change some aspect of your process you’re following the ideas of “inspect and adapt.” If you want to change the ways you work for the better, try employing some formal approaches.

Take some advice from the Lean and Kanban community and follow the so-called scientific method, which – according to the University of California, Riverside – consists of the following steps:

  1. Observe some aspect of the universe.
  2. Invent a tentative description, called a hypothesis, that is consistent with what you have observed.
  3. Use the hypothesis to make predictions.
  4. Test those predictions by experiments or further observations and modify the hypothesis in the light of your results.
  5. Repeat steps 3 and 4 until there are no discrepancies between theory and experiment and/or observation.

One of the approaches that has the thinking behind this ingrained in its DNA is the Deming cycle, often called the PDCA cycle.
The four steps Plan, Do, Check and Act actually implement the scientific method for the context of process improvement.

Some things about the PDCA-Cycle come up in conjunction with Toyota’s implementation that seem noteworthy:

  • Plan
    build a model of the expected changes to the process – a model also specifying the expected outcomes of the changes
  • Do
    try the proposed changes on a small scale, if necessary with extra effort (that can be accounted for in the check step)
  • Check
    Verify that the process changes deliver the expected outcomes
  • Act
    Roll out the changes to a broader audience – including the automation of labor-intensive tasks – or roll back the changes and go back to square one (i.e. Plan)

(The rinse-repeat of the whole cycle is somewhat implied by the name cycle, and thus the fifth step – start again with the next idea – is not made explicit)
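The four steps above can be sketched schematically as one pass through the cycle – the function names and the ‘experiment’ (a hoped-for drop in cycle time, measured on one pilot team) are purely illustrative:

```python
# A schematic sketch of one pass through the PDCA cycle.

def pdca(plan, do, check, act_rollout, act_rollback):
    expected = plan()              # Plan: model the change and its expected outcome
    observed = do()                # Do: try the change on a small scale
    if check(expected, observed):  # Check: did we get what the model predicted?
        return act_rollout()       # Act: roll out to a broader audience...
    return act_rollback()          # ...or roll back and go to square one

result = pdca(
    plan=lambda: {"cycle_time_days": 4},   # expectation: cycle time drops to 4 days
    do=lambda: {"cycle_time_days": 4},     # measurement from the pilot team
    check=lambda exp, obs: obs["cycle_time_days"] <= exp["cycle_time_days"],
    act_rollout=lambda: "rolled out",
    act_rollback=lambda: "rolled back",
)
print(result)  # rolled out
```

The crucial detail, and the difference to “just change things,” is that `plan` produces an explicit expectation *before* the experiment runs, so `check` has something to compare against.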

One more thing... in fact, PDCA is not the Deming cycle – PDSA is

At least according to a guy called W. Edwards Deming this cycle is called the PDSA cycle – and he really should know it...

Be that as it may – the question remains: will you just randomly change things and justify those changes by citing “inspect and adapt,” or will you inspect and adapt by following a scheme like PDCA and “install” or “back out” changes based upon feedback collected from an experiment that provided you with real data?

till next time
  Michael Mahlberg

Sunday, October 04, 2015

Scrum Master or Zen Master?

Another Scrum-Master Anti-Pattern

Well – of course there is not one Scrum-Master anti-pattern. There are dozens.

Let's look at a special one today: the Scrum-Master as the Ruler, the Project Manager, the Sovereign of the project.

To cut it short – Scrum-Masters are not.
Originally weren't meant to be at least.
Maybe sometimes they actually are – but they should not.

It is about the verb ‘master’

Here is what the dictionary on my computer has to say about the verb master:

2 it took ages to master the technique:
learn, become proficient in, know inside out, know (frontward and) backwards; pick up, grasp, understand; informal get the hang of.

If you look at it that way, a scrum master is to scrum what a zen master is to zen: someone who has mastered scrum and is now able to help others – at their request – to become more proficient in what they do.

Of course the Scrum Master has a lot of other duties as well, but the fundamental idea of ‘mastering the process framework’ in contrast to being the ‘Master of the team’ makes a really big difference in attitude.

The Scrum Guide explicitly states that ‘The Scrum Master is a servant-leader for the Scrum Team’; perhaps this is worth taking into consideration from time to time.

till next time
  Michael Mahlberg

Sunday, September 20, 2015

Effective daily scrums – and how to achieve them

In this case my advice is contrary to what I usually say. This time my advice is not to go by the book!

Remember the purpose of the daily scrum

The purpose of the daily scrum is to plan ahead for the next 24 hours. It is not about reporting progress. Of course it is an important part of the planning to know the current situation, but the focus should be on the things you are about to do.

“What did I do in the last 24 hours?”, “What will I do in the next 24 hours?”, “What hinders me in achieving the sprint goal?” – those (paraphrased) questions from the scrum guide are there to facilitate a purpose, as my esteemed friend and colleague Tom Breur points out. The purpose being to constantly navigate the best way to achieve the business goal for the day.

It is like action planning for firemen

The daily scrum is more like firemen planning how to tackle a burning house they are about to enter.

They don't talk about how they clung to their seats when the fire truck swerved around the corners – it is all about the future.
“I'll go left with the hose”, “I'll break the door open so that someone can use the powder extinguisher”, “As soon as you've broken open the door, I'll use the powder extinguisher to quench the flames in the hallway”, etc.

Nobody wants to know how they quenched the last fire – it's the future that counts here. (I really love this example, courtesy of another esteemed friend and colleague, David.)

Turn it around

The least you can do to make the daily scrum more efficient and effective is to turn the questions around. Start with the future, mention any impediments you actually have, and then talk a little bit about what you did – if there is anything left that the others don't know from the last daily meeting or the events in between.

It is not for 24 hours!

And what's more: You don't have to talk about your plans for the next 24 hours – all you have to do is to talk about your working time, and in most places this would amount to roughly 8 hours.

Talk about the future

If you have to tell your colleagues what you did in the last 24... sorry, 8 working hours, you probably didn't tell them what you planned to do in the last daily meeting. Or you had to diverge from your plans. If you had to diverge, I very much hope you didn't wait until the deadline – after all, the daily meeting is a deadline for the things you planned at the last daily meeting. You told your colleagues what you had planned to do between then and the following daily scrum. Usually it is a good idea to let everybody on the team know what has changed as soon as you find out – it just might interfere with the work of your colleagues as well. After all, you would like to know if a co-worker changes their mind when your work depends on their result. Even – or especially – if they have a good reason for changing their plans.

So if you talk about the last 24 – sorry, 8 – working hours, you'll only end up telling people what they already know, either from the last daily scrum or from observations and information during the last 8 working hours.

Instead, think of the firemen planning to extinguish the fire in a burning house – or the A-Team planning a mission.

till next time
  Michael Mahlberg

Sunday, September 06, 2015

Iterations! “Done” does not mean “never to be changed again!”

One of the hardest things to understand in agile software development seems to be the concept of “iterative” development.

Especially in conjunction with the concepts of done-done and definition of done.

In my experience people – developers and customers alike – tend to be confused by the opposing forces of “embracing change” and getting the current iteration of the product to a “done” state in “production quality.”

Although there are several very good explanations on the web, like the one from Alistair Cockburn, it seems very hard to express the basic concepts concisely.

The default example nowadays seems to circle around the different ways to paint the Mona Lisa, so let's start with that one:

  • Incremental:
    • (could be similar to waterfall with milestones)
      • partition the canvas into an arbitrary number of rectangles
      • Start painting at the top left rectangle
    • This approach has the added benefit that you could dispatch a subteam to paint the bottom right rectangle and thus ‘scale’ the project. [1]
  • Iterative:
    • (closer to the standard approach in agile projects)
      • Do a rough sketch of the whole picture in pencil
      • Get customer feedback on the sketch
      • Erase some lines and areas
      • Get customer feedback on the sketch
      • Do broad fills with colors
      • Get customer feedback on the intermediate
    • In this approach you might even have one iteration where the Mona Lisa was blonde

While this model does a fine job in explaining the difference between iterative and incremental, it does not really address the topics of ‘done’ or shipping.

Partially that is due to the fundamental difference between software and pictures – it is easy to keep working on software after shipping, because we ship something that is mechanically created from the stuff we write. (No matter whether it is compiled and linked, compiled and put in a jar, packaged, or anything else – we still don't have to let go of the stuff we created in order to ship to customers.) There is a huge difference between paintings and software, so for me this is where the analogy stops.

In painting we get value from the customer feedback. In software products we oftentimes can get much more value from real-world use of the product we’re building. Or to be exact: from using the current state of the product that we are building with real customers.

Now to be able to do this, we have to do both – we have to build an increment of our product that is fully functional. And we have to iterate over parts that we already had ‘done’ the last time around.

Let’s look at a simple example – a sign up dialog.

We might want to offer some high-tech login integration via facebook, google and twitter later on, but since we’re just a tiny startup and haven’t yet gotten around to deciphering all the fine details of the OAuth2.0 protocol, for the time being we only want to enable registration via e-mail for the first launch.

So our first iteration of the login dialog only includes e-mail. But what about the design? Since we already know that we also want Google and others integrated, we could already do a really nice UI design for all the options, couldn't we?
Well – we could. But we shouldn't. We would have to deal with a whole bunch of decisions about things that we have only vague ideas about, and we would have to maintain those decisions for quite a while. One agile way of addressing this in an iterative, incremental manner would be to build an increment that is of production quality and to make sure that it stays easy to re-iterate over this part later, by employing techniques from good software craftsmanship.

So the first increment would contain a finished part of the product (with all the colors, to stick with the Mona Lisa metaphor) to make it really usable, but we would be willing and take precautions to change it later on (paint it over) to iteratively add new functionality.
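The sign-up example can be sketched in code – all class and provider names here are invented for illustration. The first increment ships only e-mail registration, but leaves a seam (a provider registry) through which a later iteration can plug in OAuth providers without repainting the whole dialog:

```python
# Sketch: ship one production-quality provider now, keep the seam open
# for later iterations.

class EmailRegistration:
    name = "email"
    def register(self, address: str) -> str:
        return f"registered {address} via e-mail"

class SignupDialog:
    def __init__(self):
        # First increment: exactly one provider, fully working.
        # A later iteration appends e.g. a GoogleRegistration here.
        self._providers = {p.name: p for p in [EmailRegistration()]}

    def available_providers(self):
        return sorted(self._providers)  # the UI renders only what exists

    def register(self, provider: str, identity: str) -> str:
        return self._providers[provider].register(identity)

dialog = SignupDialog()
print(dialog.available_providers())                 # ['email']
print(dialog.register("email", "ada@example.com"))  # registered ada@example.com via e-mail
```

Note what is absent: no greyed-out Google or Twitter buttons, no half-decided OAuth design to maintain – just one finished feature plus a cheap place to add the next one.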

So long for now
  Michael Mahlberg

[1] Scaling and iterations: Actually this also shows that this kind of work-breakdown and scaling is manageable if the expected result is known beforehand. We can now easily(?) put nine illustrators on nine pieces of a gigantic billboard that is to feature the Mona Lisa, as long as each illustrator knows which part they are responsible for. But that would not have worked for creating (developing) the Mona Lisa in the first place.

Sunday, August 23, 2015

What's in the Potentially Shippable Product Increment (PSPI)?

There are many references to the frequent release of running software in the agile universe – of which the agile manifesto certainly gets cited the most.

One term has become very prominent – the “potentially shippable product increment” from the agile sub-universe of scrum. [In this case from LeSS – the Large Scale Scrum Framework. ‘Of course’ the scrum guide itself does not use this term; the scrum guide calls it “Increments of potentially shippable functionality.”]

But since the notion of the potentially shippable product increments captures so much of the practices of good agile software craftsmanship I really fell for the term and use it more often than the official term from the scrum guide.

And it is a very handy way of illustrating what is behind the ideas of integrated, shippable, iteratively and product(ion quality).

Integrated

“All [created artifacts] have to be integrated” – the stuff really has to work together (as I outlined in more detail in what exactly is continuous integration).

Shippable

Something you could give the customer to play with on their own. An installer file, an executable, a .ear file with a deployment descriptor, a docker image, etc. Something they could really “take away” with them.

Iteratively

We know we are going to change things – let’s build them in such a way that it is (and stays) easy to do that. Even though we deliver increments of the product, we do so iteratively – we do not try to paint a picture like the Mona Lisa from the top left to the bottom right. We do it with sketches and refinements and edits until we are finally satisfied with the result.

Production quality

Production quality is perhaps the hardest one to explain without painting lots of pictures. One of the best explanations I've come across is that it "also has to work when the customer can use it without the developer being around" – which implies many little things.

Those things could include:

  • a user friendly way to interact with test-data if there is something like that necessary for the relevant increment
  • no buttons that don’t do anything
  • no spinning wheels that are not connected to any of the inner workings
  • etc.

Of course this is a bad definition, because it (almost) only defines a negative-list. It only says what production quality is not. I’m still working on a positive-list, and hope to release that soon.
One thing for that positive-list that springs to mind immediately is the requirement that the code adheres to the SOLID principles – so that it only has to be changed if a future requirement has an impact on the implemented capabilities – not because something has been left over “to be fixed when we work on this in the next iteration.”

How do you define your “Increments of potentially shippable functionality?”

till next time
  Michael Mahlberg

Sunday, August 09, 2015

What exactly is continuous integration?

A while ago I wrote about the fact that a build server can't give you continuous integration; now let's look at what we might want to accomplish with continuous integration in an agile project.

Three stories and a potentially shippable product increment

Let us look at an example from a scrum-inspired project. Suppose we have three backlog items (of course not necessarily stories in the Cohn format...):

  • our product can do ‘X’
  • our product can be installed on Windows
  • our product can be installed on Android

The ‘old’ way

In an old-fashioned project using ‘late integration’, those requirements could easily be met by having three separate results:

  • a web-application where the ‘x’ can be demonstrated
  • an install-package for Windows, where some sample app, based on roughly the same technology stack as the ‘product to be’, is installed
  • an install-package for Android, where some sample app, based on roughly the same technology stack as the ‘product to be’, is installed

These different artifacts could even be put in the same source-code repository and built by some type of build server, but they would not have to interact with each other, because that could be part of a later integration phase.

The problem in this scenario is that there is no mandatory requirement for a functional integration between these tasks.

So you could end up with an installer on Windows that just displays some “this would be your application” boilerplate text, an install package for Android that contains a different launch mechanism than the Windows version, and a web application that can do ‘x’ but is included in neither.

Even if the automated tests related to each backlog item would be ‘green’ on the build server, the customer would still end up with three separate “features” that might not generate any value for them.

The agile way

Building this with a continuous integration mindset would be different. Following the old ideas of continuous integration in conjunction with the mindset of a potentially shippable product increment yields quite different results. Especially since such an increment contains all the existing functionality as usable and working software.

In this scenario we would focus on integrating the functionality and make sure that every story we complete interacts with all the other stories.

And all of a sudden the customer would have real value – whether the artifacts are built via a build server or not. As long as the team creates something that really could be shipped every time it integrates a task – in the functional sense – and makes it easily accessible to the customer, the customer can just take those artifacts and interact with all of the currently completed functionality.
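The difference can be made concrete with a cross-feature check – all names here are invented. In the ‘late integration’ scenario each backlog item passes its own tests in isolation; functional integration means demanding that every platform build exposes every completed feature:

```python
# Sketch of a functional integration check: it only passes when the
# backlog items actually work together, not merely side by side.

class Product:
    """One integrated product: every platform build exposes feature 'x'."""
    features = {"x"}
    platforms = {"windows", "android"}

    def can_do(self, feature: str, platform: str) -> bool:
        # In the integrated product the feature set does not depend on
        # the platform - the same functionality ships everywhere.
        return feature in self.features and platform in self.platforms

product = Product()

# Three separate 'green' artifacts would not satisfy this; only a
# genuinely integrated increment does:
assert all(product.can_do("x", p) for p in product.platforms)
print("potentially shippable")
```

A build server can run such a check, but it cannot supply it – someone has to decide that ‘done’ means the features interact.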

No CI server required, btw.

till next time
  Michael Mahlberg

Sunday, July 26, 2015

Sprints – Give me a break

An iteration in Scrum is called a sprint ...
Scrum is at least six years older than the agile manifesto ... The agile manifesto's principle 8 states that “[everybody] should be able to maintain a constant pace indefinitely.” ...

How does this compute?

Sprints are related to Scrum

First of all, let's re-iterate a well-known fact that often gets overlooked anyway: Scrum is not agile. WHAT? Of course Scrum is an agile method! But that relationship has a direction. Speaking in terms of set theory, what I am getting at is the fact that Scrum is just a subset of “Agile.” Looking at the history of agile, Scrum is just one of the contributing methodologies that were instrumental in defining “Agile” (via the Agile Manifesto). And in Scrum – if you have read the old book by Schwaber and Beedle – the idea of the sprint was “to be able to work uninterrupted towards an agreed-upon goal” (paraphrased). With these boundary conditions it makes sense to talk about a sprint, where you look neither left nor right but focus only on the problem at hand – as long as you are sure that you are able to deliver a “potentially shippable product increment” at the end of the sprint (and not some loosely related artifacts that might be welded together in some future sprint).
But the metaphor only goes so far – the way I understood it from Ken back in 2005, and the way I try to live it these days, sprinting here mostly refers to the focus on the sprint goal, not to the total exhaustion one would deem acceptable for winning the gold medal in a 100m sprint at the Olympic Games.

Sustainable pace is related to XP

The principle of “sustainable pace” comes – almost verbatim – from eXtreme Programming's definition of the XP “rule” of set(ting) a sustainable pace. Here the focus actually was on not exhausting one's resources. And no, a resource in this context is not a human being, but a person's mental and physical energy reserves and such – just in the same way a marathon runner uses the phrase “at 35k I just had no resources left.”

All models are wrong – but some are useful!

“Sprint” is a token, a word that identifies an artifact in the Scrum framework. It happens to be a metaphor as well, and metaphors only translate so far into other domains. As pointed out by the aforementioned quote, some models are useful. And to me the sprint model is useful with regard to the focus on the next goal, not with regard to the total exhaustion of one's resources.

Your mileage may vary of course, but I think that decomposing the different aspects of the metaphor helps a lot in deciding how to live those aspects in your concrete situation.

So long,
  Michael Mahlberg

Sunday, July 12, 2015

Scrum-But or Inspect and Adapt?

For a couple of years now I have heard people condemning projects as “Scrum-But”, mostly without even looking at the projects. And sometimes even without any reference to what they consider to be a scrum-but.

“but” what?

As you probably can imagine after my last two posts, I am especially suspicious if people do that without looking into the scrum guide. I keep encountering people proclaiming something is a scrum-but because of all kinds of misconceptions. Sometimes “they don't have 2-week iterations” (the scrum guide calls for “a time-box of one month or less”), or “they don't use unit tests” (the scrum guide states that “Each Increment is additive to all prior Increments and thoroughly tested, ensuring that all Increments work together.”)

While it might be preferable to have shorter timeboxes (as the agile manifesto clearly states: “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” [emphasis added] ) and it is much easier to have a thoroughly tested product if one does have automated tests including unit tests, this is more about common sense (or agile experience) – but definitely not part of scrum.

Apart from the fact that more often than not, the same people carry the battle cry of the agile credo inspect and adapt as a default answer to almost everything, there is a certain irony in the fact that nowadays we seem to have a kind of method police around scrum, while the first value pair of the preamble to the agile manifesto reads “Individuals and interactions over processes and tools.”

If you do it, do it right

Oh, and don’t get me wrong. If you want to call it scrum: go by the book. But by the right one!
And remember that “Scrum is a framework for developing and sustaining complex products.” And while on a programming level the “Instantiation of such a [software] framework consists of composing and subclassing the existing classes.” the fact of the matter is that the same is true for a process framework – you have to define the specifics for your environment to be able to use it!

So – how do you shape your process?

till next time
  Michael Mahlberg

Sunday, June 28, 2015

The official definition of planning poker in Scrum...

... doesn’t exist!

Well okay – if you read my last piece this was no surprise after the title, I guess...

There are definitions of planning poker like the ones on wikipedia or the popular description from Mountain Goat – but you won‘t find planning poker in the original Scrum resources. Neither in the book, nor in the paper, nor in the Scrum guide.

You also might want to reconsider if you really need planning poker for your estimation. There is a lot of thought about not needing estimation at all (not freeing one from the need for analysis, though!) and there are other options like magic estimation (sometimes also called affinity estimation).

till next time
  Michael Mahlberg

Sunday, June 14, 2015

What’s the official definition for story-types in Scrum?

the quick answer is:

There Is No Such Thing!

Have a look at the official scrum guide.
Have a look at the original (IMHO) Scrum book by Schwaber and Beedle.
Have a look at the OOPSLA'95 article that introduced a broader audience to Scrum.

None of them mention stories!

Stories are mentioned in the definition of XP – on the original XP-Wiki as well as in the 1st edition of the defining book.

There is a lot more to know about stories of course – Mike Cohn wrote a book on writing user stories and defined one of the best known formats for writing stories (although there are many other opinions and additions) and Jeff Patton popularized the idea of story-mapping in a very approachable way.

But those are not things that Scrum defines, requires or in any other way endorses. Those are all additions by the community that might fit your case. Or not. Give it a try. Evaluate cautiously. Pick what works for you.

till next time
  Michael Mahlberg

Sunday, May 31, 2015

Whose job? On self-organized teams and responsibility

Basically this boils down to the statement that in my opinion there is a big difference between everybody being responsible for everything and everybody being responsible for the whole thing – and only the latter works.

The old story of four people

I once read a story about some people called Somebody, Anybody, Everybody and Nobody.

They were supposed to complete some job that had to be completed somehow – let's not worry about details here.

It was a job Anybody could have done – but Everybody thought Somebody would do it, while in the end Nobody did.

While this is a cute little play on words, it – unfortunately – is also a phenomenon observable in people's behavior. In his book “The Tipping Point” Malcolm Gladwell quotes an experiment where a person in distress called out for help in two different environments: a densely populated area and a sparsely populated area.
The alarming – but understandable – result was that in the densely populated area everybody thought that somebody else would do something, and so nothing happened for quite a while. In the sparsely populated area the reaction was quite different. This phenomenon was dubbed the “bystander effect” a long time ago.

Who is responsible in a team?

Recently this question has been brought up in the context of Scrum teams, but it really applies to all kinds of teams.

You are not responsible for everything

The notion that everybody on the team should be able to do everything, and that whenever a problem arises every team member should be able to fix it, is not only unrealistic – it is counterproductive to the overall performance of the team. A team – as described for example in an article by R. K. Grigsby – is a group of people with complementary skills [who are mutually accountable and share a common goal].
Now, can you imagine a soccer team with all players being equally well suited for all positions? How high is the probability that such a team would have the world's best goalie? Or the world's best offense? So clearly there have to be some areas of specialization – while still maintaining some skills in all the other areas – if you want to have the best team possible. But when the skills are not evenly distributed, neither can the responsibility be.

You are responsible for the whole thing

But every team-member should be responsible for the whole. That is quite a difference. While I might not be able to perform a database change that is necessary for my new code I can still try a number of options to make sure that the system stays in good shape. I might track down a team member with the prerequisite knowledge. I might hold back my changes until I find some database genius on the team to pair with. I might find another design. If all else fails I might try to stop the line.
But I don't just do my part and move on and rely on the team to fix it because "the team is responsible to fix whatever goes wrong."

Let's not confuse self-organization with anarchy – self-organized on a team level means that the organization comes from within the team. Not from the outside. But this does not mean that everybody just does what they feel like. If the agreement is that Scott has the final say on database decisions, and whenever there is a decision about cryptography either Alice or Bob have to agree with it, then that may be the choice of the team, but it still is an organization that applies. And if anybody strays from those agreements – without renegotiating them – this betrays the mutual accountability within the team.

Therefore, in teams everybody is responsible. Yes. But for the whole, according to their specific capabilities. And it is a question of team organization how the team members act on this shared responsibility.

And remember "responsibility can never be assigned, it can only be assumed".

till next time
  Michael Mahlberg

Sunday, May 17, 2015

Boards: Paper vs. Digital – when, how and why

There are all kinds of boards around – Scrum Boards, Task Boards, Personal Kanban Boards, Story Boards, Portfolio Boards, Kanban Boards etc. – and all of them can come in different flavors. Two of the most distinctive flavors are “physical“ and “digital” and the difference is huge!

Even though the names differ, all the boards are in some way descendants of the kanban boards used in manufacturing, notably at Toyota.

(Of course there is a huge difference between kanban systems in manufacturing and virtual Kanban systems, therefore in Kanban the kanban is not the kanban, but that is another story)

Why use a board at all?

The first rule of Kanban is not ”You do not talk about it”.
Instead it is “Make it visible” or, to quote the correct words “Visualize the workflow” – and that is what a board is good at. In my early days with agile we called all kinds of boards “information radiators.” That term also referred to burndown charts, and burnup charts, and earned value visualizations, and defect maps, etc. – whatever was helpful, but never all of them at the same time.
The point is: information should be radiated. Made visible to everyone. And it should be the relevant information at the point where it is relevant.


Okay, so that is one part of it. But is there anything else, besides visibility?
Yes there is – such boards can also be a great way to communicate!
When they are located at places where everyone on the team notices when somebody else updates the board, a lot of information is conveyed implicitly. Not to mention that the boards can serve as a culmination point for standup meetings and as a tangible way to discuss the current state of flow.

Create bite size chunks

Even if there are no WIP-Limits on the board, as long as they are physical there also is a physical limit on what can be put on the board. Just realizing – for example – that you can't put any more cards in “quality assurance” without calling some brick-layers to remodel the rooms can persuade teams to shift their work-focus.

Model your process

In Kanban for knowledge workers (the one with the capital K) the board also is a physical representation of the actual process – including process policies like WIP-Limits or cadences and such.
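To make this idea of the board as a process model concrete, here is a minimal sketch in Python. The station names, card IDs and WIP limits are made up for illustration – they are not taken from any particular board – but the pull rule is exactly the policy a physical board enforces by running out of space:

```python
# A board modeled as stations (columns) with explicit process policies.
# Station names, cards and WIP limits below are purely illustrative.

board = {
    "analysis":    {"wip_limit": 3, "cards": ["A-17", "A-18"]},
    "development": {"wip_limit": 4, "cards": ["D-09", "D-10", "D-11", "D-12"]},
    "qa":          {"wip_limit": 2, "cards": ["Q-05"]},
}

def can_pull(board, station):
    """A card may only be pulled into a station that is below its WIP limit."""
    s = board[station]
    return len(s["cards"]) < s["wip_limit"]

print(can_pull(board, "qa"))           # True  - one slot left
print(can_pull(board, "development"))  # False - the station is full
```

On a physical board the `wip_limit` is simply the number of card slots drawn on the wall; the point of modeling it explicitly is that the policy becomes something the team can discuss and change.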

Electronic boards make change harder

No, of course not – at least as long as you're the administrator of that electronic board. And you don't have too many reports that rely on the boards layout. And nobody shares basic definitions like stations (a.k.a. columns or status-values) or classes of service. And you have a way to remove stations that are not empty. And you can easily inform everybody about the new process policies that go with the new board layout.
As long as all of these preconditions are met, electronic boards don’t make change harder. But as soon as you introduce central administration, create elaborate dependencies, share basic assets etc. change becomes a lot harder.
After all, agile software development was meant to be adaptable, and one of the most important parts is that there are retrospectives (or operations-reviews). But what good are those retrospectives if it is not possible to easily (!) adjust the process accordingly?
Where is the empowerment of the team if some tool-administrator has to edit status values, so that the so-called self-organizing team can get their new buffer column (or something like that) on the board?

Electronic boards reduce visibility

There used to be a saying "DOORS - where requirements go to die!". Lately DOORS in this quip has been replaced by the names of more modern tools, but still there is a bit of truth in that saying.
Requirements that are stacking up in a tool (usually) don't make you feel uncomfortable. Unlike walls, tools have almost no limitations, and the difference between, let's say, 350 and 850 un-reviewed requirements is quite easy to miss.

It’s about learning!

One of the great effects that can be experienced by going through the pain of really modeling the actual work we do is that we learn a lot about our processes. And adjusting the visualization over and over again is what reflects this learning. We might start with a five station (column) board and end up with a five station board. But if our board really reflects the learning it will probably have experienced times with a lot more columns in between.

(Try to) always start physical

And since on a physical board everyone can break the rules (hang a card sideways to indicate an improvised class of service, write a new rule, suggest a new cadence etc.) – immediately visible to everybody who comes along to interact with the board – physical boards facilitate the willingness to experiment. And there is a much smaller risk of breaking anything (which is just not the same with electronic boards), which also contributes to a more relaxed stance toward experimentation.

So – there are some good reasons to use an electronic board, and even if you do use a physical board you still have to find a way to process the data electronically. But the power of a physical board, especially due to its limitations, should never be underestimated.

Until next time
  Michael Mahlberg

Sunday, May 03, 2015

Why “the iron triangle” (of project management) isn’t

Over and over, people quote the iron triangle of project management - relating verbatim to the elements time, scope and cost from the wikipedia article or by the slightly less formal adage “Cheap, Fast, Good – pick any two”.

Surprise: It is not a triangle

Anyone who looks closely at the concept – or just reads the first paragraph on wikipedia – quickly realizes that the “triangle” has at least a fourth side: quality!
(But the term “devil’s quadrangle” has not yet found its way into wikipedia)

And as handy as the tool “iron triangle” may seem in arguments, it really should be used with caution. In my experience, arguing about the quality constraint is especially common. By relating to the “pick any two” adage people try to argue that “with the new time constraints we have to compromise on quality.“ And apart from the fact that “quick and dirty is very un-agile” this approach completely ignores the fact that compromising on quality usually does not get the job done more quickly but generates severe issues for the ensuing product.
The agile answer to this conundrum is to negotiate on scope instead of quality. That is what most sane people would do with tangible objects as well. I might go with a motorcycle (smaller scope) when developing a car isn’t feasible under existing time and budget constraints. But I definitely would not go with a car where “somewhere between 20 and 40 percent of the nuts and bolts are not tightened correctly” (less quality).

To me it seems much more rewarding to manage scope than to try to compromise on quality.

till next time
  Michael Mahlberg

Sunday, February 22, 2015

Triage may seem cruel, but it could save your product

Triage is a term from the battlefield – to be more specific, from mobile battlefield hospitals.

So triage should not have a place in the – comparatively peaceful – world of software development. Or should it?

Sometimes you can’t get them all

It’s all about the question of how to make the best use of your resources in times when the demand exceeds the resources.

The original idea behind triage was to categorize the wounded into three categories (as described more closely in the wikipedia article on triage)

  • Those who are likely to live, regardless of what care they receive
  • Those who are likely to die, regardless of what care they receive
  • Those who would die if they did not receive immediate care but are likely to live if they do get immediate care

and then concentrate your resources on the category where your effort really makes a difference. In the original case this means to start with the third category – cruel though it may seem for the other two categories.

Over the course of time the rationale and theory behind triage have evolved immensely, as can be seen in the article. Both the ethics and the practical application have been refined, but the basic idea is still the same. Don’t waste your energy on “lost causes” when that would lead to losing causes you have a chance of rescuing.

What’s that got to do with software development?

In our day to day work we’re often confronted with situations where the requirements exceed our capacity.

If we try to do everything, even if we dynamically reschedule according to priority, some things won’t get done. That is exactly what “requirements exceed capacity” means. And this is exactly where triage fits into the model. If only 4 of 11 requirements will “survive”, make sure not to waste any effort on those that will never see the light of day anyway. Put them in a separate “folder” and once you’ve got discretionary time on your hands revisit that folder and check which of the requirements would still provide value.

If we concentrate on the 4 most important requirements first – the amount we can expect to complete – we end up with only 36% (4/11) of the requirements fulfilled, but the features for those four requirements can be shipped.

But it is never a good idea to try to fulfill all 11 requirements with only enough resources for 4 – we would end up with 11 requirements fulfilled to 36%. And because a feature that is only 36% complete in most cases is not shippable we would end up with zero deliverable features.
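The arithmetic of the two strategies can be sketched in a few lines of Python (the numbers are the ones from the example above; the "all-or-nothing" shippability rule is the simplifying assumption the argument rests on):

```python
# Capacity covers the work of 4 out of 11 requirements.
requirements = 11
capacity = 4  # effort available, measured in "whole requirements"

# Triage: finish the most important requirements completely.
focused_shippable = capacity  # 4 features, each 100% done and shippable

# No triage: spread the effort evenly over all 11 requirements.
completion = capacity / requirements  # ~= 0.36 - each feature only 36% done
spread_shippable = requirements if completion >= 1 else 0  # nothing shippable

print(f"focused: {focused_shippable} shippable features")
print(f"spread:  {spread_shippable} shippable features "
      f"({completion:.0%} done each)")
```

The model is deliberately crude – real features are rarely strictly all-or-nothing – but it makes the asymmetry visible: partial completion across the board ships nothing.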

No, it does not mean you can skip tests or refactoring

So if we have to cut functionality can’t we go faster by skipping unit tests, documentation or refactoring? That would be just silly – just as if the doctors in the battlefield hospital would omit washing their hands, as Uncle Bob would probably remark. On the contrary. Applying triage on a requirement level should give you the time and space to work at your optimum on those requirements that can survive.

till next time
  Michael Mahlberg

Tuesday, February 10, 2015

Yes, you do need a “Dashboard” (a.k.a. Andon - Board)

And no, I don‘t like the terms “Project Dashboard”, “Lean Dashboard” or “Agile Dashboard”. I do like the concept of the andon board, though.

But what is a(n agile) dashboard, and why do you need it?

Speed is nothing without control

Most lean and agile approaches include some kind of feedback mechanism to enable informed decisions. In Scrum for example some information is fed back into the loop in the sprint planning as the capacity for the next sprint. In flow based approaches the feedback is often built into the work organization. When a kanban approach to process control is in place it can at least be found in the capacity of the stations (a.k.a. WIP-Limits per column).

This may be enough for a while, but it is not enough for the long run.

How often is an autopilot ‘on course’?

Actually, not much at all – if it were possible to set the course once and then ‘just let go’ you wouldn’t need an autopilot. The main reason to have an autopilot is to counteract the little deviations off the course caused by internal or external disturbances. So the autopilot constantly corrects toward the target, but the target is only reached for very short periods of time.

You’ve got to know that you’re off course to make adjustments

To be able to auto-correct your course, you have to know whether you’re on or off course. And that is what a dashboard can tell you. By making the actual ‘course’ visible as early as possible. On a dashboard. That is updated as soon as the information is available. And this is where an automated dashboard can come in handy.

What should go on the dashboard?

Whatever you’re aiming for!

  • You want to spend a certain amount of your capacity on a specific project? Then the actually spent time per project has to go on the dashboard.
  • You want to increase the test coverage? Then the current test coverage has to be displayed.
  • You want to spend at least x% of your time on improvements? Then the time spent on different types of work has to go on the dash.
  • etc.
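At its core such a dashboard just compares actuals against targets and highlights the deviations – the autopilot's "off course" signal. A minimal sketch in Python (the metric names and numbers are invented for illustration):

```python
# Hypothetical targets and current actuals for a team's dashboard.
targets = {"test_coverage": 0.80, "improvement_time": 0.10}
actuals = {"test_coverage": 0.72, "improvement_time": 0.12}

def off_course(targets, actuals):
    """Return the metrics that are currently below their target."""
    return {name: (actuals[name], goal)
            for name, goal in targets.items()
            if actuals[name] < goal}

for name, (actual, goal) in off_course(targets, actuals).items():
    print(f"{name}: {actual:.0%} (target {goal:.0%}) - adjust course!")
```

An automated dashboard does nothing more sophisticated than this – its value lies in feeding it fresh data and putting the result where everybody sees it.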

So, how do you navigate your project?

’till next time
  Michael Mahlberg

Monday, January 26, 2015

There are some situations where “agile” is the default mode...

Of course there are a lot of situations and places where agile approaches have become the default mode, and looking back at the history of software development with regard to iteratively-incremental approaches that is only a re-discovery anyway.

But in the big – a.k.a. Enterprise-Level – companies it’s mostly not the case. Or only in name but not in action.

Sometimes even “The Enterprise” goes into “agile”-mode

There is a situation when even Enterprise-Level software projects switch to an agile mindset. At least in most things but the name.

All of a sudden certain things start to happen:

  • Business people re-prioritize on a regular basis (in short intervals even)
  • Business people take the time to describe and verify the requirements
  • Rollouts are allowed with very little overhead (sometimes called ‘hot-fixes’ in this context)
  • Developers ask the people from whom the requirements actually came, what they want now
  • etc.


The situation I’m referring to is especially common for in-house-software that is developed for (and often in) large companies. Not all – only those projects that come into a crisis-mode phase at the end of the development phase.

I mean the time between the official end of the project and the time the software is actually in use. After the project has been declared finished, but before it is adopted for company wide use. When the last glitches are eliminated – which often takes up way more time than ‘planned’.
This crisis-mode is actually when everybody switches to things that work. And surprisingly these things are an interesting subset of what the agile manifesto mandates. (Of course I’m referring to the second page here)

Unfortunately some practices fall by the wayside in crisis mode. Things like “sustainable pace” and “regular reflections” sometimes don’t seem so important then, but even the “continuous attention to technical excellence” is often more prevalent in crisis-mode than in the day-to-day business during the run-time of the project.

So, here‘s an idea: If your enterprise shows the same crisis-mode behaviours, why not use this as a wedge to introduce more agile approaches?

’till next time
  Michael Mahlberg

Monday, January 12, 2015

Expensive Architecture Is No Architecture

Some musings on the value of architecture...
(This post is originally from 2007 but has never been published IIRC – and it's as current as if I had written it today)
  • Developers sometimes claim that “[this] architecture is too expensive”
  • Recently several architects claimed that "[the] architecture has to be enforced because otherwise no one would pay the extra cost"
  • It seems, that even some architects think that architecture leads to more cost than benefits
  • From my point of view this attitude completely misses the point
  • IMHO the "form follows function" mantra attributed to the Bauhaus is true for information systems as well
  • An architecture that is motivated by the "right" goals will pull more than its weight
  • Architectural principles always should be a means to an end – and that end should be made explicit. They are meant to address real problems in systems development.
  • The statement "It may be more expensive but it is what architecture demands" is a clear indication that either the scope of the observation is too narrow or – much more probable – the architecture is not an architecture but a bunch of rules far away from the real problems.
IMHO [as of 2015] the whole point of these musings is: If there is friction between development and architecture and the reason for this friction is that the architecture seems to be too expensive, then that is an excellent opportunity to either re-evaluate the architecture (with real numbers and amounts and explicit assumptions) or to communicate the reasoning behind the architectural guidelines better.

And of course this is no one way road from architects to developers – it also sometimes pays off handsomely for development to make the incurred cost of architectural guidelines visible (of course also using real numbers and amounts and explicit assumptions)

So, how is your architecture? Does it pull its weight?

’till next time
  Michael Mahlberg