Sunday, August 11, 2024

Do we need to get rid of the “Magic –or Iron– Triangle”?

Fast, good and cheap – choose any two. – internet proverb

Or, according to another source: "SPEED, QUALITY, PRICE – Pick any Two" – James M. Wallace / Paul Dickson, 1980

The Magic of the Triangle

Google search results for "Magic Triangle" on 2024-04-24 – two out of six show the project management triangle, two show the mathematical construct, and two show other concepts

As a quick search on Wikipedia will point out, a somewhat more appropriate name is "project management triangle." Still, given all the ambiguity of the term "magic triangle," many people equate it with the aforementioned unholy trinity of speed, quality and price – and a quick Google search showed that, at least in September 2024, they were not alone.

Unfortunately, the project management triangle is a model that holds true only under very specific constraints.

Looking at some sources – like, for example, the 2012 IEEE paper that analyses original sources back to 1987 – we find some interesting points. First of all, it is important to note that according to these sources the project management triangle consists of the edges Cost, Scope, and Time, while Quality is the area of the triangle.

If we took this literally, we could actually improve quality by increasing cost, scope and time (e.g. double time, scope and cost, and we quadruple the quality). Obviously it's not that simple.
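To make the geometric reading concrete, here is a tiny sketch (in Python, with made-up side lengths) that treats cost, scope and time as the sides of a triangle and "quality" as its area – doubling all three sides does indeed quadruple the area:

    # Toy check of the "quality as triangle area" reading, using Heron's formula.
    # The side lengths are arbitrary illustrative numbers, not real project data.
    import math

    def triangle_area(a, b, c):
        s = (a + b + c) / 2  # semi-perimeter
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    cost, scope, time = 3.0, 4.0, 5.0
    print(triangle_area(cost, scope, time))              # 6.0
    print(triangle_area(2 * cost, 2 * scope, 2 * time))  # 24.0 -> four times the "quality"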

Furthermore, unlike in geometry, in project management cost, scope, time, and quality are measured in different units and cannot easily be converted into each other.

Especially the simplicity of the relationship between cost and quality has been questioned quite publicly, by people like W. Edwards Deming with his concept of "Total Quality Management," or, even more poignantly, by Philip B. Crosby with the catchy book title "Quality Is Free" (1980).

Which quality is negotiable?

We see the tradeoff between price and quality all the time in everyday life.

  • Is it better to buy the cheap drill bit that will wear out after two holes or the expensive one that will last for hundreds of holes?
  • Is it better to buy the cheap wine or the expensive one?
  • Is it better to buy a cheap chair that can be afforded today but will only last a year, or an expensive one that will last for many years but cannot be afforded until next year?

If we look at these everyday questions, most of us humans start to think in different categories of quality – for example durability, cost effectiveness, utility and so on.

Once you start to differentiate which category of quality you're talking about, many conversations about "cost vs. quality" become much clearer. (Thinking in project-management-triangle terms, this would make the specific "quality" one part of the scope axis.)

Product quality vs. production quality

The last point I would like to stress in this article is the differentiation between the quality of the product (you can make a cheap tool out of cheaper materials) and the quality of the production (you can work with unsuitable tools yourself or deprive your employees of good education).

The important thing here is that

  • product quality –or lack thereof– directly affects the customer's experience, while
  • production quality directly affects your own bottom line – the worse your production quality is, the more it hurts your bottom line.

Take the drill bit example: when you ship cheap drill bits, the customer will need to replace them earlier. But if you use cheap drill bits on your own manufacturing line, you will have to replace them more often, resulting in additional work for you and in downtime with the accompanying lost revenue.

And if you think that is only true for manufacturing, think of well-written, easily understood software vs. the nightmare we see all too often, where a complex, unstructured and undocumented code base leads to a simple change taking weeks to implement – which in turn loses a couple of potential buyers or creates effort for manual workarounds.

Therefore, when it comes to production quality the “cheap vs. good” adage actually can be quite harmful. In most cases lowering production quality will make the total cost go up.

till next time
  Michael Mahlberg

Wednesday, January 10, 2024

A minimum setup for planning?

“Without planning we would fall right back to being hunters and gatherers again”

-- someone on the internet

The gist of it (aka TL;DR):

  • Separate sizing and planning – different circumstances lead to different times to completion, even for equal sized work items,
  • replace estimation by analysis and forecasting,
  • apply confidence intervals according to your familiarity with the problem class. And
  • communicate the whole probability distribution whenever possible.

A minimum setup for planning?

Weibull Distribution as illustrated at wikipedia, file license CC 4.0 SA

While I’m a big fan of many of the “no estimates” ideas and the whole “beyond budgeting” movement, I think we still need to make planning better for those instances where we can’t avoid it.

When we talk to people outside of product development and project work, estimation and planning are very normal. Questions like “What do we plan for dinner?” or “What are the plans for the party?” show that planning is a normal thing for many, if not most, people.

There are many situations where we need to do planning and "estimation" outside of project and product work. After all, it was planning and estimation that enabled us to become settlers instead of hunters and gatherers several millennia ago.

The important thing is to un-mix the two concepts. And maybe also extract a third component called “co-ordination.”

Planning and estimation in the physical world

When we ask someone "When will it be done?" we implicitly ask a number of questions, with different contexts, like:

  • With regard to the understanding of the problem (to gauge its actual size)
    • How much work has to be done for this? (How many square meters of wall have to be painted? How much scaffolding has to be built for this? Etc.)
    • How much experience do you have with this kind of work? (Have you ever held a paintbrush? Have you used this kind of paint on this kind of surface before?)
  • With regard to the understanding of the problem (this time, to gauge its feasibility)
    • What other problems have to be solved first? (Do I have to move my furniture into a U-Haul for the time of the renovation?)
    • Can these problems be solved directly, or do they, in turn, have any prerequisites? (Do I need to have a driver's license to rent a U-Haul?)
  • With regard to the execution of the solution
    • How many people will do the work? Can the work be parallelized?
    • Who will actually perform which work? How experienced are they? (How long does it take them to paint one square meter, how long does it take them to build one running meter of scaffolding?)
  • With regard to external aspects
    • What other work has to be finished at certain times? (e.g. outdoor work during dry weather that has to be done first)
    • What other activities depend on this task? (How long will I have to live in the motel and leave my stuff in a U-Haul before I can move back in?)

Planning and estimation in the project world

When we go through these questions, we can infer some activities for planning in the product / project world (and in the world of so-called agile software development) as well.

Separating size and certainty

When we separate the question of the amount of wall surface to be painted from the question of experience in painting walls, we also separate the question of “size” and the question of “certainty.”

Being aware of the (un-)certainty enables us to better assess how much accuracy there will be in our plans and to adjust for it accordingly.

Using Liz Keogh's model of complexity, for example, we could agree that for all activities categorized as 3 ("Someone in the world did this, but not in our organization (and probably at a competitor)") we will assume that they might take anything between half as much effort as we think now and twice as much.

For an item categorized as a 1 (We all know how to do this [and did it several times before]) that range might –just as an example– be from 0.8 to 1.2.
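As a minimal sketch of that idea – with example factors only, not recommendations – such a mapping from complexity level to uncertainty range could look like this:

    # Example uncertainty factors per (assumed) complexity level.
    UNCERTAINTY = {
        1: (0.8, 1.2),   # "We all know how to do this"
        3: (0.5, 2.0),   # "Someone in the world did this, but not in our organization"
    }

    def effort_range(point_estimate_days, complexity_level):
        low, high = UNCERTAINTY[complexity_level]
        return point_estimate_days * low, point_estimate_days * high

    print(effort_range(10, 1))  # (8.0, 12.0)
    print(effort_range(10, 3))  # (5.0, 20.0)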

Of course, you have to find the right factors for uncertainty for each level on the scale according to your local situation.

Separating size and effort

As even Mike Cohn wrote: estimation is about effort, and knowing the "size" of an item does not automatically give us the effort necessary to complete that item. (Using white-out to "paint" the wall will result in a different amount of work than using a spray gun.)

Separating effort and duration

Here’s the thing about ‘estimation’ and duration: even if we knew the effort that is required, we still can’t know the time it will take from start to finish.

Let’s look at a different analogy here, specifically with regards to relative estimation.

If you ask two people –one who drives a sports car and another one who drives a truck– how long it would take them to drive from Munich to Berlin (about 600 km) relative to the time it takes them to drive from Munich to Karlsruhe (about 300 km), they would probably both answer the same: "Twice as long."

In this example relative sizing enables people with different backgrounds to still have meaningful conversations about effort, without even needing to know the absolute estimation of one another.

Still, when executing, the person driving the sports car would probably deliver a package to Berlin in less time than it would take the one driving the truck to get to Karlsruhe. (Hint: trucks are limited to 80 km/h in Germany, and there are –unfortunately– still large parts of the German highway system without a speed limit, so the sports car could leverage its top speed from time to time.)

What this example also shows, is that it is important to keep in mind that the actual time it takes to complete an item can very much depend on the current capabilities of the people working on it.

Managing prerequisites and dependencies

Even in the world of agile software development, it is necessary to understand prerequisites and dependencies.

That does not mean that we need Gantt charts or PERT diagrams with detailed timelines months and years into the future. The well-researched cone of uncertainty has shown that these would be useless in a matter of weeks anyway. To me, the frustration with the misuse of techniques like Gantt charts and PERT diagrams seems to be one of the drivers of the whole "no estimates" movement.

But using the basic ideas of PERT for example –actively modeling which activity is dependent upon which other activity and which activities can be tackled independently– is still quite helpful. Just mapping out –and agreeing upon– which hard dependencies exist on the next lower level of abstraction makes a huge difference when it comes to succeeding with most real-life multi-step ventures.
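To illustrate what "PERT without dates" can look like, here is a small hypothetical sketch – the items, their sizes and their dependencies are invented. The point is only that the longest dependency chain, not the sum of all sizes, determines how long the whole endeavor takes when independent items run in parallel:

    from functools import lru_cache

    dependencies = {          # item -> items it depends on
        "paint walls": ["build scaffolding", "move furniture out"],
        "build scaffolding": [],
        "move furniture out": ["rent U-Haul"],
        "rent U-Haul": [],
    }
    size = {"paint walls": 3, "build scaffolding": 2, "move furniture out": 1, "rent U-Haul": 1}

    @lru_cache(maxsize=None)
    def earliest_finish(item):
        # an item can only start once all of its prerequisites are finished
        return size[item] + max((earliest_finish(dep) for dep in dependencies[item]), default=0)

    print(max(earliest_finish(i) for i in dependencies))  # 5 -> longest chain, not the sum (7)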

Putting it all together: a model for sizing and planning

From my experience, every explanation about how to do planning and sizing right has to be wrong on some level, but maybe some [of the ideas] are useful nonetheless.

A flawed, but sometimes helpful, workflow for the question “When will it be done?”

  1. Cut it down to chunks of reasonably well understood size (analyze, don't estimate). Some ideas for that could be:
    • using the sizes NFC, 1, and TFB as explained by Pawel Brodzinski instead of t-shirt sizes or other arbitrary numbers
    • using equivalent items from the past is also quite helpful (selecting a couple of reference items, or at least one for each size, to compare future work items with)
  2. Make sure to know about the dependencies between the work items.
  3. Use historical data (e.g. lead time distributions separated per reference item class) to forecast durations – see the sketch after this list. (The closer you get to actually working on the elements, the more important it is to use not just any historical data, but data from the team (or other part of the organization) that will do the work.)
  4. Apply confidence intervals (e.g. based on Liz Keogh’s aforementioned scale) to those forecasts.
  5. Apply risk management heuristics (e.g. from “Waltzing with Bears” and maybe, but not necessarily using the Riskology spreadsheet) to the result of the previous steps.
  6. Communicate the result as a probability distribution, or at the very least as a range, not as a single number.
    • If you don't use the whole distribution, try to communicate a date or duration that feels sufficiently safe, like the 80th percentile, and communicate the other ends of the distribution in relation to that point (e.g. "for 8 out of 10 items like this, it should take 21 days, but we might end up three days earlier or – in 2 out of 10 cases – it might take 25 days"). This way you avoid anchoring people on the nano-percent probability of 18 days.
  7. If your endeavor consists of dependent items (and you can't break the dependencies), consider using something like the PERT approach – without actual dates – to plan parallel and sequential parts of the work.
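To make steps 3, 4 and 6 a bit more tangible, here is a minimal sketch. The lead times and the uncertainty factor are purely illustrative; the point is that the forecast comes from the observed distribution of similar past items and is reported as a range, not a single number:

    from statistics import quantiles

    historical_lead_times_days = [8, 9, 11, 12, 12, 14, 15, 17, 21, 28]  # similar past items

    def forecast(lead_times, uncertainty_factor=1.0):
        deciles = quantiles(lead_times, n=10)          # cut points of the observed distribution
        p20, p50, p80 = deciles[1], deciles[4], deciles[7]
        return tuple(round(x * uncertainty_factor, 1) for x in (p20, p50, p80))

    low, median, safe = forecast(historical_lead_times_days, uncertainty_factor=1.5)
    print(f"8 out of 10 items like this should be done within {safe} days "
          f"(median around {median}; only 2 in 10 finish within {low}).")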

Some people might argue that it would be possible to lump steps 3 to 5 together into just one so-called "Monte Carlo simulation," as these come out of the box in many contemporary tools (and Riskology also uses a Monte Carlo simulation under the hood), but there are drawbacks to this approach.

In particular, the black-box approach of many of the Monte Carlo simulations that have been integrated into popular tools makes it very hard to really know what is being calculated, how things are weighted, and so on.

Even so, just applying steps 1 through 4 already gives so much better forecasting and planning that, in my book, it is definitely worthwhile.

till next time
  Michael Mahlberg



Wednesday, November 15, 2023

Decoding Service Levels: Rethinking SLO, SLE, SLA, and SLM

Language creates reality, and when it comes to the delivery capabilities of organizations, the language around service levels sometimes makes it hard to improve that reality.

The SLA (Service Level Agreement) has become a very loose term.

Photo by Sora Shimazaki: https://www.pexels.com/photo/multiracial-colleagues-shaking-hands-at-work-5668838/

Nonsensical sentences like "I expect an SLA of less than two weeks!" (used, for example, to underline the fact that the customer wants their delivery within two weeks or less) are way too common and make it hard(er) to discuss capabilities in a helpful manner – at least when we take SLA to mean Service Level Agreement.

A lead time distribution diagram built with post its

Right now, early 2023, not only the SLA but also the SLE – the Service Level Expectation – is quite popular. But again, this term can easily be misused, and its definition depends quite a lot on context. It has recently been taken up by people working with Scrum and Kanban, and they definitely have their own interpretation.

Sometimes it is helpful to look at the original meanings of terms and try to use them in a way that is most helpful to the situations at hand.

Back to original meanings

Let’s look at what the terms actually (used to) mean and how they could all fit nicely together:

SLO - Service Level Offering

The speed and quality of service that is offered by someone (a person, a team, or an organization) to someone else.

SLE - Service Level Expectation

This term was commonly used mainly in RFPs (Requests For Proposals) and in the context of contracts and offerings, and describes which level of service would be expected from someone who bids on this RFP. E.g. "If you want to be our supplier for headlights, then we expect that we will have any headlight we order within 48 hours, as long as we don't order more than 200 pieces." Today's interpretation sometimes differs, as can be seen in numerous articles about SLEs and Kanban and Scrum.

SLA - Service Level Agreement

This is the actual, mutual agreement between two parties that has been reached after discussing

  • the needs and hopes of the client (the SLE in the original sense of the word) and
  • the capabilities of the provider (the SLO)

Most of the time an SLA that has really been negotiated also takes into consideration what each party considers to be a fair and economically viable compensation for that level of service.

SLM - Service Level Measurement / Monitoring

To make sure that everything still works as the parties involved intended, it is a good idea to measure the actual service level, so that adjustments can be made as needed.
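As an illustration (with invented numbers, not a recommendation), such a measurement can be as simple as checking recent lead times against the agreed service level:

    # Monitoring a hypothetical SLA such as "85% of standard requests within 14 days".
    completed_lead_times_days = [5, 7, 9, 10, 11, 12, 13, 14, 16, 20]  # made-up sample data

    SLA_DAYS = 14
    SLA_TARGET = 0.85

    within_sla = sum(1 for t in completed_lead_times_days if t <= SLA_DAYS)
    actual_rate = within_sla / len(completed_lead_times_days)

    print(f"{actual_rate:.0%} delivered within {SLA_DAYS} days (agreed: {SLA_TARGET:.0%}) -> "
          f"{'on track' if actual_rate >= SLA_TARGET else 'time to talk about it'}")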

Why can this be helpful?

Once we return to acknowledging the different needs of the different parties involved, it becomes much easier to arrive at the most important letter of this whole three-letter-acronym (TLA) zoo: the A as in Agreement!

When the SLO is left in the hands of the people providing the service they can measure their own capabilities and actually know what they can offer.

When it is clear that the expectation of the outside world cannot be defined from within the party providing the service, but only from the outside, then the actual clients can (and have to) define what they need.

And if all parties concerned know what they can do and what they need, they will have a much easier time coming to an agreement.

till next time
  Michael Mahlberg

Wednesday, November 08, 2023

What’s the «Minimum» in MVP (Minimum Viable Product) anyways?

“There can only be one!” (Or not?)

At least one of my friends gets “all highlander” on people who try to talk about having defined the MVPs (plural) for the product strategy and makes a very strong point of the fact that there can only be one (predefined) minimum viable product.

But is that really true? Or is it prudent to say “we’ve defined a couple of MVPs?” Does it make sense to talk about the amount of users who actively “use our MVP?” Or is an MVP a thing that does not really provide any user functionality?

Well – as I like to point out: context is king. Always. And we have to accept reality as it is. In that vein, I think we have to recognize that people are using all of the notions above.

From what I've learned over the last couple of years, at least two fundamentally different ideas behind the term MVP are in widespread use – and of course we also have to take semantic diffusion into account.

The MVP in the world of Lean Startup (2008)

Some green liquid in a scientific jar – Free Image on pexels

With Eric Ries' seminal work on The Lean Startup and the adoption of the whole lean startup approach, the term MVP gained real popularity.

Steve Blank put it as "An MVP is not a Cheaper Product, It's about Smart Learning" and "MVP is whatever you could build to learn the most at a certain time" in this interview at Startup Istanbul.

According to Alex Osterwalder's ideas, for example, an MVP is "A model of a value proposition designed specifically to validate or invalidate one or more hypotheses" – at least in the context of value proposition design – so I would argue that there is some merit to this position.

The MVP in the world of product development (2001)

An old time cash register  – Photo by Ramiro Mendes on Unsplash

Way before Lean Startup, in 2001, Frank Robinson published a definition of the term MVP that is much closer to the concept I have most often heard associated with the term by people who have not been exposed to the ideas of Eric Ries:

The smallest product that will actually be sellable.

The problem with this outlook is of course, that the risk-issues Steve Blank and Eric Ries point out in their work are not at all addressed by this approach.

For those interested, the original wording from Robinson was: "The MVP is the right-sized product for your company and your customer. It is big enough to cause adoption, satisfaction, and sales, but not so big as to be bloated and risky. Technically, it is the product with maximum ROI divided by risk. The MVP is determined by revenue-weighting major features across your most relevant customers, not aggregating all requests for all features from all customers."

For a more in-depth discussion of the topic I recommend reading through Productboard's article about MVPs and the Product School's comparison between MVP and prototype.

Given this point of view, I’m inclined to argue that there is value in this position as well.

Other terms that might help

In the 2004 book Software by Numbers, Jane Cleland-Huang and Mark Denne introduced the term minimum marketable feature (MMF), which nicely describes what is often meant when people talk about MVPs:

A chunk of functionality that can be sold together and makes sense for a potential customer to buy (Paraphrased by me)

till next time
  Michael Mahlberg

Sunday, November 20, 2022

How can I rent a 50 foot yacht (or get a job as a scrum master) if I have no experience? You shouldn’t!

Recently, a successful speaker and trainer –whom I also happen to know personally– posted a (German) article on linkedIn, where he “replied” to the many questions he got on “How do I find a job as a Scrum Master if I have no experience?”

He actually did give suggestions. And that is what really made me sad, because the question alone already highlights much of what's wrong in today's post-agile world.

In my opinion the only right answer would have been: "You shouldn't!"

aerial view of green body of water with sank ship photo – Free Image on Unsplash

To me, the whole "How can I get a job as a scrum master if I don't have any scrum master experience yet? It's so unfair that they all expect me to have experience." is fundamentally the wrong question to ask.
It is like asking “How can I rent a sailing yacht if I don't have any sailing experience yet? It's so unfair that they all expect me to have experience. How should I ever get the experience if they don’t let me try it out?” or maybe even "How do I get a job as a surgeon if I don't have any experience?"

There are many jobs for which you do need experience.

Let's look at what a master used to be:

In most areas (university excluded) you become a master after you've been an apprentice (usually for three years) and after completing your journeyperson’s time (in Germany usually also three years). After that, you have to pass an examination and deliver a so–called masterpiece.

In the agile realm people can become a “Master” (at least a Scrum Master) after a two-day training course.

The people who defined Scrum (around 1995) were part of the group that wrote the Manifesto for Agile Software Development, so it's safe to assume that they also co-created the first page that starts with "We are uncovering better ways of developing software by doing it and helping others do it.” If one believes that sentence, how much sense does it make to have people who don’t have any actual experience doing it train and coach other people in things they never experienced themselves?

If you look at the original idea of a Scrum Master you will find that the Scrum Master is –to pick just a few items– meant to

  • [be …] accountable for establishing Scrum
  • help[ing] everyone understand Scrum theory and practice, both within the Scrum Team and the organization.
  • enabling the Scrum Team to improve its practices, within the Scrum framework
  • [be] leading, training, and coaching the organization in its Scrum adoption
  • for more: see the current version of the scrum guide

All of these things are pretty hard to do if you only know them from theory. For similar reasons maritime law makes sure that even though the first journey of a skipper is their first journey as a skipper, it is by far not their first journey in an active role on a ship. In the same vein, new Scrum Masters really ought to have experienced the environment from numerous roles to fulfill the expectations laid out in the framework.

To quote one of the original books on Scrum by Ken Schwaber (Used to be required reading for getting the Scrum Master certification in the olden days):

The Team Leader, Project Leader, or Project Manager often assume the Scrum Master role. Scrum provides this person with a structure to effectively carry out Scrum’s new way of building systems. If it is likely that many impediments will have to be initially removed, this position may need to be filled by a senior manager or Scrum consultant. Schwaber2001, p. 32

On the other hand, especially that innocuous "enabling the Scrum Team to improve its practices" from the bullet list above implies (according to most, but not all, Certified Scrum Trainers (CSTs) I know) all the stuff from the technical side of agile as well.

So please, if we want to achieve the goals we had in the early 2000s, when "lightweight processes" –as they were called before 2001– became "Agile Software Development", then let's stop dishing out the idea that the person who is intended to help people get better at the game can learn what to do in a couple of days and "on the job." Let's be realistic and tell people that they should go through the path of apprentice and journeyperson themselves before they start acting in roles that are designed to be held by experienced people.

Because of all of this, in my opinion, the best answer to the original question “How do I find a job as a Scrum Master if I have no experience?” should have been “You shouldn't.” Amended with the suggestion of better questions to ask.

To me one better question would be: “How can I get the experience that is necessary to be an effective and useful scrum master?” (From my point of view, working as a developer, tester, subject matter expert or maybe even as an intern in an environment that actually has a working(!) Scrum setup are some good ways to get experience.)

In my experience, the (few) people who ask this latter question and try to get that experience usually don't have any problem with job offers – except that they might get too many. After they did get the experience…

till next time
  Michael Mahlberg

Sunday, September 18, 2022

Coach? Consultant? Trainer?

Language is a funny thing.

As philosopher Wittgenstein said "The limits of my language mean the limits of my world."1

Or, to take another angle, as Steve Young put it "Perception is reality" 2

Without wanting to reiterate my whole earlier post, I would just like to shine a light on the fact that outside the agile realm the coach is much more prevalent in sports than in psychology (e.g. life coaching).

And a sports coach acts quite differently from a life coach. Can you imagine a group of people hiring a coach because they want to become a soccer team, and that coach starting out by asking everyone how they think soccer should be played? Whether the distance between the goals is to their liking and whether a ball would be the best thing to play with?

If you can imagine this scenario, then I guess it is either with a sarcastic glance at the way many agile coaches work today, or you were reminded of some kind of comedy.

Life coaching, solution-focused coaching, and systemic coaching all have their places – even in soccer coaching – but usually not in the beginning, when the players are still unaware of the rules of the game, not well versed in the moves, and inexperienced.

And by the way: the oldest mention of a coach in what later came to be the agile realm was from eXtreme Programming (XP). To quote my aforementioned article and paraphrase from eXtreme Programming explained:

“... the [coach’s] job duties are as follows:

  • Be available as a development [programming] partner [...]
  • [make refactoring happen]
  • Help programmers with individual technical skills, like testing, formatting, and refactoring
  • Explain the process to upper-level managers.”

or – on a later page – “Sometimes, however, you must be direct, direct to the point of rudeness. [...] the only cure is plain speaking.” And also “[...]I am always in the position of teaching the skills [...] But once the skills are there my job is mostly reminding the team of the way they said they wanted to act in various situations. The role of the coach diminishes as the team matures.”(p 146)

So maybe – just maybe – it would be helpful to be aware of whether the team needs a sports coach or a therapeutic coach.

I find that both are appropriate at different points in time, but I have seen a lot of cases recently, where the client was looking for –and needed– a coach akin to the sports-coach metaphor, ended up with a coach conforming to the life-coaching metaphor and everyone just ended up really unhappy.

till next time
  Michael Mahlberg


  1. "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt", – Ludwig Wittgenstein, Tractatus logico philosophicus, 5.6, 1922.↩︎

  2. "Perception is reality. If you are perceived to be something, you might as well be it because that's the truth in people's minds." - Steve Young↩︎

Sunday, May 08, 2022

«Creating Feedback Loops» is not about having meetings

In many modern approaches to work, like The Kanban Method, Lean Startup, Agile Software Development, or DevOps, Feedback is an essential part of the approach.

Sometimes the role of feedback is explicit, whereas in some cases it is more of an implicit assumption that is only visible upon deeper inspection.

The Kanban Method points it out explicitly as (currently) the fifth practice (Establish feedback loops), while the DevOps movement has one of its "three ways of DevOps" dedicated to it (The Second Way: the principles of feedback), which in itself consists of five principles.

“Let’s have more meetings” – a common misconception

Unfortunately, some of the currently popular approaches have introduced the notion that implementing feedback loops implies having some special meetings for feedback.

Feedback like that could be a daily meeting regarding the current status of the work – especially focusing on problems or things to solve – or an event-based meeting, like a post-deployment retrospective.

For example, if you look into The Kanban Method, you'll find a whole slew of other meetings to be held at different cadences to foster more feedback in your work.

While these meetings can be very helpful, they are not at all the best way to get real feedback really quickly.

The problem with meetings as the primary source of feedback

The trouble with feedback that only comes periodically, and is dependent on human interaction, is that most of the time it comes too late.

Consider some feedback loops from outside the work organization world:

  • The speedometer of your car gives you feedback about your current speed – just waiting for the speeding tickets to come in would be way too slow as a feedback loop.
  • Or how about another thing in your car that you get information about: the oil in the motor, via the oil warning lamp and the oil dipstick. For certain kinds of information, the dipstick, at which we look from time to time, gives us enough feedback. For the important short-term information that the oil pressure is too low, we need faster feedback – that's why your car comes with an oil pressure warning lamp.

How can we create feedback loops inherent in the ways we work?

What we actually want when we talk about feedback, is usually a very prompt response from the system we are interacting with. This system can be anything from a technical system through a physical system or a mechanical system to a system consisting of people interacting with one another.

One of the best ways to get early feedback is to actually remove inventory.

You may have heard that removing inventory is a central tenet of all the lean approaches, but when thinking specifically about feedback, removing inventory has the added benefit of making sure that we get our feedback earlier.

So really, what we mean by "creating feedback loops" is finding ways to see the final impact of the things we just did as early as possible, instead of waiting for the effects to happen somewhere very far downstream.

till next time
  Michael Mahlberg

Sunday, March 13, 2022

Three strategies to ease the meeting pain

“Since we started the new approach, I hardly ever get any work done, because we have so many meetings.” That is a sentiment I hear quite often when I’m visiting clients who have just started with some new approach. Surprisingly often that is the case if that new approach is some flavor of “Agile.”

This seems more frequent if the client is a large corporation, but it certainly also happens at startups and SMEs.

And yet, on the other hand, it seems to be increasingly hard to get any meetings scheduled. Let's look at some approaches to make things a bit more manageable again.

Once we start to differentiate between meetings that generate work and meetings that get work done it starts to get easier to handle the workload.

As described below, once we start making that distinction we can apply strategies like

  • planning the Work instead of the meetings (allocating time in my calendar for “getting stuff done” – especially helpful when applied –and negotiated– on a team or even multi-team level)
  • conscious capacity allocation (I will have 3.5 hours of working time and 3.5 hours of meeting time each day)
  • Actively keeping buffers open for unexpected, short term interactions (Putting blockers in my calendar that I remove only shortly before they are due)

Now let’s look at these strategies in detail:

Two types of meetings

Some people (maybe many) tend to view all meetings as “a waste of time” and “not real work” – I beg to differ.
I would say that we need to differentiate between meetings that leave us with more work than before and meetings that leave us with less work than before.

Work generating meetings (coordination time)

Some meetings leave us with more work than we had before we attended the meeting.

  • Planning meetings, where the actual purpose of the meeting is to find or define work that needs to be done.
  • Status meetings, where the original intention is just to "get in sync" but where it often happens that someone realizes: "oh, and we have to do X"
  • Knowledge sharing meetings, where not everyone affected is invited and thus we need to share the knowledge again.
  • Knowledge building and gathering meetings, where the purpose is to better understand something we didn't fully understand before – be it a user interview in a product development company, a design session for something we build ourselves, some kind of process improvement meeting, or something else in the same vein.

This list is of course by no means exhaustive, but it should give you an idea of the kind of meetings that could be put in that category.

Meetings that get work done (creation time)

On the other hand, there are meetings that actually get work done – especially for work that needs more than one person to complete it.

  • Design Sessions that end with decisions.
  • Pair-Writing an article or a piece of software
  • Co-creating an outline for an offer
  • Co-creating the calculations for next year's budget (if your company still does budgeting the old way)

Try not to mix the two types of meetings – at least not too much. Especially try to make the second kind of meeting really a meeting that gets work done. As in done-done. Make sure that there is no "X will write this up, and we'll review it in two days."

If it’s good enough in the meeting, it’s probably good enough for work.

If we introduce some kind of follow-up work, especially follow-up work that has to be reviewed again, we actually prevent people from using the result of the work we just did in that meeting. Try to make it "good enough for now" and then get on with creating value in other places.

And if it takes too long to create those documents in the meeting with the tools you have available there, you probably have a great opportunity to re-think your choice of tools.

With this in mind, let’s look at the three strategies in a bit more detail.

And even though the strategies are presented in a specific order, there is no real ordering between them. Each of them works well on its own, and you can combine them in any possible way.

Strategy one: Plan the work, not the meetings

Even if you apply only this one strategy it can be a real game changer.
Instead of keeping your agenda open for meetings and then working during the few times where no meeting is scheduled, no meeting needs preparation, and no meeting needs post-processing, switch it around.

Start by filling your schedule with “creation time” – time slots where you intend to do the part of your work that directly creates stuff. When you’re a knowledge worker in the times of a pandemic, this might also include meetings, but those should be only meetings that create tangible results. (This could be a design session with colleagues if you’re in manufacturing, it could be an editing session on a paper if you’re in academia, or maybe a pair- or mob- (ensemble) programming session if you’re in software development. Any meeting that outputs work.)

Only after you have filled your schedule with a reasonable amount of time allocated to "creation time" should you fit those other things – what I like to call "coordination time" – into some of the remaining spaces on your calendar.

This “coordination time” can include planning, status updates, learning and agreeing upon how you want to do things, understanding the challenge you’re currently working on, and so on. It is basically the coordination you need to efficiently get stuff done in the “creation time.”

Some people tend to call only "creation time" work and the rest of the time meetings. However, meetings that neither add value through creation nor through a better understanding of who is doing what, when, and how, should be eliminated altogether – and maybe replaced by an e-mail or a short written note.

Especially when we work on process improvements or introduce new approaches we tend to start by planning when the related events (or ceremonies to use an older term ;-) ) should occur to include all the necessary participants.

I suggest first trying to agree upon the times when all the participants can do their "creation work" and then fitting the events and other necessary meetings around that.

Combining this approach with a conscious allocation of capacity makes it even more powerful.

Strategy two: Allocate capacity consciously

Don’t just look at the days of the week as a long stream of hours passing by. Make a conscious decision on how to invest the time beforehand.

If you're involved with some kind of process framework, you probably have some of the time allocation already done for you: "daily standups", "plannings", "reviews" and "retrospectives", to name but a few.

But is the rest of the time really uniform? For most of us it isn’t. It consists of periods where I can just chop away at my work, of periods where I need information from other people and of periods where other people need information from me.

Creating even an informal and rough plan of how you intend to allocate your time helps a lot in reasoning about the number of meetings and makes the gut feeling a lot more tangible and negotiable.

Such a rough and informal plan might just look like this:

Allocation per Week (on average)
Process related       4h (8h in total every two weeks)
Creating stuff       20h (4h per day)
Helping others       10h (2h per day)
Slack for surprises   6h (a bit over an hour per day)

With this little list it is already much easier to argue for or against meetings. And if we start tracking how we actually use our time against this list, it usually gets even more helpful. You might want to give it a try.
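A minimal sketch of such a tracking step (the "actual" numbers are invented for illustration) could be as small as this:

    # Comparing actual time use against the rough weekly allocation above.
    planned_hours = {"process related": 4, "creating stuff": 20, "helping others": 10, "slack": 6}
    actual_hours  = {"process related": 7, "creating stuff": 14, "helping others": 12, "slack": 7}

    for category, planned in planned_hours.items():
        actual = actual_hours[category]
        print(f"{category:15} planned {planned:2}h, actual {actual:2}h ({actual - planned:+}h)")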

Strategy three: Plan your slack ahead of time

Just put "Slack Spacers" in your agenda and remove them shortly before their time comes up. This way, if someone asks you whether you have time for them today, you might well be able to say "yes" without having to move any other appointments.

To be able to react to things that are happening, every system needs some slack. If there is not enough slack in the system, every little disruption or interference will wreak havoc on the system and might even result in a total system breakdown.

Back in the seventies it was “common knowledge” that in knowledge work one should never plan out more than 60% of one’s day. Simply because “things will happen.” How does that fit in with calendars that are filled up to the brim for the next two weeks?

If you allocate specific times for “creation work” and put them in your calendar you might already have one thing that absorbs some of the “things that happen”, but that’s not always quite what you intended to do with those allocated time slots.

A simple and effective strategy to deal with this is the use of "Slack Spacers" – appointments with yourself that are just in your agenda to make sure you don't plan out too much of your time too far in advance.

Those could range from 30-minute slices, which you remove on the evening of the day before they come up, to 4-hour slots twice a week, which you remove on Sunday evenings. Or any other sizing and timing that works for you.

Depending on your environment you might either declare them for what they are or hide them behind inconspicuous titles like “Preparation for the XYZ project.”

Wrap-up

So these are three strategies you could put into effect right now:

  • Foster collaboration by planning the time you work together
  • Get control of the amount of work you can do by allocating capacity deliberately
  • Create maneuverability by explicitly blocking time for work that shows up unannounced.

till next time
  Michael Mahlberg

Sunday, February 27, 2022

Unplanned work is killing us – really?

One of the things I often hear teams complain about is the amount of unplanned work they have to handle.

Drowning in irrefutable small requests

This unplanned work also frequently seems to be “irrefutable.” But is it? What does it mean to take up an unlimited amount of irrefutable work that has to be done right away?

Starting a new task immediately when it arrives means that you either have been idle when it arrived or –just as plausible– you had to put the stuff you were working on to the side. As long as you only have one item of irrefutable work at a time that might work. However the problem begins as soon as the next piece of unplanned work arrives before you were able to complete the current one.

In this situation you’re most probably not idle (since you’re working on the previous irrefutable piece of work) and you can’t easily put away your current work (because, well, it is also irrefutable).

This dynamic usually leads to a cascade of interrupted work that has been labeled as “irrefutable” and that still gets tossed in the “waiting bin” at the back end.

Most of the time, letting stuff sit in some "waiting" state late in the process makes the "client" unhappy – the very person who insisted on the irrefutability of the work.

This problem gets worse because often there isn’t any time to inform the original client that their work has been paused. After all, the new piece of irrefutable work had to be started immediately!

Thus, even though people try to work on the requirements coming at them as fast as they can it seems to be an uphill battle without much chance of ever getting a grip on the work.

But is that really the only way?

Accept reality

Once we face the fact that in these situations things will take longer to be completed than the mere net working time, we can employ other approaches to get on top of the situation.

There is this seemingly little trick that enables us to transform unplanned work into planned work. It’s called Planning. And the cool thing is that it doesn’t have to be big.

Once you know how many irrefutable small requests usually land in your lap each day, you can re-structure your day to handle them far more effectively.

You can get that number either from your gut feeling or from some simple kind of low-tech metric, like tally marks on a sticky note near your keyboard. Or maybe just start with an arbitrary guess and iterate towards better numbers later.
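As a back-of-the-envelope sketch (all numbers invented), turning those tally marks into a daily reservation can look like this:

    from statistics import mean

    requests_per_day = [4, 6, 3, 5, 7]   # tally marks from the last five working days
    avg_minutes_per_request = 25         # a gut-feeling starting value, to be refined later

    reserved_hours = mean(requests_per_day) * avg_minutes_per_request / 60
    print(f"Reserve roughly {reserved_hours:.1f} hours per day for unplanned small requests.")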

Planning to plan

So if you come to the conclusion that, if all that work came in structured, you could do it in 2 hours a day on average, there are two structural elements you could introduce into your daily structure to handle this:

  • Firstly, block out those two hours from your schedule. You will lose 2 hours per day anyway, in which you will not be working on standard work. This is part of the "accept reality" thinking.
  • Set aside a couple of minutes for planning when you will work on these items and for feedback every couple of hours. Assuming you work 8 hours a day, I would take 5 minutes every two hours for “planning” which leaves us with 2 planning events per day.

All you do in these 5 minutes is a quick check of whether the requests actually fall into the category of "small" requests.

If they do, schedule them for later today or the next day, based on a rough guesstimation of the amount of work you have already scheduled for the respective window and the perceived importance of the task. After scheduling the request, you might want to let the client know that you scheduled the item and for when.

If they are not of the category “small” you have a different problem at hand – here you might still want to reserve a small amount of time in the 2 hour window to draft a more detailed feedback on why this request has to be discussed on another level. Still, you do this answering as a planned activity.

Just by accepting that the two hours you 'lose' per day are actually lost for standard work, and by subtracting 10 more standard-work minutes from your working day, you can probably convert 90% of your unplanned work into planned work – without adding to the actual customer lead time of the items that used to ruin your day in the form of unplanned work.

And as almost every situation is unique, you will most probably have to come up with different numbers, but the general principles stated here should be applicable to most situations.

till next time
  Michael Mahlberg

Sunday, February 13, 2022

Is the user story overrated? Some story patterns and formats to learn from

The term “User Story” or simply “Story” as a shorthand for a requirement has become quite widespread these days. But what does it actually mean and how can we benefit best from it?

We all know, what a story is, don’t we?

Let’s try this one on, for size:

“Once upon a time, there was… here goes the story … ever after”

That’s the kind of story that most people in the real world think about, when they hear the term “story.”

In the agile realm stories seem to be a different kind of beast

As I point out below, my personal recommendation is something quite different, but in the realm of Agile, stories seem to be something other than in the rest of the world. There, the majority of people seem to believe that "requirements packaged in the form of a story" are the central element that everything revolves around.

That extends so far that even the "speed" of development teams is (way too) often measured in something called story points – even though at least one of the potential inventors of the story-point concept says "I […] may have invented story points, and if I did, I'm sorry now."

And almost everyone in that realm, as well as in its adjacent territories, has –at one time or the other– heard the stipulation that a well-crafted story

  • starts with “As a <role>…”,
  • has an important “…I want <System behavior>…” in the middle
  • and –in the better cases– ends with “…so that <desired business effect>.”

So – why is this incarnation of the concept “story” so prevalent in the realm of Agile? And is it really the best way to handle requirements in contemporary endeavors? To write better stories today, we need to have a look at how stories came to be such an important instrument in the realm of “Agile Software Development”1 in the first place.

How stories came to software development

Back in the day, before the “Manifesto for Agile Software Development” was written, there were several approaches whose champions called their movement “lightweight software development” and who would later come together and write down what unified their approaches under the moniker “Agile Software Development.” These approaches used all kinds of helpful ways to describe what the system should be able to do.

In Scrum they had the PBI (Product Backlog Item), in Crystal the use case was somewhat prominent, other approaches used comparable artifacts. Extreme Programming was the one that used something called a User Story.

This concept of the user story somehow had such an appeal, that many of the other approaches embraced the idea – more or less.

It was more about the telling, than about the story

A key component behind the idea to use “stories” has even made it into the Manifesto for Agile Software Development – To quote the sixth principle from that manifesto

“The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.”

Before the recommendation that requirements should be talked about was written down in that form, it was embodied in ideas like the CCC (Card – Conversation – Confirmation) or the nice quote from the book XP Installed from the year 2000 that a card is a promise to have a "series of conversations about the topic."

Unfortunately, in today’s world the concept of On-site customers often has been reduced to a person who is called Product Owner but doesn’t have any real business authority and spends about two hours with the team every two weeks. Under these circumstances it seems questionable whether this approach to product development is still viable for all cases.

But I am convinced that understanding why it was okay to write only one sentence to represent a complex requirement back in the early days of lightweight methods helps a lot with writing good stories today.

The fact that the way of working that led to the original user story is hardly feasible in today's "corporate agile", with all its compromises, has a direct impact here. It implies that we need something more than just the concept of a "User Story" if we want to capture and process requirements in an efficient manner.

Don’t put the story in the center, focus on the value and the work item

What most approaches propose, is some container that represents “value for someone.” In the process framework Scrum this is called Product Backlog Item, in more general approaches –like the Kanban Method– it is often simply called Work Item.

Such a work item –to go with the broader term– can have many structures, and a few attributes are common to many such item types.

Of course, one of the attributes needs to be the actual requirement. And that could be represented by a story. But does that have to be a user story? Actually, there are some pretty helpful alternatives out there.
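Just to illustrate the idea that the story text is only one attribute among several, here is one possible – purely hypothetical – shape of such a work item; the field names are not taken from any particular tool or method:

    from dataclasses import dataclass, field

    @dataclass
    class WorkItem:
        title: str
        story: str                                   # stakeholder story, user story, job story, ...
        acceptance_criteria: list = field(default_factory=list)
        blocked_by: list = field(default_factory=list)
        class_of_service: str = "standard"

    item = WorkItem(
        title="GDPR deletion mechanism",
        story="In order to avoid being sued for GDPR violations, our CISO requires "
              "a deletion mechanism that can be executed at least manually.",
        acceptance_criteria=["deletion can be triggered manually for article-17 requests"],
    )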

If you use some kind of story, get to know several types of stories well

As is often the case, the habitat of the original user story provided many things that were no longer present once the concept was mimicked elsewhere. And as time went by, some people re-discovered what a story could mean for them. Some other people –many, actually– got confused by the story concept, since they never really saw it in action and only knew about it through very indirect word of mouth.

Stakeholder Story

After the “As a «role» I want…” format for user stories had been around for quite a while, Liz Keogh pointed out that many of the so called user stories out there are not actual user stories but instead Stakeholder Stories.

  • Format of the Stakeholder Story
    • Liz Keogh described her ideas and observations in the 2010 Article “They’re not User Stories.”

    • The generic form of this kind of story –the way I use it these days– is

      • In order to «the required business effect»

      • «some stakeholder or stakeholder persona»

      • «wants, needs, requires, …» «some kind of system behavior or future state»

  • Context for the Stakeholder Story
    • This is an extremely useful perspective if you have to describe requirements that are not actually wanted by the end user of the system, or that don’t actually have a direct user interaction.
    • Most of the requirements I encounter in enterprise contexts are more stakeholder-driven than user-driven. (Legal requirements, for example. Something like "To avoid being sued for GDPR violations, our CISO requires that we have some GDPR-compliant deletion mechanism that could be executed at least manually if ever a user actually should file a complaint that conforms to article 17 of the GDPR.")
  • Caveats for the Stakeholder Story
    • The stakeholder should be as tangible and concrete as possible. Unlike with the model of personas in user stories, for stakeholders in stakeholder stories it is extremely helpful to name a real person.
  • What to avoid for the Stakeholder Story
    • The most common problem I see with stakeholder stories these days is that the required business effect gets mixed up with the system behavior or future state.

User Story

It was probably Mike Cohn who popularized the now so common form of user stories in his 2004 and 2005 books "User Stories Applied" and "Agile Estimating and Planning", but to my knowledge Rachel Davies came up with it around 2002 at Connextra (actually, that's also what Mike Cohn's post about the three-part user story tells us).

  • Format of the User Story
    • The now prevalent way to capture user stories is the well known

    • “As a «role or persona» I want «system behavior» so that «desired business outcome».”

    • This is described (amongst other sources) in the often quoted Article Why the Three-Part User Story Template Works So Well by Mike Cohn.

  • Context for the User Story in this sense
    • Helpful if you really have a product (sometimes a project, and seldom a service) that has actual interactions with actual users
  • Caveats for the User Story in this sense
    • It should describe an interaction between a user and a system that will be possible after the requirement has been implemented.
  • What to avoid for the User Story
    • A story like "As a team member, I want another team member to implement the database logic for the WhatNotField so that it will be available" uses the format all right, but misses almost the entire point of using user stories.

Job Story

To my knowledge, the whole "Jobs to be Done" way of approaching product challenges became popularized through Alex Osterwalder's work with Strategyzer around the value proposition canvas. [Please let me know if you know the whole back-story, I'd be really interested in learning about it.] Soon after that, the JTBD idea proved so powerful that it spawned its own community.

Thanks to my esteemed colleague Matthias, I learned about the job story format and the whole idea of using job stories to work on product ideas.

  • Format of the Job Story
    • The article Replacing The User Story With The Job Story describes the idea of the Job Story as separating situation, motivation and expected outcome by using the format
    • When ________, (the situation part)

    • I want to ________ (the motivation part)

    • so I can ________ (the expected outcome part)

  • Context for the Job Story
    • Good for very young stories, when you still try to figure out what you’re really talking about.
  • Caveats for the Job Story
    • Unlike Stakeholder Stories and User Stories, Job Stories don’t (yet) provide an easy way to fill out the ________ part, so you really need to dive into the ideas outlined in the above mentioned articles and there can be a lot of discussion about the “right” way to write such a story.
  • What to avoid for the Job Story
    • Don't treat it like a piece of functionality that just needs to be executed. Job Stories make for good candidates for the narrative flow of Story Maps. There's also a 2-page summary explanation of Story Maps if you want to know more about that concept.

Of course this only covers some aspects of the usage of stories in today's post-agile society, and I would strongly encourage anyone to look (deeply) into the material about INVEST and SMART and at User Story Mapping to get even more background with regard to working effectively with stories to represent aspects of requirements. But I hope this article gives you some ideas on when and how to use some other kinds of stories to represent requirements that are really hard to fit into the "As a «Role» I want…" format.

till next time
  Michael Mahlberg


  1. (Remember: There is not really an Agile Manifesto)↩︎

Sunday, January 30, 2022

There is no Agile Manifesto

Just a little reminder: what many people nowadays think is a way of living or even a way of designing whole organisations was originally something quite different…

What most people call “The Agile Manifesto” actually has a title.

It is called Manifesto for Agile Software Development

And its authors propose the “Twelve Principles of Agile Software”.

  • It does not specify a defined approach to continuous improvement – TPS (Toyota Production System) does that, for example
  • It does not elaborate on good ways to optimize lead times – The ToC (Theory of Constraints) does that, for example
  • It does not express any opinion on how a company should be structured in the post-Taylor era – Sociocracy and its derivatives do that, for example. So does New Work
  • It does not tell anyone how to handle finances without upfront budget plans – Beyond Budgeting does that, for example

And all of the approaches mentioned as examples came into existence long before 2001, the year the “Manifesto for Agile Software Development” was drafted.

If you look a bit further on the original web-page that launched the term “Agile” into the world, you’ll find that in the section “About the Manifesto”, as well as in the headline above the twelve principles, it has been called “The Agile Manifesto” even by its authors. Maybe this helps explain some of the confusion.

Personally, I find it very helpful to remember the context where the whole idea of “Agile” came from – maybe it’s helpful for you, too.

till next time
  Michael Mahlberg

Sunday, May 16, 2021

The difference between acceptance criteria and the definition of done

When it comes to building things, we often want to know when it's really done. Two terms have gained popularity over the last couple of years within the realms of software development and other areas that use spillover ideas from the agile movement. These two concepts are acceptance criteria and the definition of done. Unfortunately those concepts are often mixed up, which leads to subpar results.

The distinction can be pretty short: the definition of done (DoD) is a property of a process-step, while acceptance criteria are properties of a request for a capability. However, the question remains: why does it matter?

Let’s clarify some terms

I intentionally used the uncommon phrase “a request for a capability” to refer to a requirement, to avoid notions such as story, requirement, feature, epic, etc. Sometimes just saying what we actually mean instead of using an overused metaphor can make things much clearer. For now I will call “requests for a capability” simply work items, since that term has –at least up until now– very few connotations.

Where does the definition of done come from, and what does it mean?

To be perfectly honest, I don't exactly know where the phrase came from. (I'll come back to Scrum in the postscriptum below.) I've heard jokes about “If it works on your machine, you're about 80% done” and “do you mean done, or do you mean done done?” since the 80s. So obviously it's not really a new phenomenon that it's hard to tell when something really is done.

The term became more formalized, especially in the Scrum community, between 2005 and 2011, when “Definition of Done” became a top-level topic with its own heading in the Scrum Guide. In this context the definition of done is the sum of all quality requirements a work item has to fulfill to be considered “done.”

If we look at it from a process perspective, this is a policy all work items have to comply with before they can move from “working on it” to “done.”

where the DoD applies

Who brought us acceptance criteria, and why?

Again, the origins are lost in the depths of time. At least to me. But the first experiences I had with them as a part of agile software development were back in my earlier XP days, around the turn of the century.

At that time it was “common practice” (at the places I was around) to put requirements on cards. And when the time came to find the answer to “how would you know that this item was done?” with the onsite customer, we just flipped the card over and jotted the customer's acceptance criteria on the back of it.

Those acceptance criteria hardly ever included anything technical, let alone any requirements regarding the documentation or the source code repository in which the code should reside. Those things were captured by our working agreements – in a section that nowadays would be called a definition of done.

The acceptance criteria usually were things the customer would be able to do with the system once the requirement had been implemented. Something like: “I can see the list of all unbooked rooms in an area when I search by zip code” as one acceptance criterion for a card called “find available rooms” in a booking system.

Remember that these were the days of real on-site customers in a high trust environment and stories were written according to the CCC idea of Card – Conversation – Confirmation. Therefore it was quite okay to have such a vague acceptance criterion, where there was no up-front definition of what a “search by zip-code” actually means or how the “unbooked rooms” state had to be determined.

Nowadays these acceptance criteria are sometimes formulated as BDD- or ATDD-style scenarios and examples, which allows for very concrete and specific acceptance criteria (without enforcing that concreteness).
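
To make “concrete and specific” a bit more tangible, here is a minimal sketch of the zip-code criterion from above written as an executable example in Python – BookingSystem, add_room and search_by_zip are names invented for this sketch, not part of any real system or of the original story:

    # A sketch only: the acceptance criterion "I can see the list of all
    # unbooked rooms in an area when I search by zip code" written as an
    # executable example. BookingSystem, add_room and search_by_zip are
    # hypothetical names invented for this illustration.

    class BookingSystem:
        def __init__(self):
            self.rooms = []  # (room_id, zip_code, booked) tuples

        def add_room(self, room_id, zip_code, booked=False):
            self.rooms.append((room_id, zip_code, booked))

        def search_by_zip(self, zip_code):
            # only unbooked rooms in the requested area are part of the result
            return [room_id for room_id, zc, booked in self.rooms
                    if zc == zip_code and not booked]

    def test_search_by_zip_lists_only_unbooked_rooms():
        # Given a system with booked and unbooked rooms in area 50667
        system = BookingSystem()
        system.add_room("room-1", "50667", booked=False)
        system.add_room("room-2", "50667", booked=True)
        system.add_room("room-3", "10115", booked=False)

        # When the customer searches by zip code
        result = system.search_by_zip("50667")

        # Then only the unbooked room in that area is listed
        assert result == ["room-1"]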

Now, what is the difference between acceptance criteria and the definition of done?

Now that we have defined the terms, the terse explanation from above – “the definition of done (DoD) is a property of a process-step while acceptance criteria are properties of a request for a capability” – might actually make sense.

So, the «definition of done» is a rule that applies to all the work items in the system; it is a policy on one very specific edge between two columns on a board, namely the edge separating the “done” column from the one right before it. In contrast, «acceptance criteria» answer the question “what functionality does the system have to provide for this work item to conform to the customer’s expectations?”

And so, both are necessary and neither is a replacement for the other. Acceptance criteria go with work items, and the definition of done goes with the system.

till next time
  Michael Mahlberg

P.S. In most real life settings, processes tend to have more policies than just the definition of done.

And some of them describe the expectations at process boundaries. If you use the Kanban method to model these processes, you would naturally make these policies explicit as well, as I described in an earlier post.

P.P.S.: Scrum didn't start off with the now prominent Definition of Done as a first-class citizen.

In the original books that used to be literally required reading for aspiring Scrum Masters in 2005 –Agile Software Development with Scrum [ASD] and Agile Project Management with Scrum [APM]– “Done” is on the same level as “Burndown”, “Iteration”, “Chicken” and “Pig” [APM, p. 141], and there is no notion of a “Definition of Done” in either of the books.

Even in the Scrum Guide from 2010 –one year before the DoD moved up and got its own headline– there are paragraphs like

If everyone doesn’t know what the definition of “done” is, the other two legs of empirical process control don’t work. When someone describes something as done, everyone must understand what done means.

But that is still not quite the now seemingly well-established term “Definition of Done” that we see today.

Sunday, March 14, 2021

Options can be expensive -- not only at the stock market

What do you actually get, when you buy a cinema ticket? (In those ancient times when cinemas were still a thing)

You buy yourself an option. The right --but not the obligation-- to execute an operation at a later time. In this case the right to watch a certain movie at a certain time.

The cinema, on the other hand, sells a commitment. They are (to a degree) obliged to actually show that specific movie at the stipulated time. Looked at this way, the cinema promises a considerable investment in exchange for the couple of bucks your option costs.

And while thinking in options is often helpful, it is almost always important to make sure that you're on the right side of that transaction.

Where's the problem with options?

What does that mean for our day-to-day actions? If we hand out options too freely, we quickly end up in a quagmire of "maybes" that is hard to get out of. As I mentioned in an earlier post, the whole thinking around real options, in the way Olav Maassen and Chris Matts describe it in their book "Commitment", is quite fascinating and well worth a read. But for today let's just look at one thing we don't do often enough when we use options in our personal lives.

We tend to offer options without an expiry date. And that can leave us with a huge number of commitments, and very few options left for ourselves. One of the prime offenders here is Doodle (or similar meeting planners) and the way they are often used these days. Just the other day I got a Doodle poll for 58 30-minute slots stretched over two weeks, scheduled about six months from now. And the closing date for these options was meant to be set three <sic> months in the future. So in the worst case, I would have committed to keeping 29 hours blocked for three months – which would have left me unable to actually plan anything for those two weeks during the next three months.

Of course Doodle only makes this visible – it happens all the time. Look at this scenario:

  • We could go on vacation either at the beginning of the summer break or at the end

  • I could renovate the shelter either in the beginning of the summer break or towards the end

  • Our kids could go on their "no parents camping weekend" either in the beginning of the summer break or at the end

For as long as you don't decide the first one of these, those options create a deadlock.

And, for that matter, the situation makes it almost impossible to actually decide anything else related to the summer break.

Set an expiration date to ease the pain

The solution is simple, really. But it takes some uncommon behavior to apply it. Let's look at the way the stock market handles options. Options at the stock market have a window of execution and an expiry date. Once that date has passed, the option can no longer be exercised. Merely adding this expiry date already mitigates the risk of too many open-ended options, even for the side which holds the commitment end of the deal.

A lot of options that we encounter have this attribute of an expiration date in one way or another: When we get a quote for some repair work for our house, car or even bicycle, it usually says "valid until." The same is true for business offers, medical quotes, and almost everything we consider as "business."

Adding expiration dates to the options we hand out, even outside a formal business setting, may feel a little strange at first. But it makes life so much easier – whether it's towards a colleague, a significant other, friends, or even yourself. Reducing the number of open options also reduces the number of times you have to say "I don't know yet, I might have another engagement."

till next time
  Michael Mahlberg

Sunday, February 14, 2021

How to do physical {Kanban|Task|Scrum} boards virtually

As I’ve mentioned earlier, most of the time it is a good idea to start visualization with a physical board. And very often it is a good idea to stick with it – for all the reasons that made it worthwhile to start with in the first place.

One of the biggest advantages of a physical board is the one thing that command-and-control organizations perceive as its biggest drawback: a physical board knows nothing about the process.

The fact that the physical board knows nothing about the process forces the people who use it to actually know about their working agreements. And to negotiate them. And to make them explicit in some way. Well, at least if they want to remember on Wednesday morning the gist of their long discussion from Friday afternoon.

As my esteemed colleague Simon Kühn put it all those years back: The intelligence is in front of the board.

But we’re all working remote now

Now that we’re not in a shared office space anymore, real physical boards are hard to do, aren’t they? Well – yes and no. If you look at the important advantages of physical boards, they are easy to replicate with today’s electronic whiteboard solutions.

Whether you use Google Drawings, Miro, or Conceptboard –to name just the ones I’m familiar with– is mostly a question of taste and, more importantly, of legal and company policy considerations.

Using a simple collaborative whiteboard gives people most of the advantages they would have from a physical board, while retaining the advantages of remote work.

What are the big advantages of a physical board?

A physical board can easily be changed by everyone: just pick up a marker and draw something on it. The same is true for electronic whiteboards. In both cases it is a good idea to also have a quick talk with your colleagues to make sure they are okay with whatever you added to (or removed from) the board.

One could say “individuals and interaction over workflows (process) embedded somewhere in a ticket system (tool)” – just to reiterate the first value pair from the Manifesto for Agile Software Development as an argument for “physical” boards.

Physical boards have extremely quick iterations. Trying out whether a new policy makes sense takes just a pen, a sticky note, a quick discussion and a couple of days (sometimes only hours) to see if it works. Conversely, with ticket systems even proposing a change to the admins often takes weeks and needs a predefined concept and sign-off. Not exactly agile. But with electronic whiteboards you can do just the same things you would do on a physical board. Which is why they provide tremendously quick feedback loops.

And as Boyd’s law of iteration says: speed of iteration beats quality of iteration.

If you decide to add a new status on a physical board, or to add new meta-information to a ticket, you don’t have to migrate all the old tickets. And you don’t have to coordinate that meta-information with the meta-information of all the other projects in the organization. That is another huge advantage of physical boards over ticket systems. And you can achieve the exact same independence with electronic whiteboards.

But where do the details go?

When I have these discussions in real life, I usually get a couple of questions about the details. Let’s look at two of them.

Q: On a physical board I used to write my acceptance-criteria on the back of the card. I can’t do that with an electronic whiteboard.

A: True, but then again you can put a link on the card on the electronic whiteboard, and that link can point to almost any place you like – for example, a wiki page that contains the additional information.

Q: But if I use a dedicated bug tracker (the origin of Jira) or any other ticket system I have all those nifty fields for details.

A: But do you need them on the card? Wouldn’t they be better placed on a documentation page?

My general advice here: put only meta-data on the card and all the other information in appropriate systems like a wiki. This also gives you the opportunity to put the information where it belongs in the long run, instead of putting it on the perishable ticket. On the page related to the ticket you can just link to or include that central information.

But what about metrics?

One of the things that gets lost with the “physical” board is the automated transcription of relevant data for statistics. And I have to admit that this is a real disadvantage. With an electronic whiteboard you could either write a little plugin that tracks the movement of the cards, or do a very strict separation of concerns and use different tools for different topics.

A word of caution – writing that little tool for the electronic whiteboard might not be that easy, after all. And even if you were going to do that eventually, it would be a good idea to start by collecting the metrics manually.

Either way: if you start with the metrics that you really need now and create your own tools for those –based on spreadsheets or databases; after all, you’re in the software development business– you have a huge advantage over the metrics provided out of the box by many tools: you actually know what the data means.

And some of the most important metrics are actually easy to evaluate and some of them even easier to capture.
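
To show how little tooling that takes, here is a minimal sketch in Python – assuming you jot down each card’s start and finish date by hand in a CSV file; the file name and column names are made up for this illustration:

    # A sketch only: computing lead time and throughput from manually noted
    # card data. The CSV file name and column names are invented for this
    # illustration, e.g.:
    #   card,started,finished
    #   A-1,2021-02-01,2021-02-05
    #   A-2,2021-02-02,2021-02-10

    import csv
    from datetime import date

    def load_cards(path):
        with open(path, newline="") as f:
            return [(row["card"],
                     date.fromisoformat(row["started"]),
                     date.fromisoformat(row["finished"]))
                    for row in csv.DictReader(f)]

    def lead_times(cards):
        # lead time per card, in calendar days
        return {card: (finished - started).days
                for card, started, finished in cards}

    def throughput(cards, since, until):
        # number of cards finished in the given interval
        return sum(1 for _, _, finished in cards if since <= finished <= until)

    cards = load_cards("board-metrics.csv")
    print(lead_times(cards))
    print(throughput(cards, date(2021, 2, 1), date(2021, 2, 28)))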

Just give electronic whiteboards a try – if you adhere to the same ideas and first principles that guide your usage of a physical whiteboard, you should reap almost all of the same benefits and, on top of that, get a couple of helpful extras like links on the cards and enough space for dozens of people to stand in front of the board.

till next time
  Michael Mahlberg

Saturday, January 30, 2021

The benefits of continuous blocker clustering

If you manage your work by using some kind of visualization, the chances are high that you also visualize your blockages.

One of the most common visualizations is some kind of task board that represents the subsequent stages work items go through. Assuming you have such a board it can be quite helpful to visually mark which of those work items are currently blocked. This enables the whole team or organization (depending on the scope of your board) to see where work is stuck and to do something about it.

Usually (in the physical world) these markers had the form of post-it notes in a different color, denoting the reason for the blockage. If you add just a little bit of additional information, these blockers can be used to identify typical hindrances in the flow. Information you might want to gather includes a reference to the original work item the blocker was attached to, the time(stamp) the blockage occurred, and the time(stamp) it was resolved.
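
If you want to capture that information in a structured way, a blocker record can be as small as this sketch in Python – the field names and the example cluster are made up for this illustration and not tied to any particular tool:

    # A sketch only: a minimal blocker record holding the information
    # mentioned above. The field names and the example cluster are invented
    # for this illustration.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Blocker:
        work_item: str                          # reference to the blocked work item
        reason: str                             # why the work item is blocked
        blocked_at: datetime                    # when the blockage occurred
        resolved_at: Optional[datetime] = None  # when it was resolved
        cluster: Optional[str] = None           # e.g. "Waiting for department X"

        def blocked_days(self):
            end = self.resolved_at or datetime.now()
            return (end - self.blocked_at).days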

In the Kanban community there is a practice known as “blocker clustering” where all involved parties come together at specific points in time and cluster these blockers according to things that stand out once you try to sort them and categorize them.

Blocker clusters can be things like “Waiting for department «X»” or “Problems with system «Y»”, or something completely different like “discovered missing information from phase «Z»” – that really depends very much on your individual environment. And usually these blocker clusters change over time. And so they should.

Now, here’s an idea: why only do this at certain intervals? Just as pair-programming in software development could also be called continuous code-review, the practice of blocker clustering could be done each time a blocker is resolved.

Granted, this wouldn’t make the big blocker clustering superfluous. After all, that is where all concerned parties can decide whether they want to treat the resulting blocker clusters as special cause variation –where one-off events caused the blockage– or common cause variation, where the blockage is caused by things that happen “all the time”.

The distinction between these two kinds of variation in the flow is important. One of them, special cause variation, has to be handled on a case-by-case basis, whereas the other one is a great opportunity for structural improvements in the way you work.

And this is where continuous blocker clustering really can make a difference. Instead of waiting for the big blocker clustering, people come together and decide into which blocker cluster a blocker goes as soon as it is resolved. This doesn’t have to happen in a separate meeting.

After all, the blocker (and the way it got solved) would be announced in the next board walk anyway. Which is also a good place to have this discussion.

And once you do continuous blocker clustering, you can have additional agreements, like for example: if there are more than five new blockers in a category, you can immediately (or at least very shortly afterwards) come together to discuss whether you want to treat this as a new common cause variation and whether you see a chance to improve your way of working together to address this new common cause. The number five is just an arbitrary number; depending on things like the number of people involved, throughput, etc., your numbers will differ.
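
Building on the blocker record sketched above, such an agreement could be checked with a couple of lines – the threshold of five is just the arbitrary number from the previous paragraph:

    # A sketch only: checking the "more than five new blockers in a cluster"
    # agreement. Feed it one cluster name per resolved blocker, for example
    # b.cluster from the Blocker records sketched further up.

    from collections import Counter

    def clusters_over_threshold(cluster_names, threshold=5):
        counts = Counter(name or "uncategorized" for name in cluster_names)
        # clusters that have collected more new blockers than agreed upon
        return [name for name, count in counts.items() if count > threshold]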

You could also have an agreement to hold such a meeting whenever you have collected five blockers that you couldn’t categorize into a blocker cluster in less than 2 minutes and that were therefore grouped under “uncategorized”. (Another working agreement.) The opportunities for demand-driven improvements through this approach are vast.

The same basic idea is behind the concept of signal-based Kaizen meetings, which happen whenever specific –agreed upon– circumstances trigger the need for improvement and invoke a spontaneous get-together of the involved parties. As opposed to holding improvement meetings only at fixed intervals, this makes for much tighter feedback loops and thus enables quicker improvement.

till next time
  Michael Mahlberg

(Special note for people who rely solely on Jira: it is a bit hard to implement this in an easy way in Jira, but it is possible. And also helpful. But it does include some creative use of field values, some JQL-Fu, and some dedicated Jira boards. Keep in mind that Jira boards are nothing more, and nothing less, than a visualized database query. There’s a lot of power in that, once you start moving beyond the pre-packaged solutions.)