
Sunday, January 30, 2022

There is no Agile Manifesto

Just a little reminder: what many people nowadays think is a way of living or even a way of designing whole organisations was originally something quite different…

What most people call “The Agile Manifesto” actually has a title.

It is called “Manifesto for Agile Software Development”.

And its authors propose the “Twelve Principles of Agile Software”.

  • It does not specify a defined approach to continuous improvement – TPS (Toyota Production System) does that, for example
  • It does not elaborate on good ways to optimize lead times – The ToC (Theory of Constraints) does that, for example
  • It does not express any opinion on how a company should be structured in the post-Taylor era – Sociocracy and its derivatives do that, for example. So does New Work
  • It does not tell anyone how to handle finances without upfront budget plans – Beyond Budgeting does that, for example

And all of the approaches on the right hand side came into existence long before 2001, the year the “Manifesto for Agile Software Development“ was drafted.

If you look a bit further on the original web page that launched the term “Agile” into the world, you’ll find that in the section “About the Manifesto”, as well as in the headline above the twelve principles, it has been called “The Agile Manifesto” even by its authors. Maybe this helps explain some of the confusion.

Personally, I find it very helpful to remember the context where the whole idea of “Agile” came from – maybe it’s helpful for you, too.

till next time
  Michael Mahlberg

Sunday, October 04, 2015

Scrum Master or Zen Master?

Another Scrum-Master Anti-Pattern

Well – of course there is not one Scrum-Master anti-pattern. There are dozens.

Let's look at a special one today: the Scrum-Master as the Ruler, the Project Manager, the Sovereign of the project.

To cut it short – Scrum-Masters are not.
At least they originally weren't meant to be.
Maybe sometimes they actually are – but they shouldn't be.

It is about the verb ‘master’

Here is what the dictionary on my computer has to say about the verb master:

2 it took ages to master the technique:
learn, become proficient in, know inside out, know (frontward and) backwards; pick up, grasp, understand; informal get the hang of.

If you look at it that way, a Scrum Master becomes to Scrum what a Zen master is to Zen: someone who has mastered Scrum and now is able to help others – at their request – to become more proficient in what they do.

Of course a Scrum Master has a lot of other duties as well, but the fundamental idea of ‘mastering the process framework’ in contrast to being the ‘Master of the team’ makes for a big difference in attitude.

The Scrum Guide explicitly states that ‘The Scrum Master is a servant-leader for the Scrum Team’ – perhaps this is worth taking into consideration from time to time.

till next time
  Michael Mahlberg

Sunday, February 22, 2015

Triage may seem cruel, but it could save your product

Triage is a term from the battlefield. To be more specific, from mobile battlefield hospitals.

So triage should not have a place in the – comparatively peaceful – world of software development. Or should it?

Sometimes you can’t get them all

It’s all about the question of how to make the best use of your resources in times when the demand exceeds the resources.

The original idea behind triage was to categorize the wounded into three categories (as described in more detail in the Wikipedia article on triage):

  • Those who are likely to live, regardless of what care they receive
  • Those who are likely to die, regardless of what care they receive
  • Those who would die if they did not receive immediate care, but are likely to live if they do get immediate care

and then concentrate your resources on the category where your effort really makes a difference. In the original case this means to start with the third category – cruel though it may seem for the other two categories.

Over the course of time the rationale and theory behind triage have evolved immensely, as can be seen in the article. Both the ethics and the practical application have been refined, but the basic idea is still the same: don’t waste your energy on “lost causes” when that would lead to losing causes you have a chance of rescuing.

What’s that got to do with software development?

In our day to day work we’re often confronted with situations where the requirements exceed our capacity.

If we try to do everything, even if we dynamically reschedule according to priority, some things won’t get done. That is exactly what “requirements exceed capacity” means. And this is exactly where triage fits into the model. If only 4 of 11 requirements will “survive”, make sure not to waste any effort on those that will never see the light of day anyway. Put them in a separate “folder” and, once you’ve got discretionary time on your hands, revisit that folder and check which of the requirements would still provide value.

If we concentrate on the 4 most important requirements first – the amount we can expect to complete – we end up with only 36% (4/11) of the requirements fulfilled, but the features for those four requirements can be shipped.

But it is never a good idea to try to fulfill all 11 requirements with only enough resources for 4 – we would end up with 11 requirements each fulfilled to 36%. And because a feature that is only 36% complete is in most cases not shippable, we would end up with zero deliverable features.
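The 4-of-11 arithmetic above can be sketched in a few lines of code. All names and the business values are made up for illustration: rank the requirements, complete the ones within capacity, and park the rest.

```python
def triage(requirements, capacity):
    """Split requirements into those to finish completely and those to defer."""
    ranked = sorted(requirements, key=lambda r: r["value"], reverse=True)
    return ranked[:capacity], ranked[capacity:]

# hypothetical requirements with made-up business values
requirements = [{"name": f"REQ-{i}", "value": v}
                for i, v in enumerate([8, 3, 9, 1, 7, 2, 6, 4, 5, 2, 1], start=1)]

do_now, deferred = triage(requirements, capacity=4)
print([r["name"] for r in do_now])  # the four worth finishing completely
print(len(deferred))                # 7 requirements parked for a later review
```

The point of the sketch is the hard cut: the seven deferred requirements get no effort at all until discretionary time is available.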

No, it does not mean you can skip tests or refactoring

So if we have to cut functionality, can’t we go faster by skipping unit tests, documentation or refactoring? That would be just silly – just as if the doctors in the battlefield hospital would omit washing their hands, as Uncle Bob would probably remark. On the contrary: applying triage on a requirement level should give you the time and space to work at your optimum on those requirements that can survive.

till next time
  Michael Mahlberg

Monday, January 26, 2015

There are some situations where “agile” is the default mode...

Of course there are a lot of situations and places where agile approaches have become the default mode, and looking back at the history of software development with regard to iterative-incremental approaches, that is only a re-discovery anyway.

But in the big – a.k.a. Enterprise-Level – companies it’s mostly not the case. Or only in name but not in action.

Sometimes even “The Enterprise” goes into “agile”-mode

There is a situation when even Enterprise-Level software projects switch to an agile mindset. At least in most things but the name.

All of a sudden certain things start to happen:

  • Business people re-prioritize on a regular basis (in short intervals even)
  • Business people take the time to describe and verify the requirements
  • Rollouts are allowed with very little overhead (sometimes called ‘hot-fixes’ in this context)
  • Developers ask the people from whom the requirements actually came what they want now
  • etc.

Crisis-mode?

The situation I’m referring to is especially common for in-house software that is developed for (and often in) large companies. Not all – only those projects that go into a crisis-mode phase at the end of the development phase.

I mean the time between the official end of the project and the time the software is actually in use. After the project has been declared finished, but before it is adopted for company wide use. When the last glitches are eliminated – which often takes up way more time than ‘planned’.
This crisis-mode is actually when everybody switches to things that work. And surprisingly these things are an interesting subset of what the agile manifesto mandates. (Of course I’m referring to the second page here)

Unfortunately not all practices survive in crisis mode. Things like “sustainable pace” and “regular reflections” sometimes don’t seem so important then, but even the “continuous attention to technical excellence” is often more prevalent in crisis mode than in the day-to-day business during the run-time of the project.

So, here‘s an idea: If your enterprise shows the same crisis-mode behaviours, why not use this as a wedge to introduce more agile approaches?

’till next time
  Michael Mahlberg

Sunday, December 28, 2014

Timeboxing and Zeno's paradox

Every now and again I run into arguments about the rigidity of time-box boundaries. Basically it goes like: "But perhaps we could have finished what we wanted to do in 2 hours if we just gave it 5 minutes more. Do we really want to discard 120 minutes worth of work just to save 5 minutes?"

You never have enough time

According to the best known of Zeno’s paradoxes, Achilles (who was regarded as the fastest runner of his time) will never be able to overtake a tortoise with a hundred-step head start.
That is exactly the problem with the extension of time-boxes. Even if one would allow a maximum extension of 10% of the original time box to try to “finish it”, it would likely still be unsatisfactory in the end.
Like the tortoise in the paradox (give Wikipedia a short glance if you haven't already), the time “needed” for completion of the task would be extended an infinite number of times – however, after a couple of extensions, by ever smaller amounts. So in reality, and for all practical purposes, the timebox would last for 1.1111… times the time that was originally allotted. Which of course is a very specific time: 2 hours and 13⅓ minutes, if I am not mistaken.
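The extension spiral converges because it is a geometric series: each extension is a tenth of the previous step, so the total approaches 120 × 1/(1 − 0.1) = 133⅓ minutes. A few lines of code make the convergence visible:

```python
def extended_timebox(minutes, ratio=0.1, rounds=60):
    """Total length of a timebox where every extension is `ratio` times the previous step."""
    total = 0.0
    step = float(minutes)
    for _ in range(rounds):  # enough iterations to converge numerically
        total += step
        step *= ratio
    return total

print(round(extended_timebox(120), 4))  # 133.3333 minutes, i.e. 2 h 13 1/3 min
print(120 / (1 - 0.1))                  # closed-form limit of the geometric series
```

Both the iterative sum and the closed form agree – which is exactly the point: "infinitely many" extensions still add up to a finite, rather unimpressive, amount of extra time.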

So the point is definitely not to extend the timebox. It's got to be something different.

Parkinson's Law to the rescue?

As Cyril Northcote Parkinson stated in his famous law:

“work expands so as to fill the time available for its completion”

The funny thing is that the opposite seems to be true as well: if there only is a fixed amount of time, as soon as people realize that it is really fixed, they tend to come up with something usable in that time – effectively applying “design to budget” approaches to things like meetings as well.

And – after people get accustomed to working in timeboxes – the results usually show up shortly before the time is up.

And if they don't, sticking to the timebox will help you to plan more realistically the next time around. Just don't fool yourself with 2h timeboxes that tend to last for 2:15 ... ish ...

So – just stick to the timeboxes – use them to your advantage instead of fighting them! (And remember to size them realistically!)

till next time
  Michael Mahlberg

Sunday, November 16, 2014

Sustain! The fifth S of the 5S


(Shitsuke, 躾, according to Wikipedia)

Whether you look at Hirano or the Wikipedia article on the 5S approach, the last pillar or practice is the hardest: Shitsuke, 躾, which the Wikipedia article translates as “Sustain”, while Hirano translates it as “Discipline”.

Let‘s once again have a look at the implementations that are listed in the Wikipedia article:

  • To keep in working order
  • Also translates to "meaning to do without being told"
  • Perform regular audits

These factory related implementations seem to translate quite easily into practices that are also known from agile software development processes or the teachings of clean code development or pragmatic programming, but are they really?

To keep in working order, for example, can be nicely mapped to practices like continuous integration (the practice, not the tooling) or the “no broken windows” rule.
Performing regular audits is at the heart of almost every agile method – be it as a retrospective or as an operations review (as long as you don’t call it a post-mortem).

But in my opinion and experience this is only part of it. The hardest thing about this pillar is that it is about discipline. About cleaning up even if I already worked late. About sorting things even when there is time pressure. About removing the mess I created while working, even while the sun is shining and the waves are luring. About agreeing on standards even though everybody seems to do it "almost the same way".
About just really following through on the other four Ss.

And for me this is the most important yet hardest to master of the five "S".

Till next time
  Michael Mahlberg

Sunday, November 02, 2014

Standardize! The fourth S of the 5S

(Seiketsu, 清潔, according to Wikipedia)


Standardize what?

Even though the Wikipedia entry refers to this practice as “Standardize!”, I prefer – once again – Hirano‘s definition of this technique as “Standardized Cleanup”, which makes it somewhat clearer what the subject of the standardization is.

Wikipedia suggests things like the following for the workplace on the shop floor:

  • Maintain high standards of housekeeping and workplace organization at all times
  • Maintain cleanliness and orderliness
  • Maintain everything in order and according to its standard.

Now, from my point of view, standardized cleanup blends in perfectly with the XP practice of ubiquitous automation and the current state of software development tools, where it is quite easily possible to actually define standards in such a way that compliance with those standards can be enforced or even maintained automatically.

On a coding level there are numerous things to be standardized

  • coding conventions
  • checkin comments
  • build procedures
  • key-bindings (especially if you're doing pair-programming with changing pairs)
  • Concepts to adhere to (e.g. SOLID and things like that)
  • Line-Endings ... (even though that may seem trivial)

And a lot of those standards could be validated by means of the development tools and the source-code management tool (e.g. git hooks or the hook mechanisms available in other source code management systems).
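As a tiny illustration of such automated validation, here is a sketch of the kind of check a git pre-commit hook could run for one of the standards above (consistent line endings). The hook wiring itself (an executable script in `.git/hooks/pre-commit`) is assumed; the check is just a pure function over file contents.

```python
def line_ending_violations(content: bytes):
    """Return the 1-based numbers of lines that end in CRLF instead of plain LF."""
    violations = []
    for number, line in enumerate(content.splitlines(keepends=True), start=1):
        if line.endswith(b"\r\n"):
            violations.append(number)
    return violations

sample = b"clean line\nwindows line\r\nanother clean line\n"
print(line_ending_violations(sample))  # [2]
```

A real hook would run this over every staged file and exit non-zero when violations are found, so the commit is rejected until the standard is met.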

But there are also a lot of things you could standardize on other levels...

  • User Story formats
  • Requirements descriptions
  • The quality of acceptance criteria

What else would you standardize?

Till next time
  Michael Mahlberg

Sunday, October 19, 2014

Shine! The third S of the 5S

(Seiso, 清掃, according to Wikipedia)


Cleanliness or shine?

According to Hirano the third pillar is called “cleanliness”, a term which doesn't help very much in clarifying the implications for the knowledge-worker or software-development organization.

Let's have another look at the article from Wikipedia.

  • Clean your workplace completely
  • Use cleaning as inspection
  • Prevent machinery and equipment deterioration
  • Keep workplace safe and easy to work
  • Can also be translated as "sweep"

Once again this seems easy – or at least obvious – when the workplace is a workbench, a car pit or any other environment where ‘real’ or physical dirt accumulates. But how do you attain cleanliness at the workplace of a knowledge-worker?

In my opinion, when your knowledge work involves computers, the sweeping might include:

  • Checking the local working copy of your source code control system for orphaned files
  • Removing temporary files
  • Removing unused build and configuration files
  • Deleting invalid contacts and obsolete phone numbers or addresses
  • Or even such mundane tasks as running anti-virus software regularly
  • Keeping your synced folders (e.g. Dropbox) synced
  • Keeping Backups
  • Removing unused branches in the source code control system

If your work also includes actual creation of code there usually is a lot of cleaning up to do at the end of a coding session. That cleaning up could include (but is not limited to) things like

  • Removing duplications
  • Removing experiments
  • Removing trace and debug statements that are no longer needed
  • Adding trace and debug statements for maintenance purposes

Even apart from work directly related to computers there is a lot of ‘sweeping’ possible:

  • Re-evaluating your planned work (e.g. backlog grooming in many scrum-inspired environments) – weed out the stuff you don't need anymore
  • Removing old versions of documents
  • Removing outdated links from the documentation (e.g. Wiki-pages)

And so on – just get rid of stuff that doesn't add value any more or is outdated. Having superfluous ‘things’ usually confuses people more than it helps.

What are your suggestions for sweeping the workplace of knowledge-workers?

Till next time
  Michael Mahlberg

Sunday, October 05, 2014

Straighten! The second S of the 5S

(Seiton, 整頓, according to Wikipedia)

I am still not convinced that it was a good idea to only use English words that start with an ‘s’ for all the pillars of the 5S system in the Wikipedia explanation of the concept (and some others).
According to Hirano, who wrote one of the defining books on 5S, the second pillar is called ‘orderliness’, which – in my opinion – is much easier to interpret for software development purposes.

Ideas from production (as quoted from Wikipedia)

  • Arrange all necessary items in order so they can be easily picked for use
  • Prevent loss and waste of time
  • Make it easy to find and pick up necessary items
  • Ensure first-come-first-serve basis
  • Make (the) workflow smooth and easy
  • Can also be translated as “set in order”

The difference between ‘sort’ and ‘straighten’ is very subtle – especially when we think about software development or other knowledge work – but if we consider the alternative translations ‘organization’ and ‘orderliness’, the difference becomes much clearer in my opinion.

How to apply these ideas to software development

While ‘organization’ calls for the removal of unnecessary clutter (be it in your File-System, on your physical desktop, on your computer’s desktop or anywhere else) ‘orderliness’ goes a step further and requires us to set the things that are not unnecessary – one might say those items that are necessary – in a definitive, understandable, reproducible order.

Let‘s look at other options to bring more orderliness into software-development

One of the things I tend to see here is the “automate ruthlessly“ or “ubiquitous automation” concept. Or, as they put it in the old days:

  • The first time you do something, you just do it manually.
  • The second time you do something similar, you wince at the repetition, but you do it anyway.
  • The third time you do something similar, you automate.

But just using the tools of the trade in a more orderly fashion can make a huge difference. Using tags to categorize files (if your file-system supports such a thing), using a defined pattern for file names (not only for source code) and generally not only weeding out stuff but also ordering your tools and material falls into this category.

As James O. Coplien quotes in the foreword to the Clean Code book, there is the old American (?) saying of “A place for everything and everything in its place”, which really captures the whole concept very well for me.

What I propose in addition to Cope‘s explanation of this concept (a piece of code should be where you expect to find it) is to apply this idea to everything related to the value chain – from the first idea to the end-user actually interacting with the capability of the system that represents this idea.

  • Where do the requirements belong?
  • Where do the acceptance criteria live?
  • Where would I find the Swahili language translation of the help files?
  • Where is machine specific configuration information placed? And how about user specific configuration?
  • and so on...

Now what would you propose to do in our day-to-day work to get our software-development more ‘orderly’?

Till next time
  Michael Mahlberg

Sunday, September 21, 2014

Sort! The first S of the 5S ...

...(Seiri, 整理, according to Wikipedia)

When applying the 5S-Approach to software development it is important to not just take the Wikipedia definition verbatim, but to also look behind the scenes.

So what does "sort" mean in software development?

First of all – it is not "sort". Hirano, who wrote one of the defining books on 5S, describes this pillar as "organization" – the verb, not the noun.

Ideas from production (quoted from Wikipedia)

  • Remove unnecessary items and dispose of them properly
  • Make work easier by eliminating obstacles
  • Reduce chance of being disturbed with unnecessary items
  • Prevent accumulation of unnecessary items
  • Evaluate necessary items with regard to debt/cost/other factors.

When you think about it, this is very close to "decluttering your life" – but with a focus on the workplace. (you might want to look up “100 items or less”)

How to apply these ideas to software development

Does “organize” mean you have to have a clean desktop? Either the one on your computer or the one your keyboard is placed upon?
Does “organize” imply you should not have any personal items on your desk or walls?
Does “organize“ require you to not have old printouts of code on your desk?
No, no and... yes! Actually it does mean that you don't have any old, obsolete printouts on your desk. This is where things are quite similar between the workplace in a factory and a workplace in knowledge work – don't put too many things you don‘t actually need in your workplace. Neither in the physical workplace nor in the virtual workplace on your computer.

  • Are you constantly clicking on the same buttons? Buttons which don't actually add any value to your work? Eliminate those clicks.
  • Is your computer‘s desktop cluttered with old shortcuts? Remove them! Or move them to a special folder where they don't interfere with the day-to-day work.
  • Do you have all of the Microsoft products installed but only ever use one of them? Sort at least the icons so that the unused ones are out of the way.

Take the time to organize your personal workplace – it pays off in spades.

The same holds on the product level:

  • Do you have hundreds of files that don't serve any purpose any more? Just delete them! If you're not sure whether it is safe to delete them, this might be a good time to take a good look at your source-code management system...
  • Do you have local copies of old versions of your source tree, so that you can look up certain things? Once again a good option to familiarize yourself with the source-code management system of your choice. And then delete those copies. (And while you‘re at it you might want to have a look at git to get some more leeway with respect to source-code management)
  • Do you use Google to look up how the functions of your programming language, libraries and frameworks work? Try thinking about compiling the relevant information and making it accessible locally to avoid things like google driven architecture (German article).
  • Do you have dozens of auxiliary (self-made) frameworks and libraries? Try combining them while weeding out the unused and obsolete code.

I guess you get the drift – organizing your work in the software world can be tremendously helpful and certainly is a good starting point on the way to a streamlined, lean and agile software development process. But of course it is not the only thing that’s necessary – then again, it is called ‘5S’, so there is more to come.

Till next time
  Michael Mahlberg

Sunday, September 07, 2014

Gradually changing a ‘system’ (team, company, corporation etc.) – give the five ‘S’s (5S) a try

I outlined earlier that I do not believe in the Nuremberg Funnel or any other direct way to instill values in people's heads.

But if there is no "Upload Values" routine in the system, what are the chances to change team and company behavior?

The Five-S approach

Amongst other things the 5S approach has been used for a long time in conjunction with lean production to introduce the lean mindset by applying practices.

The term 5S comes from 5 Japanese words that happen to have fitting English translations which also start with S. As the Wikipedia article states, these words are:

  • Seiri (整理) – Sort
  • Seiton (整頓) – Straighten
  • Seiso (清掃) – Shine
  • Seiketsu (清潔) – Standardize
  • Shitsuke (躾) – Sustain

While the Wikipedia article explains (a little bit) how to apply these "phases", as they are called in the article, there is more to these concepts. In other works (e.g. Hirano's "5 Pillars of the Visual Workplace") they are called pillars, which fits the original idea more closely.

Unfortunately these ideas are very close to the problem domain from which they were born – which is manufacturing in this case.

Like Kanban, which has been re-applied to software development and a lot of other types of knowledge work by David J. Anderson, the 5S approach also needs to be re-applied to the field of software development to make it an effective tool for this kind of environment.

So I'll look into the concrete projections of the 5S for a software developing company over the course of the next 5 posts.

Till next time
  Michael Mahlberg

P.S.: Of course there are other approaches to changing a company's mindset – some even complementing the 5S approach, like the Toyota Kata, as described by Mike Rother in his book – but the 5S system gives very good guidance on an appropriate level of abstraction IMHO.

Sunday, July 27, 2014

After the fact - a new role for function points?

Just the other day I was chatting with a friend about the place of function points in software development.

While they are traditionally used as an approach to estimate the effort required to build a system, from my point of view this role has changed with the current prevalence of lean and agile methods.

Using function points to estimate effort in new project work

... is (IMHO) a difficult feat, because there would be a serious amount of functional decomposition necessary, which in turn would require extensive analysis, which in itself would be a serious step towards BDUF. Furthermore it would require so much effort that a separate project would be necessary to get the funding for the work.

And this approach is neither very agile nor very lean. It does not address the knowledge gain – both about what the project is about and on how to go about the solution – during the project.

Making work between projects comparable with function points

... on the other hand seems quite feasible to me. Usually, after we have finished the work (and of course in an agile environment we have finished – really finished – at least some work after the first iteration), we do have measurable building blocks that can easily be measured and counted (in function points).

Using function points to plan big projects

... is not such a good idea from my point of view. (Even when it is considered viable because epics seem too hard to plan with planning poker)
In my opinion, using function point analysis for up-front planning is almost dangerous – for the aforementioned reasons of extensive up-front work (and implicit commitment to solutions).
If estimating epics seems too hard, there might be other reasons involved that would still be valid if function point analysis were used. But with the kind of up-front analysis that often seems appropriate for function point analysis, these points might become hidden behind too much detail. The problem with planning poker, of course, is that the "consensus amongst experts" that has been derived from wideband delphi depends on a certain level of detail and upon a sufficient number of available experts from the different areas of expertise.

In the end, all that planning poker does is condense the formal approach of wideband delphi into a seemingly more informal approach based on verbal communication. Establishing a basis for estimation and installing a cross-functional group of experts is still necessary – even if the process that can take weeks in wideband delphi is condensed to a relatively short interactive meeting. Such a group could – in a software development setting – consist of e.g. marketing, software architects, database engineers, UX specialists, testers, quality assurance, technical writers, and so on.

If the requirements can‘t be estimated well enough, that problem is often rooted in too little experience in the domain, or in missing decomposition into manageable – and understandable – units, for example stories on the next (more concrete) level of abstraction.
While function point analysis also enforces the decomposition of the requirements, it tends to drive the analysis towards a mindset of "What can be counted in function point analysis?" instead of a mindset of "What is a capability of the system that can actually be leveraged by an end-user and that I can estimate?" Therefore there is a genuine risk of trying to operate in the solution space before even the problem space has been explored well enough.

So, instead of opting for function point analysis when epics seem un-estimatable, I would rather suggest breaking the epics down in such a form that a solid comparison with things that have been done before is possible. One approach might be to at least name the stories on the next, less abstract level. And additionally to walk through a couple of user journeys.

Using function points to plan small increments of existing software

... on the other hand is a surprisingly good idea in my book.

The questions that have to be answered to get to the function points revolve around things like:

  • How many (already existing!) screens have to be modified and how complex are they?
  • How many tables are involved?
    (The data model and its physical representation usually also exist with existing, running software)
  • How many interfaces have to be touched? Are they inbound or outbound?
    Remember: The system is running already, so the interfaces are either already in place or an explicit part of the requirement.
  • How many functional blocks of what complexity are affected?

All of these issues are cleanly cut when adding small, well-defined requirements to an already existing system and thus can be counted quite easily. When implementing completely new epics, trying to put numbers to these issues requires at least the creation (a.k.a. design) of a conceptual data model and a functional decomposition of the requirements – things you would rather do during the discovery of the system, during and alongside the implementation.
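Answering questions like those above boils down to a weighted count. The sketch below uses the commonly published IFPUG-style complexity weights; treat both the weights and the example change as illustrative assumptions, not a full, standards-compliant function point analysis.

```python
# IFPUG-style weights for unadjusted function points (illustrative)
WEIGHTS = {
    "external_input":     {"low": 3, "average": 4, "high": 6},
    "external_output":    {"low": 4, "average": 5, "high": 7},
    "external_inquiry":   {"low": 3, "average": 4, "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_function_points(counts):
    """counts: list of (component_type, complexity) tuples."""
    return sum(WEIGHTS[kind][complexity] for kind, complexity in counts)

# e.g. two modified screens, one affected table, one outbound interface
change = [
    ("external_input", "average"),    # modified entry screen
    ("external_output", "low"),       # modified report screen
    ("internal_file", "low"),         # one affected table
    ("external_interface", "average"),
]
print(unadjusted_function_points(change))  # 4 + 4 + 7 + 7 = 22
```

With a small change to a running system, each tuple in `change` can be read off the existing screens, tables and interfaces – which is exactly why the counting is so much easier after the fact than up front.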

My conclusion:
Function points can be really ugly critters – but used to the right ends they can be a tremendously efficient means.

'til next time
  Michael Mahlberg

Sunday, July 13, 2014

Testing: How to get the data into the system

Even though the correct term for a lot of the “testing” going on would be verification, let’s just stick with “testing” in the titles for the time being...

General verification workflow

The general way to verify that a piece of software does what it is meant to do seems quite simple:

  • Formulate the desired outcome for a defined series of actions
  • Put the system in a known state (or the sub-system or the “unit” – depending on your testing goal)
  • Execute the aforementioned defined actions
  • Verify that the desired outcome is actually achieved
  • [Optional] Clean up the system [1]

While this process sounds simple enough, there are enough pitfalls hidden in these few steps to have spawned a whole industry and produced dozens of books.

In this post I want to tackle a very specific aspect – the part where the system is put into a “known state”.

Putting the system into a known state might involve several – more or less complex – actions. Nowadays, where it's possible to automate and orchestrate the whole creation and setup of machines with tools like Vagrant and Puppet, it is even possible to set up the whole environment programmatically.

You might not want to do that for each unit test, which brings us to the question of when to set up what – which I will try to address in some future post.

The problem with the data

However big or small the test-setup is, one thing that is very hard to avoid is providing data.

The state of the system (including data) is often called a fixture, and having those fixtures – known states of the system with reliable, known data – is a fundamental prerequisite for any kind of serious testing, be it manual or automated.

For any system of significant size, if there are no fixtures there is no way to tell whether the system behaves as desired.

Getting the data into the system: Some options

In general there are three ways to get the data into the system:

  • Save a known state of the data and import it into the system before the tests are run.
    In this scenario the important question is “which part of the data do I load at which time“ because the tests might of course interfere with each other and probably mess up the data – especially if they fail. Consider using this approach only in conjunction with proper setups before each test, amended by assertions and backed up by “on the fly” data-generation where necessary.
  • Create the data on the fly via the means of the system.
    Typically for acceptance tests this means UI-interaction – probably not the way you want to go if you have to run hundreds of tests. Consider implementing an interface that can be accessed programmatically from outside the system and that uses the same internal mechanisms for data creation as the rest of the software.
  • Create the data on the fly directly (via the datastore layer).
    This approach has the tempting property that it can be extremely fast and can be implemented without designing the system under test specifically for testability. The huge problem with this approach is that it duplicates knowledge (or assumptions) about the system's internal structures and concepts – a thing that we usually try to avoid. Consider just not using this approach!
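To make the second option concrete, here is a minimal sketch of a programmatic fixture interface that funnels test data through the system's own creation logic instead of poking the datastore directly – all names (`UserService`, `register`, `load_fixture`) are hypothetical:

```python
# Sketch of option two: test data is created through the same code path
# the application itself uses, never by writing to the datastore directly.
# UserService, register and load_fixture are invented, illustrative names.

class UserService:
    """Stands in for the system's own data-creation logic."""
    def __init__(self, store):
        self.store = store

    def register(self, name):
        if not name:                       # the same validation production uses
            raise ValueError("name required")
        self.store[name] = {"name": name}

def load_fixture(service, names):
    """Build a known state via the system's own means."""
    for name in names:
        service.register(name)

store = {}
load_fixture(UserService(store), ["alice", "bob"])
assert set(store) == {"alice", "bob"}
```

Because the fixture goes through `register`, any validation or derived data stays consistent with what the real system would produce.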

So, do you actually have fixtures? And how do you get to your data?

’til next time
  Michael Mahlberg


[1]

One can either put the effort in after the test or in the setup of the test – or split the effort between the two places – but the effort to make sure that the system is in the correct state always has to go into the setup. Cleaning up after the test can help a lot in terms of performance and ramp-up time, but it cannot serve as a substitute for a thorough setup.

Sunday, April 06, 2014

Some models don’t need to show off…

Bubbles don’t crash – or so they say.

As most of us know, this doesn’t apply to stock-market bubbles. Or housing bubbles. This adage – “Bubbles don’t crash” – is targeted to a kind of bubble that’s specific to the software world.

The argument that “bubbles don’t crash” refers to the ‘bubbles’ that are sometimes used when modeling system behavior – be it informally on a white-board or in a tool. It’s just another way of asking the WISCY question: Why Isn’t Somebody Coding Yet? Both adages show quite clearly that not everybody sees a huge value in extensive modeling.

Even though my own enthusiasm for modeling everything has some very clear boundaries, I do advocate building (visual) models as a means of communication, as a way to verify assumptions, and for a whole lot of other reasons. (And please use a standardized notation and put legends on everything that goes beyond that notation if you want to use the model at some point in the future. Like: in an hour from the moment you create it.)

So, yes, I do think that it’s a good idea to stop drawing pictures at some point and start putting things in more concrete representations, but what I don’t understand is why some people shy away from everything that is called a model with a rendition of ”Bubbles don’t crash“ on their lips.

The majority of models we encounter are much more than only a picture – the formula p * v = const, for example, is a model of the behavior of gas under pressure. It means that with twice as much pressure an ideal gas will have half the volume. This is called the “Boyle–Mariotte law” and is one of the first models every scuba diver has to learn. Because it also means that with half the pressure the volume will be twice as much. Which can have serious consequences if the gas is the air in your lungs and you are not aware of this model.
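The model is even small enough to execute – a tiny illustrative rendition:

```python
# Boyle–Mariotte as an executable model: p * v = const for an ideal gas.
# (Temperature and other real-world factors are deliberately ignored.)

def volume_at(pressure_bar, const=1.0):
    return const / pressure_bar

# Twice the pressure (surface -> 10 m depth) means half the volume ...
assert volume_at(2.0) == volume_at(1.0) / 2
# ... and half the pressure means twice the volume – the scuba lesson.
assert volume_at(1.0) == volume_at(2.0) * 2
```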

Of course, in reality the gas does not behave exactly as this model predicts – there are numerous other factors (like temperature, for example) that also have an impact, but for the purposes of the context the model is good enough – and not graphic at all.

And there are a lot more models like this. The so-called velocity in Scrum is one for example. Just to get back to software development. And so is Little’s law, famed in the Kanban community.
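Little's law, for instance, is small enough to write down as one line of code – the numbers below are invented for illustration:

```python
# Little's law as a model: average WIP = throughput * lead time.
# The numbers are made up, purely for illustration.

def average_wip(throughput_per_day, lead_time_days):
    return throughput_per_day * lead_time_days

# Finishing 2 items a day with a lead time of 5 days implies
# roughly 10 items in progress at any moment.
assert average_wip(2, 5) == 10
```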

Another “model” that we come across very often is the state-machine – known to some from Petri nets, to others from the theory of information systems, and to yet others from the UML state diagram. A lot of ‘cybernetics’ is actually done by state-machines, and in many programming environments modeling behavior through state-machines is so ubiquitous that we don‘t even notice it any more. Actually, every time someone implements the ‘State’ pattern from the Gang of Four pattern book they build a model of desired behavior – even though they do not implement a state-machine (but that would be a topic for another post).
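A minimal explicit state-machine model might look like this sketch – the states and events are invented for illustration:

```python
# A tiny explicit state-machine as a behavioral model.
# States and events are invented, illustrative names.

TRANSITIONS = {
    ("todo", "start"): "doing",
    ("doing", "finish"): "done",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "todo"
state = step(state, "start")
assert state == "doing"
assert step(state, "finish") == "done"
```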

And even if it is not about programming, but about the process around it – building a model is quite helpful and makes it possible to verify your assumptions. You think you can complete twice as many features with twice as many people? The model for that could be features_completed = number_of_team_members * time. And that model can be verified very easily. (Or – as I would predict in this case, according to Fred Brooks‘ seminal book The Mythical Man Month: falsified…)
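That naive model can be written down and confronted with data directly – a sketch with a made-up calibration parameter:

```python
# The naive linear staffing model, written down so it can be tested.
# features_per_person_week is a made-up calibration parameter.

def predicted_features(team_members, weeks, features_per_person_week=1):
    return team_members * weeks * features_per_person_week

# The model claims twice the people yield twice the features ...
assert predicted_features(10, 4) == 2 * predicted_features(5, 4)
# ... which is exactly the claim real data (cf. Brooks) tends to falsify.
```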

So, from my point of view, embracing models and the idea of modeling is quite helpful – even if most models are not visible.

’till next time
  Michael

Sunday, March 23, 2014

In Kanban the kanban is not the kanban - What?!?


In the early stages of the introduction of Kanban systems many organizations struggle with the implementation of the pull signal and with how the cards represent it.
In my experience a lot of this confusion is caused by semantic diffusion and the fact that “The Kanban Method” (for software development and other knowledge-work) often gets over-simplified and reduced to a loose translation of the word kanban and the original 3 rules for a Kanban (capital K) system.

Basics

Let’s look a bit deeper into this.
As David Anderson points out in his famous blue book, the word kanban means (more or less) «signal card», and such a card is used to visualize a so-called pull request in a traditional kanban environment.

Now there is a lot of information put into one little sentence. What is a traditional kanban system anyway? What is a pull request? And what’s different in our Kanban (capital K) systems?

A “traditional” kanban system is the kind of kanban system that has been in use at the production plants of Toyota and the like to optimize the flow of physical work products through the factory. Here the upstream station – that is, any station that lies before another station in the value stream – gives out kanbans which represent their capacity. These kanbans act as tokens for the downstream stations to request new material – or, to pull the material.

But what is different in “our” Kanban systems? Well, the reason for the capital K is the fact that we’re working with a different kind of representation in “The Kanban Method” (for software development and other knowledge-work). On page 13 of the aforementioned book, David points out that

«… these cards do not actually function as signals to pull more work. Instead, they represent work items. Hence the term ‘virtual’» (emphasis mine)

Virtual pull signals

So what about the pull signal in this scenario? Isn’t it all about establishing a pull system? Well, it’s covered in work. Literally. Almost. But at least it can be covered by work as the following illustration shows.

A very simple board

A kanban board in use

Some work moved

A kanban board in use

More work moved

A kanban board in use
As you can see: you can’t see any pull signal - only the work.

That’s because the pull-signal is actually hidden behind the work and not really visible. At least not in this rendition. It is possible to make it visible, but only for very simple boards. All that’s needed here is a little annotation.

A very simple board with annotation

A kanban board annotated with pull signals
A kanban board annotated with pull signals…

Board filled with work

An annotated board in use step-1
The same Kanban board in use – all the pull signals hidden by the work. Looks quite similar to the non-annotated board, doesn’t it?

Some work moved into production

An annotated board in use step-2
So now, when the cards are moved, the pull-requests become real visual signs.

Work getting pulled all over the board

An annotated board in use step-3
And when the pull-requests are fulfilled, that in turn reveals more pull requests, and so on.

A more complex board

Actually most evolved Kanban boards contain at least some queue-columns – often differentiating between “doing” and “done.” Now the annotation approach doesn’t work any more because the pull signal becomes completely virtual.

Let’s have a look at this as well.

The same work on a more elaborate board

Board with explicit “done” columns
Work in progress shows up in the doing columns, of course

Some work is done

Board with explicit “done“ columns after some work is done
Even though some cards are moved around, no WIP limits are broken and no pull request is issued (WIP limits in this example go across doing and done)

Invisible pull signal

A pull signal is implied but not visible yet
Now that a work-item has left its WIP-boundaries a pull request is implied - but not at all visible.

Virtual pull request

The pull signal in Numbers
In fact the pull-request is only ‘visible’ by comparing the actual Work-In-Progress – in this case 2 – with the WIP-Limit, which is (3) in this example. Hence the pull request can be calculated but is not visible to the naked eye. Which fits in nicely with the notion of a “virtually signalled pull request”. This can be translated to “virtual kanban”. And of course virtual kanbans live on ”virtual kanban boards” in “virtual kanban systems”.
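That comparison is simple enough to express in a few lines – a sketch of the “virtual kanban” arithmetic (function and names are illustrative):

```python
# The "virtual kanban" arithmetic: a pull signal exists whenever the
# actual work in progress drops below the column's WIP limit.

def virtual_pull_signals(wip_limit, work_items):
    """Number of implied pull requests for one board column."""
    return max(wip_limit - len(work_items), 0)

# WIP limit 3, two cards left in the column -> one virtual pull signal
assert virtual_pull_signals(3, ["card A", "card B"]) == 1
# Column full -> no pull signal, implied or otherwise
assert virtual_pull_signals(3, ["A", "B", "C"]) == 0
```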

’till next time
  Michael

Sunday, March 09, 2014

Don't be too SMART - Goals, Targets and Lighthouses

The idea of SMART goals has such appeal to many people, that they try to put everything in these terms. And I have to admit that I'm a big fan of the SMART concept myself.

Having goals that are:

  • Specific
  • Measurable
  • Actionable
  • Realistic
  • Timed

is very helpful when I try to decide whether to start a certain task or not. Whenever I hold an operations review or a retrospective I remind people to think about the SMART acronym whenever they refer to actions.

As an example from the software development world, “We should clean up our code” is not a very SMART goal if you look at it. “We want to reduce the size of every function to less than twenty-five lines by applying ‘extract method’ until next month” may not speak very well to the non-initiated, but it surely is SMART.
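Because that goal is measurable, it can even be checked mechanically – a rough sketch using Python's `ast` module (the helper name and the 25-line threshold are just illustrative):

```python
# Rough sketch of measuring the "every function under 25 lines" goal.
# long_functions and the threshold are illustrative, not a real tool.

import ast

def long_functions(source, max_lines=25):
    """Names of functions spanning max_lines or more source lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length >= max_lines:
                offenders.append(node.name)
    return offenders

assert long_functions("def short():\n    return 1\n") == []
```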

Sometimes I may overshoot in this quest for clarity. Not all goals have to be perfectly SMART. Especially with long-term goals it is sometimes a good idea to aim for a “goal” that is not really reachable but that can show the way nonetheless. Some goals are targets that you want to hit; for some goals you want to pass between the goalposts (or over the finish-line, for that matter).

Some goals really should be treated the way fishermen treat lighthouses. You want to move towards them when it's appropriate, but you can never reach them and probably don't even know their specifics – they still help you find your way. (Besides: when you're in a seagoing vessel and do reach them, bad things happen, but that may be pushing the analogy too far.)

So the picture in my mind has changed over the years and nowadays I try to use the SMART concept whenever I deem it appropriate, but I also try to find enough lighthouses on the way.

TTFN
   Michael

Sunday, January 26, 2014

Busy Products - Idle Workers

Once you start investigating workflows from the point of view of the work-items instead of from the workers' perspective, interesting things start to show. One tool for doing this is the "value stream analysis" – one of the tools of the lean approach.

One of the fascinating things that came up again when Tom ran such a simulation at the Agile BI-Conference in December '13 is a fact that is often the root cause of a certain kind of workplace unhappiness: the difference between the idle-time of the person doing the job (nearly no idle time at all) and the idle-time of the ‘item under construction’ – or product – which might easily approach or even exceed 80%.

If we take one requirement as the work item and map out its way through two weeks of development in a simple two-state graph we see that there are only small peaks of work while the work-item itself is idle most of the time.

The workers on the other hand – who try to fit as many requirements as possible in their time-box – are always in a busy state!

So, if it takes forty days to deliver a net worth of one workday it is no wonder that perceptions of workload might differ 'a bit' depending on your vantage point.

After all: however busy I may feel, as soon as I try to do five things in parallel, this also means that whenever I work on one of them, four of them must lie around idling – totaling an average of 80% idle-time per item. When I think about this it makes me want to introduce more measures to limit my work in progress every time!
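The arithmetic behind that 80% fits in one function – a minimal sketch:

```python
# With n items "in parallel", each item idles whenever one of the
# other n - 1 items is being worked on.

def average_item_idle_share(n_parallel_items):
    return (n_parallel_items - 1) / n_parallel_items

assert average_item_idle_share(5) == 0.8  # the 80% from the text
```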

So, have a good time mapping out the value-streams of the work-items that are most important to you – you never know what you might find out.

Cheers,
   Michael

Sunday, January 12, 2014

Not all simulations scale

I really like simulations as a way to introduce engineering practices. According to the old proverb I hear and I forget; I see and I remember; I do and I understand, there is hardly a better way to teach the concepts and mechanics of an approach than by actually living through it.

But some parts of simulations can be extremely misleading. Some things scale down very nicely, others not at all. Even in physics it's not possible to really scale down everything – that's why wind-tunnels can't always be operated with normal air but need special measures to achieve a realistic environment.

But back to simulations in the field of knowledge-work...
I have run the getkanban simulation (v2) a couple of times now and found that it does a very good job of scaling down the mechanics while illustrating some of the concepts in a very tangible manner. Except for the retrospectives or operations reviews.
With the Kanban Pizza Game the effect was even stronger. When we ran it at the Limited WIP Society Cologne (German site) we really liked the way it emphasized the tremendous effect that can come from limiting the work in progress and other aspects of the Kanban Method - except for the retrospectives.
With 5 minutes for a retrospective, and given the fact that speeding up the conversation doesn't really work, it is hard to hear everyone's voice in a retrospective. And of course – as Tom DeMarco points out in "The Deadline" – people also can't really speed up their thinking. It takes a certain amount of time to process information.
What's more: scaling down retrospectives or operations reviews this much gives people who have never experienced a real retrospective a wrong impression – and totally contradicts the concept of Nemawashi!

And this is true for most of the aspects that involve human interaction – root cause analysis, value stream mapping, A3-reporting, designing Kanban systems (as opposed to using them) etc. This is one of the reasons Tom and I designed the Hands-on Agile and Lean Practices workshop as a series of simulations alternating with real-time interjections of the real thing (e.g. a 30 minute value-stream mapping inside a 20 minute simulation, so that people really can experience the thought-process and necessary discussions).

Nowadays I try to balance my simulations in such a way that the systemic part of an aspect is displayed and emphasized through the simulation while the human aspects are given enough space to be a realistic version of the real thing.

What do you think?

Cheers
  Michael

Sunday, September 15, 2013

The Spice Girls on requirements engineering and root cause analysis

Just the other week I came across an interesting thread on the "kanbandev" mailing-list. It was all about "have you asked the Spice Girls question?" – which seems somewhat odd if you consider that this mailing-list is about managing software development projects...
Turns out it is only about the second verse of "Wannabe": "So, tell me what you want, what you really, really want" [you can (and should) ignore the rest of the song for the purpose of this post].
That one sentence actually captures remarkably well what differentiates a shallow requirements or root cause analysis from a thorough one: Asking beyond the obvious to find out the need that fuels the obvious.
According to Jabe Bloom, Stephen Bungay formulated the question a bit more formally:

[...] This should be encapsulated succinctly in the form... We should do What in order that Why.
For example... We should implement Continuous Delivery in order to minimize Lead time.

But "Tell me [...] what you really, really want" seems to stick a lot better than just "why".
So thanks, Jabe Bloom, for raising this!

Sunday, September 01, 2013

Post-mortems
are for dead projects - what to do with the living?

"He's dead, Jim" – the famous quote from "Bones" McCoy in the original Star Trek may be one of the shortest post-mortems one can think of, but it's not necessarily what you want in an Agile retrospective. What you might want instead is the attitude of Star Trek: The Next Generation's Captain Picard – "Make it so".

From post-mortem to retrospective

But then again: the history of retrospectives is a history of misunderstandings...
One – plausible – version I have heard goes like this:
When the idea of project retrospectives came up in the context of early space missions, the idea was in fact to learn for the next iteration. But "the next iteration" would have been the next iteration of a rocket design, and the previous iteration would by then have been reduced to a lump of metal shreds after its successful test flight. Under these conditions it must have seemed quite fitting to borrow the term post-mortem – after all there wasn't much left moving at the "landing site". Developing safe ways of landing was not quite as high on the list of priorities as getting off the ground in the first place.
Gradually the scope of post-mortems expanded and soon the term was used to describe the process for all kinds of projects and was even adopted as a general term in commercial software development.
In 2001 Norman L. Kerth wrote a book titled Project Retrospectives: A Handbook for Team Reviews in which he made a strong case against the use of the then-accepted term post-mortem. Before Esther Derby and Diana Larsen's book Agile Retrospectives came out, this was the definitive guide on retrospectives for me (now both books share that place) and I always tried to follow his emphasis on the fact that we don't do retrospectives for the past but for the future.

So let's use the term "Retrospective"?

At least there is a somewhat common understanding of the term.
When Tom and I laid out the structure for the Hands-on lean and Agile practices course we decided to go with the term "retrospective" – partially because that term is more widely recognized and people would have a clearer picture of what we talk about in that part, and partially because "retrospective" is the term commonly used in descriptions of the mechanics of such a review which makes it a good search candidate when looking for additional information.

How about "operations reviews"?

Nowadays I'm contemplating suggesting that we name this section of the course "operations review" – a term used in the lean and Kanban literature. Although there are many very tight definitions of that term pointing out that an operations review is not at all comparable to a retrospective, these "clear" definitions are contradictory, and some (a lot) actually suggest a certain similarity between operations reviews and (healthy) retrospectives.

Now what is it with this nitpicking about words?

Is it really important what we call this activity in our daily work? If we start with the intention in mind – looking forward, wanting to influence the future – won't the rest follow?
Well, modern brain science has its own take on this.
As humans, our expectations and actions are influenced by the language used. In one very well-known experiment, people were told that they were tested on word-understanding while in reality their physical behavior was the subject of the experiment. One group had to work through a series of words with connotations of old age while the other group worked with words implying agility and youth. Sure enough, the "old word" group was measurably slower on their way out of the testing facility. Even though this specific experiment on 'priming' has been disputed lately, the effects of priming are also one of the things the whole advertisement industry thrives upon. One last argument I would like to quote for this influence of words are the Implicit Association Tests from Harvard University. In these tests unconscious beliefs can be discovered by measuring the time it takes to identify words after the brain has been 'primed' with simple but powerful concepts like 'good' and 'bad'.
I don't know about you, but if I have the chance to 'prime' the whole team conducting the activity, I would rather prime them with a concept that directs our attention to the future than set them up for looking at a project that's still running with the mindset of a retro-spective, "looking back from a distance". Although much closer to "being in the present" than the mindset of a post-mortem, it still sets us apart from the things we want to influence. To my ears an "operations review" sounds much more like a thing I would undertake to influence what I'm doing right now.
I'm open for suggestions, but whenever possible I propose that we use words like "operations review" instead of "retrospective" or "post mortem" – at least until we find an even better way to put it.