Sunday, August 21, 2016

Who likes to be measured? (And what is it you can get out of it?)

Fascinatingly, the issue of time tracking always brings up heated discussions – but is it really time tracking itself that is the issue?

Time tracking in the Pomodoro Technique

In the Pomodoro Technique – a very simple but efficient approach to time management that used to be freely available – time and behavior tracking are essential. (Unfortunately the free description is no longer easily available – even though you could search for “pomodoro technique cheat sheet” – and the Wikipedia entry doesn't reflect on the different ways interruptions are measured and handled in the technique – but the book is well worth a read.)

Time tracking in the Personal Software Process (PSP)

The PSP uses time-tracking on a personal basis to improve learning and get to know yourself better. And it also employs behavior tracking.

Time tracking in Sports

Sport – even for the amateur – would be almost unthinkable without tracking.
“How fast did he go?” – “I can't tell you, our company policy strictly rules out personal performance tracking” doesn't make for a great conversation.

And even for people who don't do their sport competitively, measuring their data seems to be important – at least the huge number of tracking devices for speed, steps, heartbeat, cadence, etc. seems to imply that people do want to know how to get better.

When does time tracking fail?

  • When it is used to control people
  • When it is used to distribute budgets - especially after the fact

Time tracking as a way to get better

In my experience, people who actually track how they spend their time for their own good tend to get better in a lot of ways.

How you personally use that information depends strongly on context. If it fits your needs, you might use the true handling times of an item to calculate your actual flow efficiency as a team. Or you could use the average time you spend in meetings to convince your boss that you should have fewer meetings. Some people like to use the delta between their personal estimates and the actual time they spent on the items to improve their estimates – just for themselves, without telling anybody. And if the situation calls for it you could do something completely different with your data. But very often just having the data enables you to get better at what you're doing.
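As a small illustration of the flow-efficiency idea: flow efficiency is commonly defined as the time actively spent working on an item divided by its total lead time. The sketch below assumes you already have those two numbers from your tracking; all item IDs and hours are made up.

```python
# Sketch: flow efficiency from tracked handling times (all data hypothetical).
# Flow efficiency = time actively worked on an item / total lead time.

def flow_efficiency(touch_time_hours, lead_time_hours):
    """Fraction of an item's lead time spent actually working on it."""
    return touch_time_hours / lead_time_hours

items = [
    {"id": "A-1", "touch": 6.0, "lead": 40.0},  # 6h of work over a 40h lead time
    {"id": "A-2", "touch": 3.0, "lead": 12.0},
]

for item in items:
    eff = flow_efficiency(item["touch"], item["lead"])
    print(f"{item['id']}: {eff:.0%}")  # A-1: 15%, A-2: 25%
```

Even a toy calculation like this tends to be eye-opening: single-digit flow efficiencies are not unusual for knowledge work.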

So – how about giving it a try? (This article was written at a speed of 425 words in 55 minutes, which comes down to an average of 7.7 words per minute.)

till next time
  Michael Mahlberg

Sunday, August 07, 2016

What is a “must have”?

Language is important – almost always.

One of my pet peeves is the word “must.”

As in “We must have the feature «x»”, or in “We must do it in this manner.”

From my understanding these are incomplete statements. The missing part is about what happens if we don't.

Complete statements would include the consequences, and thus allow discussion of the topic. When “We must have the feature «x» to avoid paying a £5 fine,” we could easily decide that it is not worth the effort. Whereas if “We must do it in this manner to avoid paying a €5,000,000 fine,” the decision is likely to be different.

So, IMHO a “must” must have explicitly stated consequences in order to make it a helpful addition to a conversation.

till next time
  Michael Mahlberg

Sunday, July 24, 2016

‘Yes’ is always wrong

Words don't mean the same thing to everyone. This holds even more so in intercultural projects. There have been extensive studies into that – yielding some surprising tools along the way.

I have often found that the simple word ‘yes’ is a cornerstone of misunderstandings.

And for me that is true not only in intercultural contexts.

While a ‘yes’ to the question “Do you know Oliver Twist?” may mean “I've heard of the novel” to some, it could mean “I've written my Ph.D. thesis on that subject” to others. And somebody else may know a person called Oliver Twist.

Because I've fallen prey to the ‘yes trap’ way too often, I nowadays try to clarify my answers each time I catch myself answering ‘yes.’ And I try to avoid questions that can be answered by a simple ‘yes.’ Asking “How do you know Oliver Twist?” makes for both a clearer answer and a more interesting conversation.

till next time
  Michael Mahlberg

Sunday, July 10, 2016

Why is there never enough time to do it right?

There is this adage “Why is there never enough time to do it right, but always time to do it over?” capturing the observation that in the long run it almost never pays off to try to do something quick and dirty. Once again my friend and esteemed colleague Tom Breur made an excellent point on this effect in his blog post.

The flip side

But there is a flip side to that. While I seriously loathe the “quick and dirty” approach so many people still try to make work, I am also wary of “getting it right the first time.”
Because “getting it right the first time” just isn't in the mindset of the scientific method. Nor of PDSA/PDCA. Nor of inspect and adapt. Nor does it cater to the nature of knowledge work.

Can we ever get it right the first time?

Of course it is possible to get things right the first time. For example if we're talking about launching a product. Or if it is about adhering to our code of conduct. And – and that is especially important in my point of view – we can work in what we consider to be the right way. Using the right tools for the job. Sharpening the saw in between felling trees. Just generally being good craftsmen.
Take the iPhone for example. Taking into account that it factually redefined the market, it was obviously “done right the first time.” Unlike, for example, the Newton, which had a solid followership but just didn't get traction with the mainstream.
Or take electric light, the classic example. More than 1,000 experiments until there was a working lightbulb. But when it went public, it seemingly was done “right” – judging from the way cities look at night these days.

So what’s the trick?

I don’t know the trick to getting it right the first time, but in all the efforts I have seen that managed to “get it right the first time” there was a lot of learning through experimenting (which means a lot of failures) before things actually went ‘live’.

If you want to get it right...

... remember that in most knowledge-work and especially in software development, it is necessary to do it wrong a couple of times in order to find out what right really means.

...and still be willing to do it over.

Because over time you will learn. And add funny things like Copy & Paste – which is what they did on the iPhone, after it had broken all sales records for smartphones at that time.

So, yes: Take your time to get it right. And please do not expect everything to be right the first time!

till next time
  Michael Mahlberg

Sunday, June 26, 2016

Estimates are bad (and I mean it!)

Recently I wrote about my belief that it is a good thing to estimate. After all, that's one of the differentiators that distinguishes us from other forms of life on this planet.

There is a huge number of quotes one can draw from to see the difference:

“Plans are nothing, planning is everything”

-- D. Eisenhower (maybe)

“In preparing for battle I found that Plans are almost useless, but planning is indispensable”

-- perhaps D. Eisenhower or Moltke

“No plan ever survives the first contact with the enemy, but no one survives the first contact with the enemy without a plan”

-- probably Moltke or Clausewitz

It's not so easy to find out who really is the originator of each quote – and I haven't found one from Sun Tzu yet, although I am sure there is one. But the important thing is that all those quotes make a very clear distinction between the act of planning and the actual plan. And there seems to be a common understanding that the activity is important while the artifact itself – “The Plan” – is very brittle. And has to be adjusted accordingly. Often.

In my experience the same is true for estimates: don't hesitate to adjust them to adapt to a changing situation, but gather enough information and do enough planning to start with estimates that seem plausible to yourself.

Think about harvest planning in the middle ages – having leftovers at the end of winter didn't kill villages, stretching the food for the second half of the winter didn't kill villages. Not knowing how much demand for food there is and living in splendor until Christmas would have been disastrous.

So here the art of the possible would be to do “just enough” estimation.

How much effort would that be? :-)

till next time
  Michael Mahlberg

Estimation is good (and I mean it!)

I am a huge fan of Pawel Brodzinski's estimation poker cards and the whole “#NoEstimates” movement. But then again, I think at least the amount of estimation Pawel suggests is necessary. And as even the long-time proponent J.B. Rainsberger said farewell to #NoEstimates, I find it helpful to distinguish between “harmful estimation” (my esteemed friend and colleague Tom Breur recently wrote about these) and helpful estimation.

Harmful estimation

Estimation can become harmful for a lot of the reasons pointed out by the followers of the #NoEstimates movements or – for example – by the reasons Tom states in his article I mentioned above.

In my opinion estimation becomes especially harmful when

  • it is used to put pressure on people
  • an effort is made to get “exact estimates”
  • estimates are treated as “exactimates”

Helpful estimation

But – and it is a very strong "but" – just because it can be misused (and often is), it doesn't mean that there is no good in estimation.

A little while ago I wrote about planning already – how planning is what made it possible to evolve from hunter-gatherers to modern men.

And for me the same holds true for estimation. If there is no estimation, then there is no way to know whether it makes sense to even invest the effort to work on something. Planning and estimation go hand in hand. Even in "predictable" environments things sometimes only go partially according to plan. We have estimation all the time:

  • estimated time of arrival
  • estimated miles on this tank
  • estimated payout
  • etc.

We couldn't possibly handle our world in all its complexity without “estimating.” Estimating if the car will fit into the parking spot. Estimating the width of a creek to jump over (or not).

So yes, I think we do need estimates. We just have to make sure that we know where we need them. And why.

till next time
  Michael Mahlberg

Sunday, June 12, 2016

The difference between a sprint-backlog and a product-backlog

Those who learned about Scrum the old-fashioned way might call me names for the title of this article, but since I run into more and more people out there who conflate the two terms, I think a clarification might be beneficial.

A product backlog is about what you plan to accomplish

“The Product Backlog is an ordered list of everything that might be needed in the product […]” (Scrum Guide 2013)

And it is good practice to keep these things in the bounds of the INVEST properties. This implies that the product-backlog does not prescribe how the requirements are met, but what should be achieved.

A sprint backlog is about what you plan to do

“The Sprint Backlog is the set of Product Backlog items selected for the Sprint, plus a plan for delivering the product Increment and realizing the Sprint Goal. The Sprint Backlog is a forecast by the Development Team about what functionality will be in the next Increment and the work needed to deliver that functionality into a “Done” Increment.” (ibid., emphasis added)

Big difference!

So yes, sorry mates, if you want to do Scrum (and not Scrum-But) you'll have to do what used to be called “Sprint Planning 2”: sit down and plan your work.
There are some options to handle things differently – like breaking down the board and “kanbanizing” that part of the workflow – but then these approaches might no longer be exactly what should be called Scrum.

Cheers
Michael

Sunday, May 29, 2016

If you can't avoid Jira, at least have an admin on your team

I used to be one of the people in favor of electronizing your task board using Jira with the Greenhopper plugin – of course only after starting with a paper version. Greenhopper has since morphed into something called Jira Agile, accompanied by something called Jira Kanban.

And I have to tell you: I am very wary of Jira Agile and Jira Kanban.

Especially if there is a central administrator who is in charge of the tool-handling.

Agile processes (and lean approaches like Kanban as well) almost always include a section on how to improve the process – called a retrospective in some processes or an operations review in some others.
This means that the process is meant to be changed. Often. From within the team.

Modeling the team’s process in a tool that is so complex that it makes economic sense to have someone outside the team cater for its administration makes it almost prohibitively harder to change the process.

And a lot of the admins I’ve met are so overburdened by too many projects that they have to optimize. For example by using Jira workflows (i.e. process-definitions) for more than one project. Which might be good, because: reuse!

But then I hear things like “our Jira admin doesn't allow that” or “I think we'll have word on that from our Jira admin in a couple of days” or “yes, we could try that – but I'm afraid it might break some of our reports.”

How is a team self-organized when they have to ask for someone else’s permission to change the process? Not so much, in my opinion.
And if they are worried that their process might break, then obviously “working software” isn’t the primary measure of progress any more.
If they have to wait “a couple of days” to find out if they can implement the process changes they came up with in their process improvement meeting, then – I would say – their process isn’t exactly agile any more.

Don’t become that team. Keep control of your process – even if that means you have to administer yet another tool.

till next time
  Michael Mahlberg

Sunday, May 15, 2016

Does PDCA equal “Plan the work, work the plan (and control that)”? – I don't think so!

I don't know how this came about, but a little while ago a friend of mine (whom I highly regard, both personally and as a project manager) came up with the notion that PDCA implies planning, having the plan executed (do), checking the results, and (re)acting if they are not up to standard.

Maybe I am mistaken and that is what Deming really meant, but the way I understand his text – and for example this talk by Deming himself – it is quite the opposite.

The way I understand it, it is an almost direct implementation of the scientific method:

  • Plan: Formulate a hypothesis and design an experiment for its verification (on a controllable scale, including the definition of an expected outcome)
  • Do: Execute the experiment (in a more or less controlled environment)
  • Check: Verify the results from the experiment with the expected outcome
  • Act: Either implement the changes from the hypothesis or don't – depending on the outcome of the experiment
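The four steps above can be sketched as a tiny experiment loop. This is only an illustration of the interpretation, not a canonical definition of PDCA; all function names and the toy hypothesis are made up.

```python
# Sketch of PDCA read as the scientific method (all names hypothetical).

def pdca_cycle(hypothesis, experiment, expected_outcome, implement_change):
    # Plan: formulate the hypothesis and the expected outcome up front.
    plan = {"hypothesis": hypothesis, "expected": expected_outcome}
    # Do: execute the experiment (on a controllable scale).
    result = experiment()
    # Check: verify the actual result against the expected outcome.
    confirmed = (result == plan["expected"])
    # Act: adopt the change only if the experiment confirmed the hypothesis.
    if confirmed:
        implement_change()
    return confirmed

# Usage: a toy experiment that happens to confirm its hypothesis.
changes = []
confirmed = pdca_cycle(
    hypothesis="smaller batches reduce lead time",
    experiment=lambda: "lead time went down",
    expected_outcome="lead time went down",
    implement_change=lambda: changes.append("work in smaller batches"),
)
print(confirmed, changes)  # True ['work in smaller batches']
```

The crucial point the sketch makes visible: the change is only rolled out *after* the check, not planned, executed, and then policed.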

Am I wrong in my interpretation?

Cheers Michael

Sunday, May 01, 2016

The Return of the Mainframe and the Arrival of Cyberpunk

Back in the day, people who wanted to use a computer needed to go to very special places to access those computers.
Actually, "access" doesn't quite represent the same concept we have of "accessing a computer" nowadays. Today, accessing a computer refers to direct interaction via touchscreen, keyboard, mouse, or even voice. Back in "the old days" it meant punching holes in decks of cards (on special machines) and handing them over to so-called operators. Then you had to wait a while – hours at least, if not days – to collect the results after the stack had been processed by the computer.

A little while later, time-sharing operating systems with online transaction processing were introduced, and it became possible to interact directly with the machines. In a way. If you call accessing a computer via a terminal – hooked up over a 300-bit-per-second modem line, capable of displaying 25 rows of 80 characters each – "accessing."

This was the landscape of computing when the idea of the "home computer" and later the "personal computer" was born. People were just yearning to explore this world of programming and informatics, and merely accessing the mainframe on the terms of the owners of said mainframe wasn't giving them the freedom they wanted.

Thus the whole home- and personal-computer universe came into existence.

Because people wanted their own computers. And wanted to use them how they wanted.

Now everybody – given the time, knowledge and still a considerable amount of money – could make their computers do what they wanted.

Let's skip a couple of decades and see the internet (and not only the world wide web) bloom. Created from all the wild experimenting, the unfeasible ideas, the "we'll see if it works", the "I think it should look like this" that individually owned, run, administered and programmed computers brought forward.

One of the biggest success-factors (the 'killer-app') for a long time was e-mail. Electronic mail that was sent from one machine to another over an intelligent network of interconnected servers. A network that found the currently best route from sender to recipient. Computers that delivered those mails based on a very simple standard (RFC 822) independently of the concrete system that was on each side of this connection.

And what happens today?

We get things like Google Mail and Facebook that run best when messages are sent while you're using their server (a.k.a. distributed mainframe) via a web browser (which is actually just a more sophisticated terminal than that old 25x80 TTY) on their conditions.

And of course mail is just one example here – office suites that run only "in the browser", graphic software with "a web interface", etc. are all following the same trend.

Looks like we have the same old mainframe back in our yards – just with a shiny new color and so many bells and whistles that we're (mostly) just lulled into going with the convenience of the solution. And only a few people nowadays care about the freedom of their data. And guess what: some of the stuff those people are concerned about, say data security and encryption, is being made illegal – or at least hard to achieve.

For example, owning tools which allow me to verify my system's integrity is becoming illegal in some places nowadays, and the development of such "hacker tools" has been made a punishable offense...

So we live in a time where average people perform most of their information-related tasks using corporation-owned computers at the discretion of the corporations, while system programmers and developers of safety-critical software are on the verge of criminalization – pretty much what cyberpunk authors predicted decades ago.

Just my 2¢...

Cheers Michael

Sunday, April 17, 2016

Scrum is abstract

Over the last decade or so I have found that more and more software developers struggle with Scrum.

Especially since so many people treat agile the way it is described in the half-arsed agile software development manifesto or the dark agile manifesto.

So if I change Scrum (my process) - is it still Scrum (the approach)?

One thing that puzzles a lot of people is the fact that Scrum has to be amended beyond the 16 pages of its official definition. The definition itself says so.
Yet on the other hand the battle cry of a huge body of people who call themselves agile professionals is "You are doing Scrum-But."
So where does that leave the teams? Torn between "Inspect and Adapt", "Responding to Change", and "Don't do Scrum-But."

Is it Scrum-But, Scrum-And or something completely different?

Of course it is possible to philosophize about this question a lot, but for the experienced software developer it should be easy to grasp. If you have a background in "clean code" (as Uncle Bob calls it), or are familiar with the SOLID principles for other reasons, the following picture – metaphorically speaking – says it all:

Scrum as an abstract class

If you look at it as a programming construct, Scrum is an abstract class – it defines behavior, data, interactions, etc., but some parts are just defined via template methods, and stuff has to be specified in the concrete implementation.

And for the concrete implementation it makes sense to apply the same rules that make sense when designing object oriented software.

So your subclass should only interact at the designated points without meddling with the innards of the superclass (OCP, the Open/Closed Principle). And your implementation should be usable wherever the super-type is usable (LSP, the Liskov Substitution Principle).

I'm not sure about the other aspects of SOLID, but transferring this thinking for the LSP and the OCP to Scrum implementations – together with the idea that Scrum is an AbstractSuperClass – helps to answer the question whether we're facing a Scrum-But or not.
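The metaphor can be sketched in code. This is a toy illustration, not an official mapping of Scrum to classes; all class and method names are made up. The framework fixes the overall flow as a template method, while the team-specific part stays abstract:

```python
# Toy sketch of "Scrum as an abstract class" (all names hypothetical).
from abc import ABC, abstractmethod

class Scrum(ABC):
    """The 'abstract class': the structure is fixed, team-specific parts are abstract."""

    def run_sprint(self):
        # Template method: the overall flow is defined by the framework...
        self.plan_sprint()
        increment = self.build_increment()
        self.review(increment)
        self.retrospective()
        return increment

    # ...but the concrete 'how' must be supplied by the implementation (the team).
    @abstractmethod
    def build_increment(self): ...

    def plan_sprint(self): print("sprint planning")
    def review(self, increment): print(f"reviewing {increment}")
    def retrospective(self): print("retrospective")

class OurTeamProcess(Scrum):
    # Extends only at the designated point (OCP) and stays usable wherever
    # Scrum is expected (LSP) -- no meddling with run_sprint() itself.
    def build_increment(self):
        return "increment #1"

print(OurTeamProcess().run_sprint())
```

In this picture, a Scrum-But would be a subclass that overrides `run_sprint()` itself or breaks its contract – which is exactly what the LSP warns against.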

TTFN Michael

Sunday, April 03, 2016

DevOps – Karl Marx to the rescue?

Karl Marx made a very distinct point on the fact that in certain societies the means of production are not in the hands of the workers.

Whether the means of production should be in the hands of society (or the state) is a question beyond the realm of this blog, but one thing I have come to experience over and over: not having the means of software production (i.e. computers and operating systems) in the hands of the developers is a surefire way to extremely inflated development costs.

In software development the means of production need to be in the hands of the workforce

What is so special about computers and software developers?

Who “owns” the computers in software development?

In my work I have come across numerous organizations of all different sizes, but one thing most of them have in common – even a lot of the smaller ones – is some specific department responsible for the supply of hardware. Especially computing hardware. In a lot of cases these departments are also responsible for ensuring that the computers work.
What could be wrong with that? A dedicated department of highly specialized people taking care of the computers for the rest of us. Well... the problem is in the details, of course. Developers are unlike normal, “average” workers. While the biggest part of the workforce uses software, developers create the software. While the biggest part of the workforce is using computers as tools, developers are using them as material (well, and also as tools, but there is a fundamental difference in the approach).
Now the problem is that – more often than not – the department supplying the hardware is treating software developers just the same way they are treating the rest of the workforce. And that in turn leads to dire problems.

So what's the problem?

  • A software developer developing software that will eventually be installed has to install that software on their own computer numerous times. It is bad when they have to wait for someone from the “IT department” to enter the password each time they try to do that. (real story)
  • A software developer will have to stop running programs on servers from time to time. It is bad if they have to create tickets in the customer-service system to have the “IT department” issue a command to stop a process. (true story)
  • A software developer has to explore new tools from time to time. It is bad if they have to go through a lengthy clearance process to get permission to install a trial version. (true story)
  • and so on...

DevOps to the rescue?

From what I noticed about the DevOps movement in its first couple of years, DevOps tried to bring exactly this (or rather: the cure to this) to the development teams – the capability to really own their machines and employ their means to the full extent of their education. (After all, they are highly trained professionals in terms of computer usage.)

I wonder if that is still the focus of the current DevOps movement. I am skeptical, but I strongly hope so.

So... do you own your means of production? Do you want to? What can you do to get there?

till next time
  Michael Mahlberg

Sunday, March 20, 2016

How to find the cards

When teams turn towards kanban as a process control method, I often see people standing in front of the board asking “Where the #*$+%+ is the card with task #12345?”

Is that a problem of physical boards?

One could easily argue that this is because it is much harder to find a card on a board than it is by using an electronic search engine.
And I guess that is true.
But that is not the problem here. In my opinion the real problem here is that the board is not yet used as a tool but only as a reporting system.

If you use the board as a tool you never have to search a card

If you use the board as a tool to drive your work, it is hard to imagine a situation where it would be necessary to really search for a card. Since you would be working on one, and only one, topic at a time, you would know exactly where the card you're working on right now is. And when it's finished you would move it to the respective «done» column and select a new card on the board according to the station you're working on and the prioritized cards on the board.
In most cases that card would be the topmost card in the «done» sub-column upstream from the one you're going to work in. And once you select this card as your new item to work on, you would move it into the «doing» column of your selected station (e.g. development).
Since this is the card you are working on (perhaps together with somebody else, but in your responsibility), that card won't move on its own. So you know exactly where it is. And once you're finished with it, you would move it to the respective «done» column... (see above)
Rinse and repeat.
No need to search for a card.
Unless you let your work be driven by another system – then of course the question arises “which is the leading system?”
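The pull rule described above can be sketched as a tiny board simulation. The column names and card IDs are invented for illustration; the point is only that a card moves through your hands, so you always know where it is:

```python
# Sketch of the 'pull' rule: take the topmost card from the upstream «done»
# sub-column and move it into your station's «doing» column.
# Board layout and card IDs are invented for illustration.

board = {
    "analysis/done": ["#12347", "#12346"],  # topmost card first
    "development/doing": [],
    "development/done": [],
}

def pull_next_card(board, upstream_done, my_doing):
    """Start new work only by pulling the topmost upstream 'done' card."""
    card = board[upstream_done].pop(0)
    board[my_doing].append(card)
    return card

def finish_card(board, my_doing, my_done, card):
    board[my_doing].remove(card)
    board[my_done].append(card)

card = pull_next_card(board, "analysis/done", "development/doing")
# While you work on it, the card sits in exactly one place:
assert card in board["development/doing"]
finish_card(board, "development/doing", "development/done", card)
print(board["development/done"])  # ['#12347']
```

Nothing in this loop ever has to *search* for a card – which is precisely the argument of the post.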

till next time
  Michael Mahlberg

Sunday, March 06, 2016

We don't do sprints any more ...

... we do iterations instead.

A couple of months ago a client of mine started an effort to work in an agile way – inspired by Scrum, as far as possible.

One of the first things we agreed upon was to avoid scrum-speak whenever possible. For example we don't (yet) deliver (potentially) working software to the end-user at every iteration. So we do not call the iteration a sprint.

There are many other things –like not having a truly cross-functional team etc.– that would make it a plain lie to say that we are using scrum in this project, so we don't call it scrum. We don't call the process-coach a scrum master and so on.

The surprising by-product: better communication

The most fascinating thing here for me was the effect the wording had on upper management. "Cancelling a sprint because the sprint-goal is no longer attainable" is somewhat hard to discuss with people outside the agile world without some in-depth discussion of the terms. "It makes no sense to continue with this iteration because the thing we wanted to achieve with it is no longer achievable" is much easier to grasp.
[Language remark: And this was in German, by the way – for those who know the language, "Wir müssen den Sprint cancelln, weil wir das Sprint-Goal nicht erreichen" is way harder for outsiders to understand than "Wir brechen die Iteration ab, weil das Iterations-Ziel unerreichbar geworden ist".]

There is an adage from Jerry Weinberg which comes to my mind here:

«If you call the tail of a dog a leg – How many legs does the dog have? Still only four – just calling the tail a leg doesn't make it a leg!»

So, how about you? Do you do Sprints? Do you do Scrum? Do you have a Product Owner? Really?

Why not try an experiment? Instead of using vaguely fitting terms from a process framework, start using terms that describe what you’re doing in “layman's terms” and see what happens.

You might spark a whole new conversation.

till next time
  Michael Mahlberg

Sunday, February 21, 2016

Don't neglect software craftsmanship

Even though the software craftsmanship manifesto doesn't really stand on its own (it just adds to each of the values from the first page of the agile manifesto), the general points made there are now more true than ever.

In Germany alone, for example, the number of people writing software doubles every three years. That means half of the people writing software in Germany have less than three years' experience in doing so.

Three years is exactly the time it takes for a baker in Germany to get through their apprenticeship and become a “Bäckergeselle” (a journeyman baker). Same for a carpenter.

In the kinds of crafts where there still is an apprenticeship period, those three years are spent honing the relevant craftsmanship skills – is the same true for software developers?

How much time do you spend on honing your craftsmanship?

Books like “The pragmatic programmer” or “Clean Code” do a great job in explaining how software craftsmanship can be embodied – but how often do we take the time to follow their advice?

I don't know for sure about other countries, but at least in Germany there is the non-profit organization Softwerkskammer, which is dedicated to fostering software craftsmanship through activities outside work, focusing on professionalism in the area of software development.

And in an area that moves as fast as ours, and with respect to the Japanese mindset of keeping “a beginner's mind” all of your life, for me it pays off to be active in, or at least participate in, such an organization.

So – have fun with your local chapter of the closest software craftsmanship organization.

till next time
  Michael Mahlberg