Sunday, May 29, 2016

If you can't avoid Jira, at least have an admin on your team

I used to be one of the people in favor of digitizing your task board using Jira with the GreenHopper plugin – of course only after starting with a paper version first. GreenHopper has morphed into something called Jira Agile, accompanied by something called Jira Kanban.

And I have to tell you: I am very wary of Jira Agile and Jira Kanban.

Especially if there is a central administrator who is in charge of handling the tool.

Agile processes (and lean approaches like Kanban as well) almost always include a section on how to improve the process – called a retrospective in some processes or an operations review in some others.
This means that the process is meant to be changed. Often. From within the team.

Modeling the team’s process in a tool so complex that it makes economic sense to have someone outside the team handle its administration makes it almost prohibitively hard to change the process.

And a lot of the admins I’ve met are so overburdened by too many projects that they have to optimize. For example, by reusing Jira workflows (i.e. process definitions) across more than one project. Which might seem good, because: reuse!

But then I hear things like “our Jira admin doesn’t allow that” or “I think we’ll have word on that from our Jira admin in a couple of days” or “yes, we could try that – but I’m afraid it might break some of our reports.”

How self-organized is a team that has to ask for someone else’s permission to change its process? Not very, in my opinion.
And if they are worried that their reports might break, then obviously “working software” isn’t the primary measure of progress any more.
If they have to wait “a couple of days” to find out if they can implement the process changes they came up with in their process improvement meeting, then – I would say – their process isn’t exactly agile any more.

Don’t become that team. Keep control of your process – even if that means you have to administer yet another tool.

till next time
  Michael Mahlberg

Sunday, May 15, 2016

Does PDCA equal “Plan the work, work the plan (and control that)”? – I don't think so!

I don't know how this came about, but a little while ago a friend of mine (whom I highly regard, both personally and as a project manager) came up with the notion that PDCA implies planning, having the plan executed (do), checking the results, and (re)acting if they are not up to standard.

Maybe I am mistaken and that is what Deming really meant, but the way I understand his texts – and, for example, this talk by Deming himself – it is quite the opposite.

The way I understand it, it is an almost direct implementation of the scientific method (see the sketch after the list):

  • Plan: Formulate a hypothesis and design an experiment for its verification (on a controllable scale, including the definition of an expected outcome)
  • Do: Execute the experiment (in a more or less controlled environment)
  • Check: Verify the results from the experiment with the expected outcome
  • Act: Either implement the changes from the hypothesis or don't – depending on the outcome of the experiment.
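
To make that reading concrete, here is a tiny sketch of one such cycle in Python. It is purely illustrative – every name and number in it is made up, and it reflects my reading of the cycle, not anything from Deming himself:

    # One Plan-Do-Check-Act cycle, read as a scientific experiment.
    # Every name and number here is made up for illustration.

    def pdca_cycle(hypothesis, run_experiment, expectation_met):
        # Plan: the hypothesis *and* the expected outcome exist up front
        print("Plan:", hypothesis["description"])

        # Do: execute the experiment on a small, controllable scale
        result = run_experiment(hypothesis)

        # Check: compare the actual result with the expected outcome
        if expectation_met(result, hypothesis["expected_outcome"]):
            return "adopt"    # Act: the outcome supports the change
        return "discard"      # Act: it doesn't - drop (or rework) it

    hypothesis = {
        "description": "Limiting WIP to 3 reduces average cycle time",
        "expected_outcome": 5.0,  # target: average cycle time below 5 days
    }
    decision = pdca_cycle(
        hypothesis,
        run_experiment=lambda h: 4.2,         # measured cycle time in days
        expectation_met=lambda r, e: r <= e,  # did we reach the target?
    )
    print("Act:", decision, "the change")

The point of the sketch is the last step: the change is only adopted because the experiment supported it – not because the plan said so.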

Am I wrong in my interpretation?

Cheers Michael

Sunday, May 01, 2016

The Return of the Mainframe and the Arrival of Cyberpunk

Back in the day, people who wanted to use a computer needed to go to very special places to access those computers.
Actually "access" doesn't quite represent the same concept we have about "accessing a computer" nowadays. Today accessing a computer refers to direct interaction via touchscreen, keyboard, mouse or even voice. Back in "the old days" it meant punching holes in decks of cards (on special machines) and handing them over to so-called operators. Then you had to wait a while; hours at least, if not days, to collect the results after the stack had been processed by the computer.

A little while later, time-sharing, online-transaction operating systems were introduced and it became possible to interact directly with the machines. In a way. If you call accessing a computer via a terminal, hooked up by a 300-bit-per-second modem line, capable of displaying 25 rows of 80 characters each, "accessing."

This was the landscape of computing when the idea of a "home computer" and later the "personal computer" was born. People were just yearning to explore this world of programming and informatics, and just accessing the mainframe on the terms of the owners of said mainframe wasn't giving them the freedom they wanted.

Thus the whole home- and personal-computer universe came into existence.

Because people wanted their own computers. And use them, how they wanted.

Now everybody – given the time, knowledge and still a considerable amount of money – could make their computers do what they wanted.

Let's skip a couple of decades and see the internet (and not only the world wide web) bloom. Created from all the wild experimenting, the unfeasible ideas, the "we'll see if it works", the "I think it should look like this" that individually owned, run, administered and programmed computers brought forward.

One of the biggest success-factors (the 'killer-app') for a long time was e-mail. Electronic mail that was sent from one machine to another over an intelligent network of interconnected servers. A network that found the currently best route from sender to recipient. Computers that delivered those mails based on a very simple standard (RFC 822) independently of the concrete system that was on each side of this connection.
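
Just to show how simple that standard really is: a complete message is little more than a few header lines, a blank line, and the body. Here is a minimal sketch using Python's standard library (whose email module follows the modern successors of RFC 822; the addresses are of course made up):

    from email.message import EmailMessage

    # A message in the RFC-822 tradition: a handful of headers,
    # a blank line, and the body - that's all a compliant mail
    # system needs to route and deliver it.
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Hello across the open network"
    msg.set_content("Plain text that any compliant system can deliver.")

    print(msg.as_string())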

And what happens today?

We get things like Google Mail and Facebook that run best when messages are sent while you're using their server (a.k.a. distributed mainframe) via a web browser (which is actually just a more sophisticated terminal than that old 25x80 TTY) on their terms.

And of course mail is just one example here – office suites that run only "in the browser", graphic software with "a web interface" etc. are all following the same trend.

Looks like we have the same old mainframe back in our yards – just with shiny new colors and so many bells and whistles that we're (mostly) just lulled into going along with the convenience of the solution. And only a few people nowadays care about the freedom of their data. And guess what: some of the stuff those people are concerned about, say data security and encryption, is being made illegal – or at least hard to achieve.

For example, owning tools which allow me to verify my system's integrity is becoming illegal in some places nowadays, and the development of such "hacker tools" has been made a criminal offense...

So we live in a time where average people perform most of their information-related tasks using corporation-owned computers at the discretion of those corporations, while systems programmers and developers of safety-critical software are bordering on criminalization – pretty much what cyberpunk authors predicted decades ago.

Just my 2¢...

Cheers Michael