Beyond Cards on a Board

Rethinking how we visualize and measure the work of making software

Krishna Kumar

We’ve all seen them wherever software teams work: the whiteboards with sticky notes posted in neat little rows and columns, or more likely these days, the electronic Scrum or Kanban board filled with card-like things in even neater rows and columns.

People sit quietly at their desks tapping away, and every now and then someone moves a card from one column to the next. Occasionally there is a high-five when a card reaches the last column. If this were "bring your kids to the office" day, the little ones would not be out of line thinking their mum or dad "made" software simply by moving cards on a board.

Cool for little kids, but unfortunately not so cool when this is the way your engineering analytics tool thinks too.

Everything interesting about making software happens between the time a card enters one column and the time it moves to the next. Yet almost all current techniques and tools for analyzing software development processes treat cards as the basic primitives, and the movement of cards between columns on the board as the basic operation, in a software process.

We count cards to measure throughput. We put limits on the number of cards in a column to manage bottlenecks. We compute statistics on how long cards took to move between columns to analyze bottlenecks, usually after all the movements are done and the bottlenecks have moved on as well. We build enterprise software supply chains by combining assembly lines of cards, measure how long things took in the past, and use this to forecast when projects will be done in the future.
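To make this concrete, here is a minimal sketch (in Python, with an invented event format) of the kind of card-centric arithmetic these tools perform. Everything is derived from timestamps of cards crossing column boundaries; nothing in it knows anything about the code.

```python
from datetime import datetime
from statistics import median

# Hypothetical card transition events: (card_id, column, entered_at).
# The event shape and column names are illustrative assumptions.
events = [
    ("CARD-1", "In Progress", datetime(2020, 1, 6, 9, 0)),
    ("CARD-1", "Done",        datetime(2020, 1, 9, 17, 0)),
    ("CARD-2", "In Progress", datetime(2020, 1, 7, 10, 0)),
    ("CARD-2", "Done",        datetime(2020, 1, 8, 12, 0)),
]

def cycle_times(events, start_col="In Progress", end_col="Done"):
    """Hours each card spent between entering start_col and entering end_col."""
    starts, durations = {}, {}
    for card, column, ts in sorted(events, key=lambda e: e[2]):
        if column == start_col:
            starts[card] = ts
        elif column == end_col and card in starts:
            durations[card] = (ts - starts[card]).total_seconds() / 3600
    return durations

ct = cycle_times(events)
print(f"median cycle time: {median(ct.values()):.1f} hours")
```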

In short, we analyze software development as though it were a manufacturing process. But this is the wrong metaphor, because the core activity in software production is people making changes to a shared code base. The "cards on a board" metaphor abstracts this key aspect of code production away entirely. Worse, it introduces damage, inefficiencies, and perverse incentives into the way we actually make software.

Let's see why, with an example.

There is huge value in knowing how long cards took to move between columns, but if you want to know why a card took so long to move, or why it has been sitting in one place for so long, you need some way of looking at what was happening in between those movements, while people were sitting quietly at their desks tapping away.

The trouble is that the folks looking on from the outside have no visibility into the actual process of making software, and without this situational awareness we enter the dreaded "feedback loop of distraction".

It goes like this.

  1. The engineering manager wants to know why that card has not moved in two days, so
  2. He walks over to (or emails, or Slacks) the engineer to find out why, and in doing so
  3. Breaks the zen-like state in which she is cranking out code while typing quietly away.

Her answer is probably not going to be of much help to the manager, except to make him feel somewhat better (or worse). In the meantime, he has probably broken the flow state she was in, setting the card back an extra two hours, and we are back at step 1.

In short, the rituals of engineering management end up reducing engineering productivity without necessarily adding value.  

It is how most of us work today.

What does it mean to "Make Software"?

Software development is a collaboration among a diverse group of people with different perspectives: designers, engineers, product owners, testers, and operations, all involved in building and delivering something that their end users and customers value. But what are they building, and how do they do it?

Our view is that the key activity to model and measure in software production is the work needed to take a set of desired changes in the behavior of an application and make, test, and ship the corresponding changes to a shared code base, for the benefit of users and customers.

This principal activity of production, the collaborative modification and delivery of changes to a shared code base, is not modeled anywhere in the way we analyze software development processes through a manufacturing lens. It is abstracted away behind card movements, wished away as unnecessary or too "complex" to model in detail, or treated as a mysterious black art that only the initiated few are allowed to see and ask questions about. We analyze the deliverables but not the process of delivery.

But this misses much of what makes making software different from making a set of similar or identical widgets on an assembly line.

Every card involves different people making different changes with different implementation complexity and risks. Each card introduces changes to the code base that can potentially impact every other card in the pipeline. The skill level and familiarity of team members with different areas of the code base can make a dramatic difference in how long a change takes or how risky it is. A similar change made at different points in time can have significantly different consequences for the behavior of the application. Yet none of this is modeled in how we measure software development today.

To summarize, the process of making software is nothing like a manufacturing process. This is what makes software estimation hard, and software projects hard to manage, model, and predict. Our thesis is that the desire to wish away this complexity and force software development into a predictable assembly-line model is a key reason that attempts to analyze software development using the "cards on a board" metaphor have been spectacularly ineffective.

To move beyond these 20th century manufacturing metaphors and bring software development into the 21st century, it is essential to open up the black box of software implementation and give everyone on a software development team - engineers, engineering managers, product owners, testers, and executives - a shared vocabulary they can use to visualize, analyze, and talk about the actual process of making software.

This is the fundamental premise of our approach to Measuring the Work of Making Software. The basic idea is simple: version control systems keep a detailed history of how a code base was changed to implement cards. By connecting each card, in real time, to the actual code changes engineers make to implement its requirements, we would know how far along the card was in implementation at any time. This lets us analyze work in progress in detail as it is happening - something that is very hard to do by just looking at cards on a board.

Mapping Cards to Commits

Not only that, we’d know how the card was implemented: what code changes were made by whom,  when they were made, how long they took, what areas of the code base were affected, and a whole lot more.

As a bonus, the card tells you why the code changes were made, which provides a tremendous amount of business context around a particular set of code changes - context you cannot recover by looking at the changes in isolation.
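To illustrate, here is a minimal sketch of one common way such a mapping can be built: extracting card keys from commit messages, a convention many teams already follow. The key format and commit records here are assumptions for the example, not a description of how Polaris does it.

```python
import re
from collections import defaultdict

# Assumed convention: a Jira-style card key (e.g. "POL-42") appears in the
# commit message. In practice the records would come from `git log` or a
# version control API; these are illustrative stand-ins.
CARD_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

commits = [
    {"sha": "a1b2c3", "author": "maria", "message": "POL-42 add login form validation"},
    {"sha": "d4e5f6", "author": "joe",   "message": "POL-42 fix flaky validation test"},
    {"sha": "0719ab", "author": "maria", "message": "POL-57 extract billing service"},
]

def commits_by_card(commits):
    """Group commits under the card keys mentioned in their messages."""
    index = defaultdict(list)
    for commit in commits:
        for key in CARD_KEY.findall(commit["message"]):
            index[key].append(commit)
    return index

for card, card_commits in commits_by_card(commits).items():
    authors = {c["author"] for c in card_commits}
    print(f"{card}: {len(card_commits)} commits by {', '.join(sorted(authors))}")
```

Branch names and pull request references can serve as additional mapping signals where commit messages alone are not enough.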

This gives you the tools to understand the fine nuances of how each team builds software - a process that differs for every team, and potentially for every card that is implemented.

When combined with Continuous Measurement, a technique where we aggregate real-time measurements on a low-level model mapping cards to code changes up to the state of the overall product pipeline, we not only get a lot of valuable data that engineers can use, but we can also build tools on top of this data that give everyone on the team - engineers, tech leads, managers, product owners, and executives - the ability to visualize the detailed flow of work in engineering in real time.
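Here is a minimal sketch of that idea, under assumed data shapes: each incoming commit updates the low-level card model, and pipeline-level views such as "stale in-progress cards" are derived by aggregating it, so the manager in the earlier story could get an answer without interrupting anyone.

```python
from datetime import datetime, timedelta

# Hypothetical low-level model: one record per card, kept current as commit
# and card events arrive. Field names and states are illustrative.
cards = {
    "POL-42": {"state": "In Progress", "last_commit_at": datetime(2020, 1, 8, 16, 30)},
    "POL-57": {"state": "In Progress", "last_commit_at": datetime(2020, 1, 2, 11, 0)},
    "POL-61": {"state": "Done",        "last_commit_at": datetime(2020, 1, 7, 9, 45)},
}

def on_commit(cards, card_key, committed_at):
    """Update the low-level model as commits arrive, keeping it current."""
    cards.setdefault(card_key, {"state": "In Progress"})["last_commit_at"] = committed_at

def stale_cards(cards, now, idle=timedelta(days=2)):
    """Pipeline-level view: in-progress cards with no recent commit activity."""
    return [key for key, c in cards.items()
            if c["state"] == "In Progress" and now - c["last_commit_at"] > idle]

on_commit(cards, "POL-42", datetime(2020, 1, 9, 10, 15))
print(stale_cards(cards, now=datetime(2020, 1, 9, 12, 0)))  # ['POL-57']
```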

We can answer questions going all the way from an engineer asking "who broke my code just now", to the CTO asking "why do features take so long to build" or "are engineers working on the right priorities", to the product owner asking "was the customer value we got for this feature worth the engineering dollars we spent on it", and of course, the engineering manager asking "why hasn't that card moved for the last two days".

All from a single, consistent model of the world that is accurate as of the latest commit, card update, or pull request.

A quick video is the best way to get a feel for how this looks. It shows a component called the SpecFlow Chart, a key visual metaphor in Polaris, a Continuous Measurement platform from Exathink Research that implements these ideas.

The SpecFlow Chart in Polaris

Polaris - A Continuous Measurement Platform

Sound interesting? We invite you to take a tour of Polaris, a Continuous Measurement platform that implements these ideas in depth.

Ergonometrics™ is a measurement framework powered by Polaris, designed to help teams analyze how they work to deliver software and systematically execute strategies that improve the ergonomics of value delivery. We help you ship high-quality software faster and manage engineering capacity to deliver maximum customer value effectively and efficiently.

Our approach is process-agnostic and focused entirely on measuring and improving customer-facing outcomes while connecting them to the nitty-gritty details of changing, testing, and shipping code. So you can use it on top of whatever flavor of agile (or even waterfall, if that is still your thing) you may have in place. It works for teams of all sizes.

Polaris and Ergonometrics™ have been field-tested over several years in consulting engagements at exathink. Check it out at Get Polaris.

Please Subscribe and Share

We'll be sharing a lot more about these ideas in follow-up posts on this blog. So if you think you might be interested in participating in the public beta, or simply in following along, please subscribe to our mailing list below.

And if you think there are ideas here that are worth spreading, please share on your social networks. We'd love to get the word out :)


Krishna Kumar

Software engineer on a mission to bring data-driven decision making into our craft. It's time to leave the 20th century behind.