Being an engineering leader at a software company can often feel like being the coach of a professional sports team where you and the coaching staff are asked to put on blindfolds before the start of the game, and are only allowed to take them off every so often, and then only to look at the scoreboard. Can you effectively lead a team when you cannot see how the game is played?
If you are not shipping releases reliably, do you know why? If your releases are buggy, do you know why? Do you know if your engineering team is actually working on things that are the top priorities for your business? If not, do you know how to change course? Do you know how long it actually takes to build and deliver the features your customers value the most? Do you know why they take as long as they do? Do you know whether adding more engineers will help you meet your goals, or is it something else that is holding your current team back? When you grow your team, are they actually adding more customer value?
Every other function in a software company, be it sales, marketing, product, or operations, has quantitative tools to understand how a team operates, and uses this data to improve how it works to deliver value to customers. But not engineering. It is almost an article of faith in the software industry that questions like these are too hard to answer with any precision or rigor, and that it is almost futile to try.
As engineering leaders, we acknowledge and settle for the sub-optimal. We make plans with educated guesses and manage them by gut feel, elaborate status reporting and rituals. We measure activity and busyness using metrics that have a veneer of precision but little or no diagnostic value. We put our faith in development methodologies and equate performance with following their prescribed rituals. We rinse and repeat, and continue to live with massive inefficiencies and wasted effort.
As someone who has moved back and forth between hands-on development and leadership roles for over 25 years, I’ve always been struck by how different our work looks from the “inside” as an engineer, and from the “outside”, when I’ve been in leadership or advisory roles.
On the inside, we talk about what I consider the real work of "making software": the logistics of collaborating with other team members to safely make, test, and ship changes to a shared code base, in order to implement the changes in behavior that our users and customers want to see in our applications. While customers and customer needs figure tangentially in these internal discussions, most of the focus is on purely technical concerns and choices.
From the outside, at best, we see “cards on a board” that show what the deliverables are and where they are in a delivery workflow. There is no way to understand how the actual work gets done, how well it is being done, and whether there are opportunities for improvement, short of being in the trenches with engineers and actually reading and working with code. Even as someone who can do this comfortably, it’s much too low a level of detail to gain much insight into the larger processes at work in software delivery. This is why even most engineers on a team working on a large code base have no real idea of the big picture of what is changing around them and why.
This is the big hole in the way we work today: engineering teams and stakeholders outside engineering lack a common visual and process vocabulary to communicate effectively about the actual work of making and shipping software. We don't have good solutions for understanding software development processes at a level granular enough to surface the technical complexities of software implementation in a quantifiable way, yet accessible enough for non-technical stakeholders to understand what it takes to “make software”.
For example, it should be perfectly reasonable for a business stakeholder to be able to use accurate data to answer a question like "why did that feature take longer than expected to deliver?" or "does our application architecture allow us to ship the most valuable features for the business as efficiently as we possibly can?". Having this data allows them to work with engineering on initiatives to improve the process and, if necessary, the code base, measuring progress and impact on business value. With internal engineering initiatives and customer-facing work compared using the same yardsticks, all work competing for engineering capacity can be measured for business impact and prioritized accordingly.
We often use manufacturing analogies in software development, and for the most part they are the wrong way to look at the work of making software. But most managers on an assembly line have a much greater understanding of the process they manage, and they have the tools to look at the work of the teams they manage and to collaborate with them on making the process work for everyone.
A few years back, we decided to tackle this problem from first principles at Exathink Research. Our goal was to find a better way to answer these questions while working to turn our purely qualitative advisory practice for building high-performance engineering teams into one that was rigorous and data driven.
The key problem to solve was to find a good analytical model that captures the nuances of software implementation: the process by which requirements get turned into code by engineers. This is the most expensive and uncertain part of the software development lifecycle, and the hardest to analyze objectively to understand how a team is doing. It is also an area where there have been spectacular failures in the past: a target of much deserved skepticism, especially among my fellow engineers. And yet, it seemed crucial to solve this and gain an analytical understanding of the work of making software if we are to get a more objective, rigorous handle on software development in general. So it seemed worthwhile to try.
We recently announced the public availability of Polaris Advisor: our solution to this problem, based on nearly half a decade of hands-on R&D and work with clients, and one that we believe truly positions us to answer these questions with a new level of rigor. Our framework, Ergonometrics™, allows us to systematically analyze all phases of the software development lifecycle from requirements through release, including the development phase, and Polaris, a real-time Continuous Measurement platform, lets software teams implement continuous improvement programs based on the framework with zero operational overhead.
Our thesis is that having accurate knowledge, backed by data and rigorous measurement, will give you and your team a valuable new set of tools to help you ship more high-value code faster, and more efficiently, effectively, and predictably.
The idea is to let you and your team take off the blindfolds and see the game in play, from beginning to end, with real-time measurements that identify opportunities to continuously improve the development process. Everyone on a team, from engineers and managers to product owners and senior executives, can learn how to ship products faster and maximize the customer value delivered, using hard data instead of gut feel.
If you are an engineering leader, it will add a powerful new set of tools to your toolbox and help you lead your team more effectively.
If you would like a summary update of content we publish on this blog to be delivered to your inbox once every two weeks, please subscribe to our mailing list below. Or just follow me on Twitter or LinkedIn and get updates that way.