Without a tangible, detailed understanding of the work undertaken by software workers on a day-to-day basis, software managers are not equipped to know when they are requiring software workers to supplant safety with expediency.
Workplace safety is an issue of productivity. Lack of safety breeds low productivity. In the case of software development, "safety" is a matter of undertaking work in a fashion that doesn't expose the software worker to unnecessary risk - especially risk that is easily avoided. That is, easily avoided for someone who understands the work of software development at a high enough level of expertise and detail to guide workers away from unsafe practices.
An unsafe practice in software development is any work done in the present moment that makes work in the next moment take longer to complete. A reasonable analogy for this is the notion in physical materials work of a clean workplace.
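As a concrete (and entirely hypothetical) illustration of "work done in the present moment that makes work in the next moment take longer": a hand-typed duplicate of a business rule works fine today, but the next change must now be made in two places - and missing one copy is exactly the kind of tripping hazard that hides in plain sight. The names and the tax-rate rule below are my own invention for illustration.

```python
# Hypothetical example of software "clutter". Both functions agree today,
# so nothing looks wrong -- but the duplicated rate is a hazard waiting
# for the next moment's work.

TAX_RATE = 0.08  # the single, safe home for this fact


def invoice_total(subtotal):
    """Total using the named rate -- one place to change."""
    return round(subtotal * (1 + TAX_RATE), 2)


def receipt_total(subtotal):
    # Hazard: a second, hand-typed copy of the rate. It matches today,
    # but the next rate change must now find this line too.
    return round(subtotal * 1.08, 2)
```

The snippet is safe right up until the moment someone changes `TAX_RATE` and forgets the literal `1.08` - at which point the "clean workplace" analogy becomes very literal.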
The purpose of a clean workplace is to remove the clutter that hides unseen, unsafe conditions. It's a lot easier to trip over an errant piece of material in the workplace when it's surrounded by a profusion of other errant materials. When hazards are so common that they blend into the background, we simply stop noticing them, and this is how we come to be put at greater risk.
The more unsafe clutter we have in our workplace, the more we have to work around. The more we have to work around, the more time it takes us to do work.
If a manager can't detect the conditions that make work take longer to accomplish than expected, he doesn't adjust his expectations for cycle time accordingly. This means that he expects more to get done - the original work, plus the workaround work - in an amount of time that would be reasonable for the completion of the original work alone.
This makes workers even more careless, exacerbating the accumulation of obstructions and hazards in the software development workplace as a result. It's a vicious cycle that usually only happens because someone with the authority to avoid problems in the first place doesn't have the experience to recognize the problems and to deal with them.
In the words of the ancient Chinese workflow master, Lao Tzu: "Confront the difficult while it is still easy. Accomplish the great task by a series of small acts."
You can only perform those small acts, though, if you can detect that they're needed. The finer your ability to perceive counter-productive deviations, the quicker you can respond to them. The longer you wait, the further off-course you'll be when you finally realize that you need to make a course correction. Someone who doesn't do the work can't recognize when that work is off-course. Someone who isn't an expert in the work can recognize when the work is off-course, but he will only recognize it after it has become more expensive to deal with than necessary.
And these, as the average software worker will tell you, are the day-to-day conditions in which their work is done. Software work is a constant battle fought against the accumulation of hazardous clutter. Software workers rarely get ahead of the clutter curve, and have to invest significant effort to keep their workplace free of hazards. Paraphrasing Taiichi Ohno: if you're not moving forward, you're falling behind.
Each individual scrap of hazardous material in software development is typically quite small - small enough, in fact, to be easily deemed negligible, even by people who do the work. But the hazard presented by two iotas of hazardous waste in software isn't the sum of the hazards - it's the sum of the hazards compounded by some multiplier. Hazards in software don't exist in isolation; they interact with each other, creating a higher order of hazard that is greater than the sum of its parts. Two pieces of hazardous software that interact with each other don't create two hazards; they create two hazards compounded by the number of interactions between those modules. To make matters worse, modules in close adjacency to hazardous modules also get infected. Lack of day-to-day expertise in software work often leads to negligent underestimation of the risks associated with these "design hernias".
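A rough sketch of the compounding claim above - my own back-of-the-envelope model, not anything formal from the text: if every hazardous module can interact with every other one, the interactions grow with the square of the module count, so the effective hazard count races ahead of the raw count.

```python
# Illustrative-only model: n hazardous modules, all pairwise-interacting.
# Each module is one hazard; each interaction compounds one more.

def interaction_count(n):
    """Pairwise interactions among n modules: n * (n - 1) / 2."""
    return n * (n - 1) // 2


def effective_hazards(n):
    """Raw hazards plus one compounded hazard per interaction."""
    return n + interaction_count(n)


for n in (2, 4, 8, 16):
    print(n, "modules ->", effective_hazards(n), "effective hazards")
```

Doubling the raw hazards from 8 to 16 roughly quadruples the effective hazards (36 to 136 in this toy model) - which is the intuition behind "greater than the sum of their parts".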
Software hazards compound very quickly. It's very easy to arrive at an explosive accumulation of hazardous software materials. It can happen within the first few days of a software project. If subtle hazards in the tools and frameworks that programmers are slated to use are not recognized immediately, the accumulation begins before the first line of code is even written.
When we ask software workers to continue to work in such conditions, we might as well send them into a mine filled with coal dust and require them to hyperventilate for several hours each day. Sooner or later they are going to have to escape the job to escape the hazards, or they are going to acquiesce to the irreconcilable differences between a manager's expectations and the realities of working conditions that the manager himself feels entitled to not be exposed to, having already "paid his dues". This creates dispassionate cynicism in software developers. They come to learn that, until they escape the work, no matter what they do they are more than likely going to end up on the losing end of the software work proposition. They learn that they'll get hung with the detritus of negligent software development decisions and direction, because decision makers are too far removed from the work to see that it is these critical decisions and directions (or the lack thereof) that are the root cause.
In these working conditions, workers constantly face obstructions that are only in place because they weren't recognized when they were mere seeds of problems rather than towering oaks of looming counter-productivity.
The accretion of hazards into the depressed productivity that is the status quo for most software teams is fully and completely avoidable - if only the hazards are dealt with while they are small. To see looming hazards when they're small, you have to have detailed knowledge of the work in the here and now. This is simply not possible when managers have removed themselves from the work.
It's far too common for software development managers to feel entitled to be removed from the work of software development - as if removal from the work is a reward for having done the work for a number of years. The reward for doing software development work for a number of years is not an escape from the work, but an immersion into a far deeper understanding of it, so that expert insight can guide software work away from hazards. Of all the escapes that can be orchestrated by software workers, an escape into management is an act of pure negligence.
Software work deals in intangibles. Software hazards go unnoticed because they are physically invisible. Taking a bit of liberty with Donald Reinertsen in The Principles of Product Development Flow, we don't see the manifestation of software development safety issues "when we walk through the engineering department." They're "bits on a hard drive, and we have very big hard drives in product development."
Managers often try to compensate for their lack of detailed understanding through the use of summary representations like software diagrams. But software diagrams don't show the looming problems while they are still small; still manageable gathering storms that can be dissipated through judicious application of minimally-invasive countermeasures. Only flagrant hazards can be detected in summary representations. Small, detailed course corrections can't be plotted from coarse, summary information. And yet managers who feel entitled to be removed from the work perpetuate the fantasy that summary representations like diagrams are sufficient to bring their purview into action on the software projects they manage. This is pure folly, and software tool vendors are quite happy to continue to exploit it.
By the time you've detected a software hazard that can be seen in a summary representation like a software diagram, you're looking at a problem that has festered for far too long. You should be able to detect the chemical markers of the disease long before you notice that lump in a vital organ.
Programming work is almost entirely mental. Its effectiveness is influenced by psychology, cognition, awareness, and communication far more than by any material concern like ergonomics, an ultra-fast workstation, or multiple monitors. Software systems are far too large and far too complex to be held in their entirety in the conscious focus of any single software worker in any single moment. The devil is always in the details. Summary representations are secondary to the actual raw material of software: the code. You have to be in the code, and you have to know enough about code to understand which subtle design differences are looming problems and which aren't.
The average software system is a dark coal mine filled with the poison particulate of tomorrow's case of black lung. We ask developers to work in these conditions every day. We do this because at some point in our careers as managers, we believed that we were entitled to be removed from the details of software work.
Sooner or later, enough hazardous software material accretes in a software system that managers step in, and often the first thing they do is look for a root cause in the workmanship of the software workers. The workmanship is rarely the root cause. The absence of informed, skilled, and insightful software management and guidance is always a preeminent cause of unattended workmanship. The workmanship is, of course, a problem, but it's a side-effect.
This is like blaming miners for an underground explosion caused by the accumulation of day-to-day hazards that result from institutionalized negligence of clear and present dangers - dangers that should be managed the moment they show their first signs. We ask workers to undertake work that they know is not in their best interest. We ask them to do this from our organizational perches far above the hazardous conditions of software projects. And then we hang them with the inevitable costs of this kind of mismanagement.
It's the software worker who is required to spend extra time in the mine to balance productivity lost to management negligence. It's the software worker who is required to take on the duty of sanitizing data by hand, without the safety of the proven automation that should have been built right to begin with. We ask them to validate the decisions that we've made about the tools that they'll use in their work based on little more than a compelling sales pitch from a tool vendor who claims to know more about the day-to-day work of the software developers in our organizations than we do - vendors who are even further removed from the work than we are.
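The "proven automation" alluded to above can be tiny. As a hypothetical sketch (the record format and cleaning rules here are invented for illustration), a small, tested routine sanitizes every record the same way every time, where hand-cleaning invites the very inconsistency that becomes tomorrow's hazard:

```python
# Hypothetical sketch of proven, repeatable data sanitization:
# trim stray whitespace from keys and values, and drop fields
# that are empty once trimmed. Rules are illustrative only.

def sanitize_record(record):
    """Return a cleaned copy of a raw {key: value} string record."""
    return {
        key.strip(): value.strip()
        for key, value in record.items()
        if value and value.strip()  # discard empty or whitespace-only fields
    }
```

Because the routine is small enough to test exhaustively, it can be trusted in a way that a human repeating the same chore by hand at the end of a long day cannot.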
We ask software developers to work against their own interests and the interest of our organizations because we don't understand the hazards that our decisions create.
Software workers suffer disruptions to their lives and their livelihoods when this kind of institutionalized negligence drives work policy on software projects. They shoulder the accumulated detritus of uninformed and unskilled management decisions. They suffer the humiliation of blame when management, lacking insight, can't see its own reflection in the root cause mirror. And institutions lose invaluable institutional knowledge when workers escape the organization altogether. It's always a no-win situation. Everyone loses. And it's completely avoidable.
If you're not willing to be in the day-to-day work of software development, you're declaring loud and clear that you're not qualified for the authority to make decisions that direct software work. Every decision you make risks adding to the accumulation of software hazards. When you do this, you deplete workplace safety for software workers. You ask them to take risks that they know you can't see - and because you can't see those risks, they know you will likely never understand that the accumulation of avoidable hazards into full-fledged, clear and present dangers is your fault to begin with.
Workplace safety is a serious productivity issue. It's a serious issue for the health of workers. It's a basic expression of human respect for the people who work for software development organizations. Understanding workplace safety for software developers requires a high level of expertise in software development, and it requires day-to-day currency in software development. A manager who is not willing to have insight into the details of the work that he is responsible for is patently disrespectful of his workers. He constantly puts them into harm's way by presuming to express authority without knowing whether his expectations are hazardous to the health, well-being, and viability of software work and software workers. Beyond disrespectful, it's dishonorable.
The software field isn't at risk of programmers forming a united front and leveraging collective bargaining, and frankly such a thing isn't likely what anyone wants. But dealing with the root-cause issues that have driven other industries to such actions is a win for everyone when obstructions to productivity are removed in the process. Addressing software development workplace safety is a good place to start - especially in our current economic conditions, where unlocking a massive reserve of untapped productivity would be a welcome change indeed.
Working with software developers and organizations to help realize the potential of software product development through higher productivity, higher quality, and improved customer experience.
Learn more about my work and how I can help you at ampgt.com