Friday, September 11, 2009

Analysis: CodePlex Foundation - The Terms of Mutual Surrender

Microsoft announced the CodePlex Foundation yesterday. The strategy is quite compelling, and frankly, it points to a watershed moment, a turning point for the Microsoft platform and .NET community, as well as for Microsoft.

The CodePlex Foundation initiative translates resoundingly to one great thing: Opportunity.


In April of 2001, Microsoft released Beta 1 of the .NET Framework and Visual Studio .NET. At the same time, Jim Newkirk and a couple of industrious .NET early adopters with a track record in agile development in Java released NUnit, an open source unit testing framework for .NET.

Whether NUnit was the first open source application in the .NET ecosystem, who knows? Nonetheless, NUnit was the first open source .NET software for many .NET developers who looked to open source for answers even before .NET itself was released. Jim is presently a CodePlex Foundation advisor and Product Unit Manager for Microsoft's CodePlex open source repository.

To say that the rest is history is a predictable cliche. Whatever has happened in open source in the Microsoft space up until now, as astonishing as many .NET open source accomplishments are, it may pale in the face of the opportunities that are available in light of the CodePlex Foundation initiative.

Of the time between April of 2001 and the present, the following can be said of open source in the Microsoft space: It was a constant battle. Open source was stigmatized. We had to fight to use open source software in our jobs. And when we weren't fighting to use open source, we were fighting to not have to use commercial clones of open source software that often weren't nearly as robust as the open source.

Certainly things have been getting better in the past few years. More organizations have opened up to open source - especially when open source solutions are the best-of-breed - but Microsoft's own lingering reticence toward open source remained the elephant in the room, and a significant influence over the customer community's perception of open source.

And even though Microsoft has been making some inroads into open source over the past few years, Microsoft's own open source has remained constrained. With a couple of exceptions, Microsoft did not accept contributions to its open source from contributors outside of Microsoft. And Microsoft developers were discouraged from, or outright disallowed from, reviewing open source code for fear of infecting Microsoft products with the potential "viral" effects of some open source licenses.

Imitation is the Sincerest Form

In the past eight years, Microsoft has learned a lot from open source software. A number of Microsoft product ideas were proven first by open source projects. However, because Microsoft itself has not been open to open source solutions, and because a good bit of Microsoft's customer community follows Microsoft's lead on many fronts, the only way that Microsoft could provide its customers with the value available to open source users was to develop new products from scratch.

Microsoft's efforts at duplicating the value offered by open source were fraught with problems. Because Microsoft employees couldn't look at the open source code, Microsoft's competing offerings often weren't terribly competitive. They would be adopted by Microsoft customers nonetheless on the basis of Microsoft's merit as the largest commercial software foundry.

All this leads to an oft-repeated heart-breaking scenario for .NET developers who are already well-versed in best-of-breed .NET solutions that have matured in the open source world: showing up for work one day to learn that the company you work for has chosen to go with an immature, inferior solution that Microsoft had been left with little commercial choice but to build. This sinking feeling is followed up quickly with the realization that a company the size of Microsoft can't release innovations and fixes as fast as an open source project. The Microsoft clones of open source projects progress slowly while the open source solutions drive ahead with innovative and often more productive solutions.

Ultimately, this cycle repeated year after year and often created an ever-deepening insularism in Microsoft that exacerbated the problem even further.


The CodePlex Foundation will bring influential open source projects under its auspices. The details aren't clear yet, but it's reasonable to assume that the foundation will support its projects the way that other software foundations support their projects, with protection for these projects as they are used in corporate and commercial contexts and who knows, maybe even some financial support will be part of the deal.

The single greatest opportunity that the CodePlex Foundation represents is an end to orthodox resistance to open source by Microsoft, and its customer community by extension, and the dawn of a new day of .NET where open source can be openly embraced.

In a potential tomorrow, the best tools for the job aren't sources of intellectual property suspicion, talented software craftspeople have greater freedom to use the tools that they have built significant mastery of, methodologies and techniques aren't driven by tool limitations, and innovation is free to move at its own pace, and can do so as a cooperation between industry and community.

And most importantly, in a world where Microsoft begins to embrace open source, it begins to subject itself to the open competitive forces that will make its products better, and make Microsoft itself a leaner, more agile company.

In this world, .NET open source champions aren't relegated to a relatively small backwoods, but are granted the broad regard and respect that is commonplace in the Java, Ruby, Python, and PHP worlds, among others with rich open source culture and history.

Also consider that in this world, we're one step closer to the level of comfort with open source for Microsoft where community contributions to Microsoft's Ms-PL projects can be possible. The arrival of the CodePlex Foundation doesn't provide for this, but it does provide for the next few significant steps on the way.

This effort will bring Microsoft staff into close encounters with open source software and open source software projects, breaking down the barriers that the community has decried for a long, frustrating time.

I understand that in concert with this effort, Microsoft is also changing its policy for open source contributions for its staff, allowing staff to make limited contributions to open source without the requirement of oversight from Microsoft legal staff! This is a significant shift in policy for Microsoft and points to some significant reconsideration of policies and philosophies of the past.

What Price Freedom?

The source of Microsoft's trepidation over open source hasn't changed. The central issue is still, and will remain for the foreseeable future, intellectual property risk. The source of the trepidation for many of Microsoft's customers remains intellectual property risk.

The solution to the problem is as obvious as it is genius, but the price of freedom isn't without some non-trivial compromise, and a challenging new paradigm for open source leaders to consider.

The intellectual property risks can be greatly (if not entirely) mitigated when intellectual property is assigned to an intermediary, and that, among other services, is what the CodePlex Foundation is for.

Put frankly and directly, the CodePlex Foundation is given ownership of the code.

Per the CodePlex Foundation Assignment Agreement:

"Assignor assigns to Foundation its entire right, title, and interest in any copyright rights that attach to the Code and any documentation delivered with the Code."

Per the terms of the agreement, the Foundation grants an irrevocable license to the code to the original owner. Again, from the CodePlex Foundation Assignment Agreement:

"Foundation grants Assignor and its affiliates a perpetual, worldwide, non-exclusive, royalty free, irrevocable license, to reproduce, modify, create derivative works of, display, publicly perform, sublicense and distribute the Code (and derivative works thereof) as Assignor or its affiliates see fit, including the right for Assignor and its affiliates to sublicense the foregoing rights to third parties."

Ultimately, this means that it's business as usual for the open source projects, with the additions of the protections of the CodePlex Foundation, and greater opportunity for project adoption in spaces that were not previously open, and the chance to participate in what might be a more cohesive and cooperative open source community at large in the Microsoft space.

The Inevitable Distrust

There's no avoiding the issues of distrust that will surface from this. The usual suspicion is inevitable considering the history of Microsoft and open source, the invitation to open source projects to give ownership of their code to the CodePlex Foundation, the preponderance of Microsoft staff on the foundation's interim Board of Directors and Board of Advisors, and the branding of the foundation with an existing Microsoft brand: CodePlex (also the name of Microsoft's public open source repository and community site).

Microsoft has a habit of springing things on the community. But then, so does Apple, and so does Google. Love it or hate it, this is how product companies launch products and how they protect themselves. It doesn't always work out so well, and for my money, Microsoft is the weakest of these companies when it comes to operating without early and continual customer feedback in product development, but, well, there you have it. That said, the CodePlex Foundation isn't exactly fully operational yet, and for all intents and purposes, this is the opportunity that the CodePlex Foundation has provided for community input, and the foundation staff has gone out of its way to make this point clear on the foundation's website.

The easiest way to staff up the foundation is to do what Microsoft did. It tapped the people who worked to make the CodePlex Foundation happen for interim positions in the leadership of the foundation. It also assigned some serious business acumen to the interim Board of Directors. Having Microsoft staff in-play on the Board of Directors is just a smart, immediately-sustainable thing for Microsoft to have done with the investment that has been set in motion. The Microsoft staff on the Board of Advisors include many people who are friendly to open source and who are personal friends of many people in the open source community.

And there are a few key names that should likely be on that list. Ayende Rahien and Jeremy Miller immediately come to mind - people who have brought a number of influential open source projects to life, and helped bring those projects into enterprises and ISVs around the world. But this thing is just getting started. It's a "soft launch", as one of the members of the Board of Advisors put it.

So why would Microsoft muddy the namespace by naming this fledgling foundation after its CodePlex program? Sure, they're both about open source, and sure there's a healthy dose of Microsoft in the mix, but why not make the effort to disambiguate right off the bat?

John Peterson, a veteran software developer, former Microsoft MVP award winner, and attorney, has been helping me to understand the CodePlex Foundation agreements, and I like his take on the CodePlex naming, and why it's a good idea.

Microsoft has donated one million dollars to the foundation to get it up and running. It is accountable to its stockholders for what it does with its cash. And while one million dollars might not seem to be much compared to Microsoft's bank balance, it's not exactly a trivial amount when it comes to charitable contributions to what appears to be a fairly radical cause.

The perpetuation of the CodePlex brand is just a good investment, and it likely helps this kind of move go down easier with folks in Microsoft's stockholder community who might not entirely understand what this open source kerfuffle is all about. Microsoft is making this thing happen. It's reasonable that it gets to pick the name, and to use a name that highlights other aspects of its open source efforts.

The Windup

I'm not usually the first person to extend unquestioning trust to Microsoft, and I started my day yesterday with the inevitable distrust and backlash, but I think there's something much more significant here that deserves more than just the usual distrust, and might likely be better served with an unusual trust.

Imagine for a moment if this effort had happened five years ago. Think of all of the often-frustrating pieces of the Microsoft stack that we've had to deal with - software influenced by open source, but missing the target because of Microsoft's policies on staff exposure to intellectual property risks - real or imagined.

Think of all the tools and all the libraries that have shipped from Microsoft over the past five years where your response was a despondent, "Oh no, not again." Part of the reason why we have to contend with these tools is that Microsoft, up until now, has not been able to truly learn from the many mature systems in the open source world that it is called to address with offerings of its own - offerings that carry Microsoft's implicit protection from intellectual property risk.

Microsoft isn't going to simply change its well-worn habits on a dime, but we're at a moment where turning in the right direction will set in motion the chain of events that will ultimately change Microsoft's culture in regard to its ability to see, to leverage, and to participate in some of the incredible work that is being done outside of Microsoft's walls.

The Pitch

I'm hoping that the influential open source folks in the .NET community will consider the CodePlex Foundation's invitation, as odd as it may seem, and consider the possibilities of a future where the .NET community at large has the same common sense perspective on open source as the Java community, the Ruby community, and all of the other communities whose no-nonsense perspective on open source we often covet.

Sure there are lots of details to be ironed out with the CodePlex Foundation program, and the next few weeks will be telling in that regard, but it's with the participation of the open source community that the changes that we've been talking about for years can finally get underway.

I don't own an open source project that I've invested years of my life into, but I can guess what it must feel like to have it suggested that ownership of such a project be handed over to a foundation, and a foundation with very close ties to Microsoft at that. The open source community is being asked to meet the resistance half-way, and to hammer out a program that history will recognize as the turning point for Microsoft, its customers, and almost every aspect of Microsoft community, culture, and product development.

If you're an open source project owner, think of the possibilities of having your framework or your product begin to reshape the expectations for craftsmanship of Microsoft staff and the greater Microsoft community at large. No one is in a position to require an open source leader to assign their copyright to an intermediary, but the first few influential open source leaders who do meet the CodePlex Foundation halfway will set in motion the kinds of pervasive, positive changes that will change all of our lives and careers for the better.

It's a relationship that starts with one hell of a compromise, but it could be the beginning of a beautiful relationship. Possibly even a historic one.

Ampersand GT

Working with software developers and organizations to help realize the potential of software product development through higher productivity, higher quality, and improved customer experience

Learn more about my work and how I can help you at

Sunday, July 19, 2009

Do Agilists Understand Lean?

One of my previous bosses, Steven "Doc" List, used to tell me that I'm at my best when I'm teaching. Sometimes, Doc's compliment would strike a raw nerve. Programmers on our team were entitled to choose whether or not to learn. This wasn't an explicit policy, but was ingrained in our culture.

We valued learned people, but learning and teaching were not part of our culture or our organizational mechanics. We valued ready-made learning when it walked through the door, but our organizational learning didn't go further than the Agile Retrospectives materials and practices that were fashionable at the time. That's no fault of Agile Retrospectives per se. It's a fault of turning it into something fashionable, and inevitably conferring on it the unconscious orthodoxy that was steadily growing in Agile methodology culture, often obstructing the line of sight to the need for a higher order of teaching, learning, and management culture.

Unless team members understand that there is a requirement to be students in an organization, and to study under a teacher, pride and prejudice will likely obstruct the acceptance of a formal student/teacher relationship, and attempts at teaching will very likely devolve into the predictable butting of alpha geek heads over design and process ideas. And this portends obstructions to meaningful and methodical continuous improvement driven by program goals, and a rise of wild, uncontrolled experimentation.

In his book "Managing to Learn: Using the A3 Management Process to Solve Problems, Gain Agreement, Mentor, and Lead", John Shook tells the story of a manager whose supervisor once told him, "If the learner hasn't learned, then the teacher hasn't taught".

I used to tell Doc that it is painfully frustrating to teach knowing that our staff understood that they were not required either by culture or by supporting policy to be in the role of learner - especially when program and project expectations were not being met.

It's true that if the learner hasn't learned, then the teacher hasn't taught. It's also true that if the learner doesn't show up, then the teaching doesn't even begin.

Leaders at successful Lean organizations have pointed out that companies who have failed to duplicate Lean successes often do so by trying to adopt Lean as a process improvement effort rather than an effort to create a learning organization. Despite all of the interesting and beneficial mechanical aspects of Lean, Lean is about creating learning organizations. Student/teacher relationships and protocols are a part of learning organizations, with each role having the responsibilities of that role.

There has been a bit of a dust-up in the agile community lately as to whether Kanban work management is necessarily non-agile or anti-agile. The premise being that Kanban can foster environments of directed work, limiting workers' ability to self-organize, and fostering a disrespectful environment for workers as compared to forms of work management and organization common to agile methodologies.

The essential issue here is the issue of respect, but I find that the Agile perspective is willing to take advantage of incomplete and opportunistic definitions of respect. Kanban proponents have rightly pointed out that respect for people is an explicit Lean principle, and that self-organized work is still happening within Kanban workflows.

I see two perspectives talking past each other, and unfortunately, I see Kanban practitioners being backed into a corner, retreating, apologizing for Kanban, and softening the message to make it more palatable to the dominant methodology culture. Ironically, this was the exact position that Agile was in at the start of the decade, when Agile was struggling to make headway against prejudice and misrepresentation by the preceding traditional methodology culture.

Agilists are concerned about returning to the bad old days when disconnected managers directed work from outside the context of doing that work. It's a serious issue that deserves serious concern. It's a serious enough issue that it demands rigor on the part of the mainstream Agile community to engage the effort to understand Lean more deeply than the often cursory glances and biases projected at Lean and Kanban.

I sympathize entirely with the concern of returning to the bad old days of pre-Agile bureaucracy, but I'm equally concerned about the same tendency for Agile bureaucracy to occlude the meaning of Lean and Kanban.

My study of Agile began in 2000 when a mentor from Bell SIGMA turned me on to XP. My day-to-day immersion began in early 2001. I don't want to return to the bad old days either, but I don't want to go forward into a revised, 21st-century kind of bad old days issuing from the same mechanisms of bias and presumption that Agile itself faced, and often continues to face.

I wonder if agilists at large, in the spirit of inspection and adaptation, are taking the time to understand Lean and the organizational and cultural context where Kanban thrives. With Agile, we asked organizations and cultures to consider change. I wonder now if Agile is able to respond to the same challenge.

On the surface, the following statement should rile a mainstream agilist. At least, it certainly used to rile me. It riled me enough not to act upon it even when my instincts told me that it might make a world of difference between imminent failure and rescuing a project, its team, and a considerable investment.

It's perfectly acceptable for a manager to direct the team from a position of traditional, hierarchical, directive authority.

I'm taking egregious advantage here in setting this stage. I'm purposefully leaving out the implicit context inherent in Lean. If I looked at the preceding statement through Agile's lens, I might very likely be worried about it. Looking at the statement through Lean's lens, I'm perfectly comfortable with it.

If we don't look at Kanban through Lean's lens, we're committing the anthropological cardinal sin in failing to realize that we're projecting cultural bias on what we're observing. We're even failing to recognize that Lean may have culture and behavior that is based on different assumptions and biases.

Kanban isn't a return to the bad old days of disconnected, directive authority, because the position of management in situ in a Lean organization isn't the same position of management that Agile efforts are commonly called to contend with, nor the behavior of management that many of Agile's protocols are shaped to deal with.

A manager of a Lean software development team isn't a remote figure who is no longer in the game. The manager is on the team, and he's one of the most competent technologists on the team. The manager in a Lean organization is also a teacher.

A team that includes a manager with directive authority is still self-organizing. The manager is internal to the team. A manager's expectations aren't disconnected from the reality of the work. And when those expectations aren't being met, he can choose to use directive authority to guide the team to counter-measures through teaching. It's also the manager's duty to help people on the team to develop critical thinking skills and instincts that serve problem recognition and resolution in support of the goal.

The expectation for team members to fulfill their duty as students is part of the manager's directive authority. Refusal to engage in the protocols of the learning organization is deeply disrespectful to the organization of people as a whole, and to the manager as a person.

Looking at Lean through Agile's lens is perfectly reasonable. It gives a comparative perspective that can help us understand differences and find meaning. But ultimately, Lean should be seen through Lean's lens and should be assessed from the perspective of its native context.

There is a greater issue of respect and disrespect that is inherent in Lean as seen through the Lean lens. Respect is a two-way street. There is the respect for workers and the respect for managers, or teachers. The mutuality of respect is what makes respect possible. When respect ceases to be mutual, it ceases to be sustainable, and will soon disintegrate.

One of the most intractable issues we have in software development cultures is the lack of line management that remains technically competent. In fact, line managers in software development are often people who choose to escape to management when they discover that they don't like making software.

We're presently dealing with the effects of several generations of software managers who don't really have much of an idea of what software development work is in detail, which means that these managers can't be effective teachers. Workers don't end up with the teaching that makes them effective and makes the work rewarding. The cycle perpetuates itself.

Agile has been a powerful palliative in dealing with this organizational and cultural snag. By putting a firewall between the deleterious effects of directive authority that is too far from the work and the work itself, agile succeeds in restarting the failing heartbeat of getting software made.

In the software industry, Lean is seen as a kind of specialization of Agile, and that's a unique thing for Lean in industry in general. It's also possibly a disadvantage.

Agile is increasingly encumbered by its own presumptions of organization and culture. Agile's biases are slowly fading into background consciousness, becoming unconscious. As Lean is inevitably and unconsciously seen through the lens of colloquial Agile, many of the organizational assumptions and biases of Agile are projected onto Lean without even realizing that these biases are there.

Lean in its essence is a path to critical thinking, but not a solo path. It's a directed path. A Lean organization is a learning organization. It has teachers, students, curricula, and protocols all focused on meeting the expectations that support the organization's holistic goals for productivity and producing.

Lean is a means to find the unasked question. There appears to be an unasked question in the Agile perspective of Kanban and respect: Do we expect that a Lean organization is the same as an Agile organization?

This isn't meant to be a condemnation of Agile, but it is meant to point out that if Agile isn't careful, it will become the same kind of problem that it sought to solve.

On my project with Doc, I was valued as a teacher. I was the person who got executive support for the project. I shared product design responsibilities with our product owner. I would go to bat for the team when hard decisions needed to be advocated to the executive. I did the technical screening of candidates and made my recommendations to the team about hiring. And I was responsible for setting program goals and expectations for technical implementation.

In the end, I parted ways with the team when it became clear to me that the team had become intractably self-determining, which is a potential risk in self-organizing teams when technical competence and directive authority are not invested in the same person. Ultimately, the team failed, a non-trivial percentage of our small company revenue invested in the project was written off, and the entire team was dismissed.

This isn't a typical Agile scenario, but it is very much a possibility faced by teams in situations similar to ours. The failure likely points less to an implicit weakness in Agile and more to an explicit strength in Lean: respect is indeed a cornerstone of success, but if respect isn't holistic, it risks introducing the opportunism that can see to the failure of otherwise meaningful software development efforts.


Friday, July 17, 2009

Lean Reading List

I read Mary and Tom Poppendieck's first book on Lean Software Development, "Lean Software Development: An Agile Toolkit" in 2005. It went in one eye and out the other.

I was the Software Development track chair for Austin's InnoTech conference in November of that year. Mary was gracious enough to accept our invitation to come to Austin to keynote the track and to moderate a panel.

I listened to Mary's talk and got a few more clarifying tidbits from it, but mostly, I dismissed it at the time as some form of agile sideshow that didn't quite measure up to the specifics that XP brought to the table. A number of us went out for burgers with Mary after the conference and continued the conversation, but I don't think anyone in the group was fundamentally moved by Lean.

I wasn't equipped with the experience to see Lean for what it was. Later, I realized the deep hubris that encumbered my thinking and the vanity that would lead me to expect that I could intuit a subject as vast as Lean from a single book, the way I could with XP and Scrum.

I suffered a serious set-back in 2007. My career's pinnacle dream project was flirting with disaster.

The company I worked for faced an intractable intellectual property constraint that limited our product's market to a small fraction of the whole. In November of 2006, I sold the company on an ambitious plan to solve the problem by building our own platform that we would have unlimited rights to sell. We started exploratory work and envisioning in December. The undertaking had significant executive sponsorship. When our board of directors voted to not fund the project, the CEO bought out the board and gave us the green light. We started official work in earnest in January.

In November of 2007, I sounded the failure warning alarm to my management, and I kept sounding it. In January of 2008, with the situation becoming increasingly intractable, I parted ways with the project and the company. Three months later, after having given the team a chance to pull itself together, the project was canceled and the team was dismissed from the company - from the most junior technologist, up to the executive overseeing the project.

I was frustrated by having felt handcuffed by Agile development orthodoxies that no longer fit the problems of the team, and yet were followed mechanically, and reinforced by management that was captivated by its first exposure to Agile.

I told some of the details of my experiences to a friend. We talked about Agile, in many variants, including Lean Software Development. My friend recommended that I read about the Theory of Constraints and recommended "The Goal: A Process of Ongoing Improvement" by Eliyahu Goldratt and Jeff Cox. In reading The Goal, I recognized the detailed mechanical process that I faced in the failed project. At that time, I hadn't connected what I had begun to learn in The Goal with what I knew about Agile Development and the gaps in Agile that I had begun to see after seven years of immersion.

Following The Goal, I wanted to read more into some of Lean's roots. I had largely avoided the Toyota literature up till that point, and I wasn't convinced that I would get much from The Toyota Production System. I knew a number of people in my community who had read The Machine that Changed the World and The Toyota Production System, but I never really got the sense that the reading had connected them with the transformative learning that I had experienced in Agile development.

Before committing to throwing myself into the Toyota literature, I wanted a sample of what I might be getting into. I started with an article. I read "Decoding the DNA of the Toyota Production System" by Steven J. Spear and H. Kent Bowen published in the Harvard Business Review.

In this article, the authors talk about the things that companies seemed to miss when trying to duplicate Toyota's successes using Toyota's methods. I'm a sucker for stories of unasked questions. The rest of my reading about Toyota and Lean in general would be an exploration of the unasked question. The authors' message: Toyota's fundamental nature as a learning organization is often overlooked, with undue attention paid to Toyota's more obvious practices and mechanics.

And this is what drew me into the Toyota literature. Here I saw the parallels to my failed organization, and to my own nature as a learner, a seeker, a pathfinder, and a teacher. I recognized many of the intractable problems that I had observed in the behavior of my failed organization.

On my team, it had become an unspoken entitlement that people were not required to take direction. This was largely exacerbated by a management approach that was fixated on social experimentation and that was not capable of guiding the technical execution or product design imperatives of the project. It was an Agile methods laboratory that produced no real acceptable features for a year.

I learned in Spear's and Bowen's article that the organizational structure, mechanics, and protocols that I felt would benefit the team were the basis of Toyota's organization and culture as I was coming to understand them.

The Toyota Way by Jeffrey Liker was my first read into the Toyota literature. I chose this book to specifically continue learning about the Toyota DNA rather than dig into Toyota's specific process mechanics. Understanding that focusing on the process mechanics led to common problems in learning and adopting Toyota's methods, I wanted to hold off on the obvious aspects longer.

I read The Toyota Way with the observations of the Harvard Business Review article fresh in mind, and with the mind-opening lessons about work management and problem solving from The Goal providing a backdrop.

Cautious of becoming yet another Toyota disciple, I took a turn away from Toyota-specific literature and back toward Lean in general and read "Lean Thinking: Banish Waste and Create Wealth in Your Corporation" by James P. Womack and Daniel T. Jones. This book tells the story of the companies and people taught by Womack and Jones as they traveled around the world after writing The Machine that Changed the World. It reinforced what I had learned about learning culture and continuous improvement, as well as organizational structure and process, in the books that preceded it.

While this reading and studying was happening, I was also tweeting about my experiences and studies, and having numerous long conversations on the phone with some of my peers who had themselves started studying the same material. I also revisited my failed project with the previous product owner, who was still with the company, and still succeeding in his own work.

If I had read this material in a vacuum, without any of the constant interaction with my professional network, and the continual revisitation of previous failure, I believe it would have been a much less informative and transformative experience. My circle of friends and network of colleagues continues to inform my learning, and I expect that it will continue to do so.

I turned the reading back to software, choosing to read Mary and Tom Poppendieck's second book, "Implementing Lean Software Development: From Concept to Cash". Reading a Lean software book from my vantage at that time was a very different experience than reading the first Lean software book had been. The subject had now come to life. It was tangible, and it was deeper than what I had presumed previously. From there, my perspective on Lean Software Development took on meaning beyond my perspective on XP and Agile culture and mechanics.

I had organized the ALT.NET Open Space Conference the previous year, and hadn't wanted to simply fall into the trap of trying to duplicate an original moment. I wanted another theme for the second annual conference. Over the course of many of those conversations with friends and colleagues, we talked about the essential force of the ALT.NET movement as something akin to Lean's Continuous Improvement. The theme of the second conference became Continuous Improvement, with hopes that we would come to understand it more, and maybe understand better whether ALT.NET is indeed a Continuous Improvement culture.

I subsequently read another Toyota book, "Extreme Toyota: Radical Contradictions That Drive Success at the World's Best Manufacturer" by Emi Osono, Norihiko Shimizu, and Hirotaka Takeuchi. This book takes on seeming contradictions in Toyota's culture and organization, such as the simultaneity of a flat organization and a rigid hierarchical organization, and the imperatives of a learning culture and continuous improvement that unify the two. The book also speaks about the climate of contradiction that Toyota uses to stimulate creativity and problem solving.

Tom and Mary accepted our invitation to come to the Continuous Improvement Conference in Austin and share their experience with learning organizations, software development, product development, scientific method, and leadership. I can't imagine that Mary and Tom got as much out of the experience as we did, but they had a lasting impact on our community.

Mary and Tom often say that Lean Software Development is informed more by the Toyota Product Development System than the Toyota Production System. Dave Laribee got to the Toyota Product Development book before I did and warned me that it was quite dry but worth reading, and it was. "The Toyota Product Development System: Integrating People, Process and Technology" by James M. Morgan and Jeffrey K. Liker spoke a great deal more about requirements, design, planning for work, and creating workspaces, as well as the risk-mitigation processes that Toyota uses. This book also goes into greater detail about Toyota's people and roles, and about fostering "towering technical competence".

The TPDS book also talks about Lean leadership and the Chief Engineer role. It further reinforces the notion of managers and group leaders as having great technical competence, often knowing the work of their staff better than they do, and being paramountly responsible for teaching their staff through the scientific method and Toyota's A3 report technique.

Dave read David Anderson's book, "Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results" while I was reading the TPDS book, and I followed the TPDS book with David Anderson's book. I wish I had read this book years ago, but again my biases obstructed my perception of the value I would get from it. In this book, David Anderson ties the Theory of Constraints directly to software development and translates pull systems and throughput accounting to the work.

Dave turned me on to Cory Ladas' writing on the Lean Software Engineering blog. I read a few articles, and then went back to the start of the blog and read forward. Much of that writing has been compiled into his book "Scrumban - Essays on Kanban Systems for Lean Software Development". Cory’s writing goes quite deep into pull systems and Kanban for software development. This material and David Anderson’s book offer a profound exploration of applying the mechanical aspects of Lean.

I put into practice what I learned both from my own experience and from studying and study groups on a project with a distributed team in Austin starting in August 2008. My instinct at first was to overlay their existing organization and process with Scrum. Instead, I looked for signs of what was keeping the organization back, established some basic measurements, and did my best to teach what I knew about how to solve these problems. It wasn't trivial work, but it reaffirmed my experience and study of both the cultural and behavioral aspects of Lean, as well as the mechanics, such as Kanban. The experience also continued to reaffirm XP practices as the tactical foundation of both Lean and Agile strategies.

While in Austin, Mary and Tom mentioned the book "Managing to Learn: Using the A3 Management Process to Solve Problems, Gain Agreement, Mentor, and Lead" by John Shook, which tells the story of the techniques and processes used in a learning culture and learning organization, and goes much deeper into the teacher/student relationship and responsibilities that are the foundation of Lean.

The book that really brought Lean Software Development together for me wasn't a book that is necessarily about software development, but about the kind of work that software development is: Product Development.

I'd heard Mary and Tom say for years that Lean Software Development is a closer kin to Lean Product Development than Lean Production, or in reference to Toyota, Lean Software Development is informed more by the Toyota Product Development System (TPDS) than the Toyota Production System (TPS).

In Don Reinertsen's "The Principles of Product Development Flow: Second Generation Lean Product Development", many of the instincts, intuitions, and insights that had been guiding my own work were given a voice, and augmented with Don's experience and perspective. This is a book that is as much a watershed moment in my career as Kent Beck's "Extreme Programming Explained: Embrace Change". It's a book that stitches together many disconnected pieces of learning from a number of years of experience and observation, and then builds a new level on top of this reinforced foundation.

I think that this book deserves to be read much earlier in the list than it appears in the chronology of my own studies. Even if you don't get past the half-way point, it's a clear elucidation of a persistent problem in software development management: the failure to manage software development from a product development perspective rather than a manufacturing perspective, and the failure of software managers to recognize that they don't know the difference.

I've had Atul Gawande's "The Checklist Manifesto: How to Get Things Right" on my desk for the past nine months. I haven't opened it yet. It's been on my mind.

On a recent visit to the Toyota plant in San Antonio, I met Mark Graban, who was also there for the tour. I asked him whether he had read the book, and tried to get a feel for where I might put it in my reading priority. I told Mark that I had a feeling it might give me some insight into mistake-proofing, and into enriching the use and usability of my Lean work management app, Floverse.

Mark highly recommended the book, and so it returns to the top of my list. I'll update this article once I've read it.

And there are yet more books that I've been meaning to get to, many of them in the nascent Lean Startup field, where Lean is applied as a methodology to entrepreneurship and product and business startup. These books will likely get a mention in this article at some point along the way as well.


I wrote this article because people in my network asked me for some recommendations on books on Lean. The more I thought about it, the more I thought that listing some titles and links might not be entirely responsible.

The books that I read and the order in which I read them are inseparable from the context in which I read them. This isn't a canonical list and it shouldn't be treated that way. There are many, many more resources available.

I've benefited tremendously from the choices I've made for study and for engaging community to enliven that study. I wholeheartedly recommend the books I've talked about here, and I would even recommend going through them in the order that I read them. In retrospect, the reading order worked well as an evolutionary thread and helped me reinforce a deeper understanding of subtler aspects of Lean while continuing to layer on ever broader knowledge.

Nonetheless, your mileage may vary, and if your experiences are quite different from those I've laid out here, then you might disregard my own learning adventure and concoct your own.

Either way, foster a community to learn with, and sit at the feet of as many masters as will tolerate your presence. Anything you learn from a book is just material until you light it up with experience (or reflection) and turn it into knowledge. Learning in isolation rarely has the yield of learning enlivened by experience and community. That's not always the case, but if you have a tendency to hide in a cave, understand that much of Lean is a social practice.

I still haven't read The Machine that Changed the World or The Toyota Production System. I may read them at some point, but my goal isn't to consume every Toyota book that I can find. My goal is to synthesize as much understanding as I can, and the past two years have been very rewarding in this regard.

I recently founded the Lean Software Austin group, and I'm looking forward to continuing the study and the work in Lean principles and software development as this community grows as a learning organization itself. The story doesn't end here, but the narrative reading list does (for now).

For your convenience, here is an actual list of the reading I referenced in this article:

Ampersand GT

Working with software developers and organizations to help realize the potential of software product development through higher productivity, higher quality, and improved customer experience

Learn more about my work and how I can help you at

Sunday, July 12, 2009

The Myth of Developer Productivity

There are a couple of predictable approaches to an imminent car crash. You could throw your hands in the air, scream, and hope for the best, or you could keep your hands right where they are and try to pilot your way through it to the last responsible moment. If you're the pilot of a software team, offloading the responsibility for productivity is like taking your hands off the wheel before you've left your driveway.

The quickest way to shut the door to productivity is to try to solve it exclusively as a tools and automation problem with tools that promise "Developer Productivity". Tools and automation are essential parts of getting software done, but unless you have a firm grip on why you have productivity problems, taking someone else's word for why a tool or automation will solve your problems amounts to little more than a shot in the dark. And since my word is inevitably "someone else's word", please look more deeply into the issue beyond this article.

Productivity that Matters

There's productivity that matters and productivity that doesn't matter. That probably seems nonsensical - after all, if you could gain productivity, wouldn't it matter? It depends on whether the productivity you gain causes a commensurate obstruction at some other point in your software development bucket brigade.

One heck of a lot of tools that are sold specifically as "Developer Productivity" tools create productivity increases for the developers in your development pipeline, and kill productivity for downstream work centers, like testing, packaging, shipping, installation, configuration, and operations.

Focusing on developer productivity without considering the effects of developer productivity efforts on the rest of the pipeline will usually create only momentary productivity: productivity that only developers will feel, and often only for a short, unsustainable time.
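To make the point concrete, here's a minimal sketch; the stage names and rates are invented for illustration, not measurements from any real team. End-to-end throughput is capped by the slowest work center, so speeding up a stage that isn't the constraint delivers nothing extra.

```python
# Illustrative sketch with invented numbers: a pipeline's delivered
# throughput is limited by its slowest work center.
def throughput(stage_rates):
    """Features per week the whole pipeline can actually deliver."""
    return min(stage_rates.values())

pipeline = {"develop": 10, "test": 4, "package": 6}
print(throughput(pipeline))  # 4: testing is the constraint

# A "developer productivity" win that doubles development speed...
faster_dev = dict(pipeline, develop=20)
print(throughput(faster_dev))  # still 4: no more product reaches anyone
```

The numbers are toys, but the shape of the problem isn't: a developer-only improvement moves the `develop` rate, not the minimum.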

Software teams and organizations continue to fail to realize sustainable productivity by continuing to make improvements in one area of the pipeline without realizing the cause-and-effect relationship between the localized improvements and resulting degradations in other parts of the pipeline.

Sustainable productivity is the only productivity that matters, and it's the only productivity that can withstand ever more continuous improvements.

Productivity is about product. It's the activity quality of product, or producing, or production - the "ivity" of product. It's about producing and the production of the only thing that really matters - the final product that can be used to achieve the business's goals.

If the goal of software development is just to create developer artifacts without ever delivering them, then developer productivity itself in isolation would matter a whole lot more. The problem with "developer productivity" is that it is inherently productivity in isolation. Productivity in isolation is often naive local optima.

Local Optima

Local optima problems happen all the time in software organizations. They happen when we try to make improvements in one area of the whole software development workflow without understanding the effects on other areas of the workflow.

Developer productivity as we know it colloquially is inherently a local optima concern. My first concern is the productivity of the entire fire brigade. The moment that there's some obstruction anywhere in the whole workflow, the problem will spread outward and poison adjacent work centers, rippling outward, and sometimes making quantum leaps into parts of the pipeline that are, on the surface, seemingly disconnected.

It's not enough to just understand that a change in one area of a team's workflow will have effects on another area; it's also critical to understand which kinds of effects will create obstructions for the whole effort.

Greater productivity often comes less from increasing the speed at which we can do the work itself, and more from recognizing and decreasing the obstructions to production and producing. One of the most obvious ways to attack the productivity problem is to reduce rework and the things that lead to it.

Reducing rework means undertaking organizational and cultural changes. It means that the making of software and the proving that it's right can't be allowed to work at dramatically different paces, which is usually the case with most software teams (even teams using Agile methods).

Every time testers get backed up in their work, untested software piles up in front of testing. That pile of untested software is the unproven foundation that developers will continue to build upon whether or not it's sound. As the pile grows, the likelihood of rework grows geometrically with the size of the pile, and the near certainty that the structural design is insufficiently precise grows along with it.

When the making of software and the testing are disconnected from each other, and software makers and software testers work at full speed regardless of whether they're building up piles of risky inventory, then productivity is going to degrade.

In this kind of situation, doing things that increase programmers' speed isn't going to help productivity at all. In fact, the right thing to do is to either slow the programmers down until the testing inventory is cleared, or to have the developers change hats and work with testing to clear the obstruction. Increasing a developer's speed will only exacerbate the problem.
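A small simulation can illustrate the dynamic; the rates here are invented for the sketch. When developers outpace testers with no limit on untested work, the pile grows every week; capping the pile (by slowing developers to testing's capacity) keeps it small.

```python
# Hedged sketch, invented numbers: developers finish dev_rate features/week,
# testers clear test_rate/week. With no cap on untested work, the pile in
# front of testing grows without bound.
def untested_pile(weeks, dev_rate, test_rate, pile_cap=None):
    pile = 0
    for _ in range(weeks):
        if pile_cap is None:
            done = dev_rate  # developers run at full speed regardless
        else:
            # developers slow down (or swarm on testing) once the cap is hit
            done = min(dev_rate, max(0, pile_cap - pile))
        pile += done                   # new untested work arrives
        pile -= min(pile, test_rate)   # testers clear what they can
    return pile

print(untested_pile(10, dev_rate=5, test_rate=3))              # 20, and climbing
print(untested_pile(10, dev_rate=5, test_rate=3, pile_cap=4))  # 1: stays small
```

In the uncapped run the team "produced" fifty features' worth of code, but forty percent of it sits unproven; the capped run delivers at the same real rate with almost nothing at risk.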

Optimized developer productivity without simultaneously optimizing the entire pipeline is local optima.

Developer Productivity Myths

There are a number of tools and libraries that are sold under the "Developer Productivity" banner. These tools actually deliver developer efficiency rather than developer productivity.

Unless a tool's productivity proposition takes local optima into account, it flirts with negligence. If it does so knowingly and willfully, then it flirts with corruption; verily stealing value from its users and customers.

Consider Microsoft's re-interpretations of "Rapid Application Development" that drive its developer tools design:

Microsoft's Visual Studio enables developers to create input forms and data processing visually with little expense of time and effort. Developers get this work done very quickly, but the resulting system's features are inordinately difficult to test. Microsoft also packages tools that are specifically geared for "testers" in a separate product package, perpetuating the conclusion that doing work and proving that the work is right are the jobs of different people, creating the handoff boundaries that invite unproven work to collect as piles of costly inventory.

While this allows developers to be very efficient in the work of creating the code and artifacts that go into a feature, that efficiency isn't realized as productivity. The software designs created by developer efficiency tools are unnecessarily and excessively difficult to test. This discourages developers from preventing the defects that pile up in front of the testing work center, which also steals productivity from testers, exacerbating the problem of basing today's work on last week's unproven decisions.

Software organizations that are seeking and realizing higher productivity - that is, sustainably and promptly producing end product for the business - come to understand that the inventories built up around handoffs and segregated work centers must be decreased and ultimately eliminated in order to really reclaim lost productivity. This has a profound effect on the shape of software teams and organizations, and in retrospect we come to see that our beliefs about organizational mechanics and culture were what kept us trapped in the superstitious dark ages of software development productivity.

Regardless of whether tools like Microsoft Visual Studio (among others) seem impressive on the surface, failure to assess the effects of these tools on an entire pipeline will inevitably lead to mere momentary efficiencies and local optima. Worse yet, they will obstruct the learning that does in fact lead to productivity that matters, and will pull managers away from valuable work like teaching and facilitating and force them to become inventory managers and expediters.

It's not just the younger teams using Microsoft's visual tools who are subject to this problem either. Even advanced "agile" teams indulge in developer efficiency local optima like the hypercoding enabled by a higher order of developer tools. In some cases, hypercoding has even motivated urgent and wide-ranging changes to frameworks to optimize for a tool's use, even in the face of compelling evidence that more meaningful sources of productivity are likely elsewhere. The compelling productivity of Ruby on Rails programmers despite the unavailability of extensive hypercoding tooling is a good example.

There's no one true answer to whether a tool is going to contribute to productivity. Sometimes just the creature comforts afforded by a tool are significant sources of productivity. But then, not all indulgences in creature comforts can be considered productivity enhancers either.

Ultimately, any of these tools can be used in a balanced, leveled software development workflow. However, until a meaningful understanding and representation of productivity takes root in software development, any team at any level of maturity is going to trade productivity that matters for mere efficiency.

Fixing the Problem

We can affect the things we control. When software projects go awry, we exert control. Any time we exert control without considering the impact on the entire fire brigade of the software development workflow, we're going to create efficiencies at the expense of our ability to produce.

The further away you are from the whole of the team and the workflow, the less likely you are to exert control constructively. As a senior manager, you might prescribe a suite of "Developer Productivity" tools after seeing a compelling presentation from a vendor that is specifically geared to affect your sensibilities from your perspective. If you're a developer, you might convince your team to adopt a "Developer Productivity" tool that demonstrably makes you a more efficient coding machine.

The decisions made by people who are too far from the whole are often a coin toss. There's little telling whether a team will perform better in getting products into the hands of the people who need them, and it’s often difficult to connect the decisions with the ultimate outcomes.

If you're encouraging or enforcing a team organization that disconnects the work and workers from the validation of their work, whether analysis work, design work, construction work, construction inspection work, packaging work, installation work, or operations work, you will inevitably create the conditions that encourage inventory build-up and the subsequent obstructions, rework, and general degradation of productivity. The shallow pursuit of mere localized efficiencies is more likely to happen when work centers fail to be shaped to productivity goals.

Making any of a number of mistakes that trade local efficiencies for productivity not only degrades productivity, but creates a cycle where the degradation accumulates, leading to the typical software cost curve that is ultimately a reflection of the degrading productivity curve.

Fixing the problem isn't trivial because no single local optimization will have a predictable effect, and a set of disconnected local optimizations degrade productivity even faster and even more unpredictably.

There's no doubt that we need to act on all levels, and ultimately this means decomposing the problem and working on different levels of an organization and at different work centers. But the local things we do to fix the problem have to extend from a holistic understanding of the software development system.

If you want to fix your software development system, find the problems with the system and then understand how the parts contribute to the problem. Evaluate the success of each effort to fix the parts by the effect that it has on the system.

Developer productivity can just as easily be a reality as a myth. Developers are obviously significant contributors to producing software. But an approach to "developer productivity" that isn't also an approach to organizational productivity is unlikely to do more than transfer value from your organization's treasury to the coffers of a vendor who is more than happy to assume ownership of your precious resources.

We can’t get software done without software tools – this is true - but choose wisely. Your whole team’s productivity is at stake.


Wednesday, July 08, 2009

Relearning: The Productivity Problem that We're Not Supposed To Talk About

Imagine that you had no memory; that everything you learned had to be re-learned again and again as you did your work. If you worked in software development, you wouldn't have to stretch too far to imagine it. Re-learning is so much a part of the moment by moment work of software development that it's considered normal. In fact, it's the unrecognized backdrop that software development plays out in front of.

Because relearning is not recognized as a problem in software development, it's almost never talked about. And frankly, it's not a welcome subject in polite programmer society. Nonetheless, relearning accounts for a lot of unnecessary lost productivity and waste in software development. It can account for wide swings in productivity loss on most software teams, affecting the manageability of software projects and a truthful, meaningful assessment of programmers' real abilities as designers, analysts, problem solvers, and real contributors.

I'll offer an observation from my own experience and from the anecdotal evidence of others in my professional circle: I would be comfortable saying that half of the lost productivity on software teams comes from re-learning. It can come in the form of poor testability - the inability to easily provide proofs that expectations are met - and it can come in the form of source code that can't be understood at a glance. Even teams who have solved the testability problem still largely suffer from the lack of usability of their code, and the inevitable mass of relearning that comes of it.

First We Scan, and then We Read

Text in a text editor is interactive media; subject to the same fundamental usability principles that apply to a web page, a desktop app, or even a billboard on the side of the highway.

Before users read the content of interactive media, they scan it. Programmers scan code before they read it. This is the singular human behavior that programmers consistently fail to recognize. It's the fundamental behavior that, when recognized, becomes one of the pillars that code usability efforts can be built upon, and the starting point for recouping losses from relearning.

A singular focus on readability in code is a concern that, while encouraging, still misses the point. It's the same point that programmers missed as a human-computer interaction industry grew from the ashes of our continued failed attempts to create productive user experiences.

Usable code dissolves into understanding at a glance. It doesn't need to be coerced into understanding. I like to call this kind of code "soluble" code for the image it brings to mind of program text readily dissolving into awareness and understanding. Soluble code is likely readable code, but readable code isn't soluble unless it's written to be soluble.

Reading code isn't like reading a good article, where you start at the top and read to the bottom, enjoying the experience and being fulfilled by it. Granted there is beautiful code in the world, but the typical reasons for reading code are not the reasons that we read articles, books, papers, and the like.

In fact, the nature of the media that this article is published in, and the context that this media is typically consumed in, is such that you are more than likely to start jumping around the text with your eyes and with your mouse, looking for the nuggets and pearls while avoiding having to consume the entire thing linearly as it is written.

The first thing we do with code is ascertain whether it is indeed the code that we need to be working with in order to accomplish whatever task we've taken on. This even happens when we're reading code for the pleasure of it. The first eyes-on experience with some code usually involves rapid scanning to determine if we're in the right place; if we're at the worksite. Much of this happens pre-cognitively, as it does with any media that we've been called to act upon. First we scan to get our bearings, and then we read.

We take high-level structural scans, followed by smaller, more detailed scans, followed by reading. If the code that we're scanning is written so that it only yields its meaning from a detailed read, then not only are we failing to optimize for natural human interaction patterns, we're also giving up the incidental knowledge that can be yielded to someone while they're scanning. This incidental knowledge accumulates and becomes a significant part of the mental map of a codebase that a programmer builds through exposure to code that yields its meaning at a glance. We retain a codebase's textual geography only when we absorb its meaning.

If we don't code for solubility, we force programmers to take detailed reads in order to tease knowledge and understanding from the text. Forcing these detailed reads doesn't lead to greater advantage down the road. Less meaning is retained when forcing detailed reading into a context where a programmer is instinctively trying to scan.

Scanning is as much a process of elimination as it is a process of accumulation. Both happen at once during scanning, unless the code isn't amenable to these processes. If we fail to take advantage of scanning by failing to write soluble code, we're stealing productivity from our teammates, and from ourselves. On the surface, this seems like a negligible issue, but lack of solubility is responsible for a tremendous amount of relearning and degraded productivity.

Solubility as a code style permeates a codebase. It's a pervasive quality; it has a constant, pervasive effect. Small improvements add up to significant advances when they have a pervasive effect.

Making Code Soluble

There are no cookie-cutter patterns for making code soluble. Some things are obvious: meaningful symbol names (class names, method names, variable names, etc.), and higher-level methods that tell the story of some process by calling lower-level methods that contain the details. Both of these go a long way toward creating soluble code. Soluble code yields its meaning immediately.
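As a sketch of the idea (the names here are hypothetical, not drawn from any real codebase), a story-telling top-level method lets a scanner grasp what the process does without reading a single implementation detail:

```python
# Hypothetical illustration: the high-level method tells the story of the
# process; the "how" lives in small lower-level methods a reader can drill
# into only when needed.
class OrderProcessor:
    def place_order(self, order):
        self.validate(order)
        self.reserve_inventory(order)
        self.charge_customer(order)
        self.schedule_shipment(order)

    def validate(self, order):
        if not order.get("items"):
            raise ValueError("an order must contain at least one item")

    # Details elided; each method would be equally small and focused.
    def reserve_inventory(self, order): pass
    def charge_customer(self, order): pass
    def schedule_shipment(self, order): pass
```

Scanning place_order yields the what of the process at a glance; reading any one small method yields the how, and only when it's needed.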

The particular refinements a team makes depend on the team, the product being built, the technology, and a host of other conditions. Pushing for a canonical definition of soluble code patterns might cause as much productivity loss as productivity gain. Further practices, beyond meaningful names and anecdotal methods, are contextual and shouldn't be dropped into the code as if they were interchangeable parts.

Soluble code is an experience like the table of contents in a novel. It offers and allows multiple levels of reading, with each deeper level yielding ever greater detail. A table of contents is an affordance to the reader that acts as a navigational aid, or map, of the text. But a novel isn't program code. A table of contents at the beginning of a novel can be sufficient for that kind of medium and its implicit user experiences, whereas program code must be its own table of contents, right down to the very small, five-line, composable methods. The reader of a novel is typically well-served by a table of contents whose resolution goes no further than mapping out chapters, on the expectation that reading starts linearly from the beginning of a chapter. That isn't the case with program code.

When we're scanning, we're mapping the code by the what of the code: what the code does, what its responsibilities and behaviors are. Soluble code allows a reader to immediately understand what it does before forcing the reader to understand how it does it. Soluble code serves both modes: scanning for the what, and reading for the how. Code that isn't styled this way largely deprives a reader of the process of elimination, the incidental knowledge, and the mapping that can be had from scanning.

One of the worst mistakes programmers make is failing to recognize that more effort will be spent over the lifetime of code navigating through it than was spent writing it. It doesn't take much more effort to write soluble code that eliminates relearning. Choosing not to write soluble code means choosing to keep the relearning waste well-entrenched, but there's more to this problem than mere choice.

Resistance to Code Usability

There are reasonable objections to styling code for solubility and usability. The how of the system ends up spread over many small methods rather than concentrated in fewer, larger locations. The root cause of the aggravation is often not that the code is factored into small semantic units, but that the semantic units are not the right ones, or that the factoring is just not good. Yes, this is the you're-not-doing-it-right response, and as unfashionable as this response is, it can nonetheless be the root cause. Still, some programmers are just not going to want to get used to a soluble code style, and there will be inevitable grumblings from people who prefer a more traditional, procedural structure. It's not hard to bring usability to code, but it can be discomforting at first - like transitioning to a new programming language.

Programmers aren't traditionally the people on a software team who have their heads in the usability game. And we've gotten to an unfortunate point in programmer culture where the answer to many problems of programmer ineffectiveness has been to create further specializations, allowing programmers to be responsible for a narrower and narrower set of expectations rather than dealing with knowledge problems as organizational and cultural problems.

Expecting programmers to be considerate of code usability creates friction. For many programmers it's going to be about as comfortable as thawing out frozen, frostbitten fingers. But it's not just a programmer responsibility. A good chunk of the responsibility for change rests squarely with the surrounding and supporting organization and its protocols and mechanics. There's more to rehabilitating software development productivity than introducing programmers to new coding patterns. Organizations with a commitment to learning cultures will do much better at this.

The staunchest resistance to efforts to reclaim productivity lost to relearning will come from hero programmers. Hero programmers are the people in any organization who can get the job done with any code in any state. They are typically blessed with what seems like a supernaturally high-definition mental map of a codebase. This is their best and worst quality. It's their best quality because they often know where to fix a problem in a codebase and have a reasonable grasp of the myriad side effects that might result. It can be their worst quality because it's often an effect of the mild Asperger Syndrome common to programmers, engineers, and jobs that require extended, intense focus, and it's often accompanied by the lack of awareness and empathy toward peers typical of the condition.

Hero programmers can suboptimize the efforts of a team because they don't have to rely on soluble code, often navigating a codebase entirely from memory. The ability can be incredibly useful, but the need to write soluble code rarely registers because it's not a personal need, and the typical lack of empathy obstructs recognition of how this advantage undermines their teammates' efforts to be as effective.

The code created by hero programmers isn't soluble because the heroes rely on uncommon facilities that preclude the need for solubility. They don't notice the design smells because they rely almost exclusively on echolocation for navigation. The resulting work product can often be worked on effectively only by the heroes themselves, which inevitably leads to predictable resource bottlenecks, bus-factor risks, excessive specialization, and a general malaise on the team as the effects of code toxicity spill over into the human realm.

The analogy of the baseball shortstop famous for making great plays applies: his coach often pointed out that he was out of position to begin with.

Usability is rooted in an ability to have empathy for users. That empathy gives designers the pause to consider whether they got the user experience right. It's the empathy that leads to the questioning that leads to the recognition of interaction design problems in the form of cognitive obstacles that undermine the user's productivity. In an environment with traditional and institutionalized lack of awareness and lack of empathy, dysfunction can drive unconscionable waste.

Hero programmers rarely stop to question whether they've left behind a good experience for others on the team who need to navigate, then understand, then make changes to the code. And most programmers will fail to recognize the two distinct mindsets that are in play when working with code.

Writer's Mind and Reader's Mind

Without ever doubting whether code is usable, programmers will presume that the right thing has been done. Programmers who are good at creating soluble code have learned to doubt every line of code written. It's not that they have all of the answers. More importantly, they've learned to be constantly questioning.

In the words of the anthropologist Claude Levi-Strauss (not the guy who invented blue jeans), "The scientific mind does not so much provide the right answers as ask the right questions."

Asking the right questions means constantly switching from the writer's mind to the reader's mind. That means that after each bit of code is written, a programmer switches mental contexts and assesses the code from the perspective of someone who has never seen the code before, asking, "What have I done to undermine the immediacy of someone else's understanding of my work? What unrecognized presumptions have I made about their context as a reader that only applies to my context as a writer?"

That's quite a trick, and it's not uncommon to hear the complaint that it can't be done, but it's what interaction designers do all the time. It's also not uncommon for the deleterious effects of programmer autism to leave programmers without sufficient awareness to break out of the laser-like focus on writing code and switch back into questioning.

Code that doesn't incur the cost of relearning is code that can be immediately understood by someone who hasn't seen it before, with minimal orientation to the code and the problems it solves. It's code that can be understood at a glance. That code is rarely if ever produced by a mind that has lost awareness of its context and its mode. It isn't produced by a mind that fails to concede that code is written to be read, that the readers are other humans, and that the reader's context and needs are not the context of the writer - at least not until the reader has found the worksite in the code and gained sufficient understanding of it to begin making the changes they're tasked with.

The writer's mind is a context that often fails to recognize that the focused and relatively linear mechanics of writing code are quite different from the mechanics of consuming code as a reader. And this is where the misconception over readability comes from. The writer's mind, working relatively linearly, consumes code in that same mode. Readability is a quality that pertains to the linear consumption of code as text, and that kind of optimization of experience is relevant only some of the time, in some user scenarios.

Break the Habit

The autistic mind is lulled by the hypnotic cadence of constantly pumping out code. It will complain that the constant switching between the writer's perspective and the reader's perspective ruins its ability to get in the zone. And in truth, it does, but that particular zone is what causes a lot of relearning debt to mount up. There are other zones to get into, with equally pleasing effects, but it's a matter of breaking some habits and replacing them with new ones. There are a few tricks and techniques that programmers can use to break out of the fog and get into the zone.

The most important thing to practice is the constant self-questioning of whether the code just created is soluble; whether it can be understood at a glance and yields enough meaning while scanning to contribute to the mental map. Instead of presuming that everything I do is made of gold, I presume instead that it's made of fool's gold. From that perspective, I can usually gain the right amount of objectivity to assess solubility.

Pair programming and test-driven development are two techniques from Extreme Programming that are extremely effective at clearing the fog. When these techniques are practiced together, it becomes very difficult to be lulled into the unexamined mind that produces unexamined code.

If you don't like pair programming, try using some kind of timer, set to an interval just short enough to be uncomfortable, to remind you to come back up for some critical thinking.

And lastly (for this article anyway; there are more tricks out there), Context Specification is a form of Test-Driven Development that recognizes solubility, usability, reader's mind, and authorship, and forces the issue of contextual analysis to bring more practice of awareness into programming.
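To make the context/specification shape concrete, here's a rough, framework-free sketch. Real Context Specification tooling differs; every name here is invented for illustration:

```python
# A framework-free sketch of the context/specification shape: establish
# the context, perform the action under observation, then check each
# observation. All names are invented for illustration.

class when_withdrawing_more_than_the_balance:
    def establish_context(self):
        self.balance = 50
        self.requested = 80

    def because(self):
        # The single action this specification observes.
        self.approved = self.requested <= self.balance

    def it_should_refuse_the_withdrawal(self):
        assert self.approved is False


def run_specification(spec_class):
    # Set up the context, exercise it, then check every observation.
    spec = spec_class()
    spec.establish_context()
    spec.because()
    for name in dir(spec):
        if name.startswith("it_should"):
            getattr(spec, name)()
    return True
```

Writing the specification forces the writer into the reader's mind: the context and the expected observation have to be stated before the behavior is exercised.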

Commit to Learning

Ultimately, there's nowhere left to hide. The anti-productivity that comes from inviting and accepting relearning continues to accumulate. We've pushed it into the lowest level of software development, where only programmers can (or may) see it, but it still affects everyone touched by the software project or the product.

Relearning waste is a product of fundamental organizational behavior. Until we shape organizations around dealing with waste and learning, we'll continue to miss the part of the spectrum where the sheer magnitude of the relearning waste is visible. Relearning is inculcated into software organizations. Shifting from relearning organizations to learning organizations and learning cultures is how this problem is ultimately solved.

It's self-evident: counteract relearning with learning. The term learning organization doesn't mean the trite and trivial perspectives that see learning as something external to team and organization - a destination where people are sent once in a while to be "trained". Learning is no more about receiving training than quality assurance is about giving click recorders to test monkeys. The emphasis we put on "training" in the software industry undermines our ability to see the "learning" side of the same issue, to build truly meaningful teaching and learning experiences in our organizations, and to subordinate organizational mechanics and protocols to these imperatives in order to counteract the constant, tireless forces that drag us back into relearning.

The value we throw away on relearning is recouped when we counter it with meaningful learning. Learning gets real when it's expressed in every organizational protocol and business process, and importantly for software development, when it is expressed in every single line of code written by everyone involved in bringing a solution to life.

Friday, July 03, 2009

The Problem with Big Design Up Front is the "Big" not the "Up Front"

The risk of Big Design Up Front isn't the "Up Front" part, it's the "Big" part. Doing too much design without validating it inevitably drives a good bit of the productivity loss that continues to hamper software projects.

It's an issue of large batch sizes - the "Big" in "Big Design Up Front". Larger batch sizes mean that we're always building today's software on yesterday's work and yesterday's decisions before proving that yesterday's work and decisions are sound. The larger the batch size, the larger the risk that incorrect design will lead to ever more expensive countermeasures. And in many cases, design flaws are too subtle to be seen immediately; they collect negative potential energy in the form of ongoing degradation in productivity that comes to be seen as "normal".

The caution against Big Design Up Front has often been interpreted as "No Design Up Front", and has led to a lot of mediocrity in design by inappropriately democratizing the responsibility for design quality. This is often done in the name of cross-training, but the best way to teach potential software designers to be good software designers is to constantly expose them to good software design, rather than the mediocre design that can come from a misinterpretation of Big Design Up Front.

Ampersand GT

Working with software developers and organizations to help realize the potential of software product development through higher productivity, higher quality, and improved customer experience

Learn more about my work and how I can help you at

Designing the Work

More than just designing the software, technical leadership must also design the work of implementing those designs. Taking the shape of the work into consideration along with the shaping of software modules serves the goal of predictability that comes from leveled production, and the awareness of trouble spots. Designing the work serves human and organizational concerns, and predictable manageability.

Decomposing requirements - be they user stories, features, or whatever you use in your process - is inherently a design activity. Division of labor follows the division of software modules, and the division of software modules also follows the division of labor. These design activities have to be done in consideration of each other, likely by the same people and at the same time.

The design should be communicated to the team, and at the very least, to the members of a team who might be tasked with the work, but it isn't necessary to have the whole team involved in creating the design. The best designers should be involved in doing the design work. This should go without saying, but some of the interpretations of "self-organizing team" can contribute to obscuring the obvious: individuals on teams do indeed have strengths, and some are stronger than others.

Communicating design and expectations is a great side effect of all-hands planning and estimation, but it can produce poorer designs, normalized to the average skill on the team, while the team learns how to design software. Becoming a competent software designer takes many years of work, and the uncanny circumstances that create the precursors of design awareness which, when complemented with experience, create the talented and mature abstractionists, analysts, implementers, and engineers who lead design efforts. Planning and estimation do share some common ground with teaching and design, but teaching design and growing designers should be an explicit goal with specific work, rather than a magical side effect.

Decomposition, task breakdown, and scheduling are all aspects of design. When we communicate design to the people doing the implementation, we're setting expectations for their work. When we set expectations for people, we (hopefully) activate their critical minds. They inevitably cross-check a work plan and inevitably cross-check the design as they learn about the plan. That's not exactly the same thing as the design-by-committee exercises that are too frequently the result of consensus estimation design, where decomposition is often done as part of whole team sessions that are concerned with dialing in agreements on story points.

Designers front-load the design, bringing their experience and aptitude for design to bear, and provide structure for the ensuing conversations. The tension between the design and the cross-checking can lead to new insights. It can lead to changes in the scheduling of the work and it can lead to changes in the design, but it's not a consensus-based free-for-all that leads to the mediocratization of design. The front-loaded design done by competent designers produces a more balanced equation, especially when they are also designing the work.

This isn't Big Design Up Front or phase-based SDLC, though. Designing the work simply means that there's necessary front-loading involved in producing software. Front-loading doesn't preclude inspecting and adapting, responding to change, or emergent design.

The balance between good work that can be continually built upon, work that creates lower standards of productivity, and work that can be managed to expectations hinges on more than good software design. Software development productivity lives at the intersection between well-designed software and well-designed software work. Agile estimation is a good start down the road to manageable software development. Designing the work closes the gap between the beginnings of manageable work, and work that can be managed.


Monday, May 25, 2009

Flow, Leveling, and User Stories

Flow and leveling are two sides of the same multi-sided shape. The goal is an understood and controllable cycle time. Without leveling, sustained flow is unlikely. Without flow, improvement is less observable and manageable, and improvement efforts might be rooted more in medieval software superstitions than in something closer to a science.

David Anderson talks about the core of Kanban as an agreement that the team has a capacity, that it can only work on so much at a time, and that a limit is set around that work. Implicit in Lean work management styles is the right-sizing of work items. Reducing the broad variability in work item sizes is part of the work that goes into flow and into controlling cycle time.

Typically, in a method like Scrum, a team organizes its work around user stories. The user stories worked on during an iteration (or sprint) can be any of a number of sizes. Using the Fibonacci estimation technique, the variation in story size is effectively unbounded.
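The spread of a typical Fibonacci-style point scale gives a feel for that variation (the particular scale below is the common one, though teams vary):

```python
# The common Fibonacci-style story point scale. The ratio between the
# largest and smallest points shows how widely story sizes can vary
# within a single iteration.
fibonacci_points = [1, 2, 3, 5, 8, 13, 21]

spread = max(fibonacci_points) / min(fibonacci_points)
print(spread)  # a 21x spread between the smallest and largest stories
```

And nothing stops a team from extending the scale upward, so the variability only grows from there.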

User stories are a way to communicate expectations, but they're a less effective way to manage work and improvement because of the amount of variation allowed in user story sizes. You could decompose stories into smaller stories so that they're more manageable, but then you'd risk fragmenting the communication value of stories by breaking them into separate chunks that might not carry as much context as the original larger stories.

User stories are of variable size because they inherently are of variable size. Let user stories be the size that makes sense for the kind of artifact they are: an artifact intended to communicate context and expectations to the development effort. Use right-sized work items to manage work. Decompose user stories into work items, and schedule those work items for development in a way that takes both customer priority and technical constraints into consideration.

The work items can certainly be linked back to the stories they were decomposed from, but it's the work items rather than the stories that go through the development pipeline.
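One minimal way to model that linkage, as an invented sketch rather than a prescription:

```python
from dataclasses import dataclass

# An invented, minimal model of work items linked back to the story
# they were decomposed from. The story carries context; the work items
# are what flow through the development pipeline.

@dataclass
class Story:
    title: str
    narrative: str

@dataclass
class WorkItem:
    description: str
    story: Story  # link back to the originating story for context


story = Story("Transfer funds", "As an account holder I want to move money between accounts")
pipeline = [
    WorkItem("Validate transfer request", story),
    WorkItem("Debit source account", story),
    WorkItem("Credit target account", story),
]
```

The work items are what get scheduled and pulled; the story stays whole, one link away, for anyone who needs the larger context.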

I've sat through enough all-hands Agile planning and estimation sessions to know that while this practice is useful for bringing a new agile team together, it's often a terribly ineffective way to establish design. It's a good way to socialize design, but there are alternatives that don't imply the wastefulness of traditional Agile planning and estimation. Using design as a team-building exercise can put the design at risk.

In the worst cases, the all-hands planning and estimation that is common to Agile development methods can hurt the team. It's just as likely that adversarial conditions can break out in the team if an expectation is set that suggests that product design and implementation design are egalitarian and democratic within a team that is necessarily staffed by people of varying experience and ability.

Design should be done by those who are most capable of doing design. Story decomposition should be done by the person or people who are best equipped to do that work. This doesn't preclude the socialization of that design to the team and the proving of that design by the team.

Rather than all-hands estimation and planning, design and planning can be done in short, effective sessions with the team's designers. Since designers are decomposing to work items that are intended to be of a similar size, the team no longer estimates work, but reviews the decomposition and seeks clarification and raises concerns. The collective intelligence of the team is still leveraged, but the potential waste and ineffectiveness of traditional Agile planning is avoided.

The planning and estimation activities change from larger-scale chunks of time, effort, and team participation to smaller units of tiered work and cooperation that can be done just-in-time rather than on those specific days of the week when the whole team's attention can be coordinated and directed to a fixed-schedule meeting. Consequently, any all-hands team meetings can be scheduled on an as-needed basis as well, eliminating any waste that comes from institutionalizing all-hands meetings according to a fixed timebox.

With planning, design, and scheduling done in smaller units on a just-in-time basis, some of the fixed time boxes in the development process aren't needed. Features can be queued for packaging and deployment on an opportunistic basis as well. There may still be some fixed rhythms at play in the effort - customer demos and major market events, for example - but these synchronization events don't have to leak their scheduling mechanics into aspects of the work that aren't dependent upon or necessarily coupled to those rhythms.

Work items aren't just same-sized, they're also small-sized. The larger the work item, the more variability there is between the estimate of the work and the actual work; we understand larger work items with less certainty than smaller ones. Decomposing work into smaller items forces us to think about the actual units of design and work that go into delivering the bigger features. Because we're trying to achieve flow in pull-based systems, the variability inherent in larger work items is a likely obstruction.

What counts as "small" depends on the kind of work, the team, and a handful of other factors, and the actual size of "small" will likely change over time. In some cases, it might be impossible to get all work items to a similar size. Consider whether the work can be scheduled so that groups of similarly-sized work items can be done together.

Without leveling, flow will continue to be difficult, and Kanban will remain a Japanese word that has come to mean "story wall".
