Wednesday, December 31, 2008

Productivity: It Comes from Software Design Rather than Software Tools

There was a time when big challenges in software development were mostly solvable by tools. It was a simpler time, before every desk and every home had a computer, before most mid-sized companies were creating their own custom software, and before many small companies would even consider building it.

It was a time that has come and gone, as have most of the tool vendors of the time. The tool vendors that are left standing have one lasting imperative: continue to make the issue of software development productivity an issue of tools.

Software productivity and quality have been suffering for it with a constancy that should have shocked business leaders into action years ago. Something has gone horribly wrong.

A reasonable and rational business response to poor productivity and throughput in software creation and operation should have already come. It should have been swift and decisive in proportion to the hemorrhage of opportunity and value in custom software efforts.

And yet, business leaders sit, passively resigned to what appears to be an intractable cycle - a notion reinforced throughout IT - that under-performance is the inevitable nature of software production and there's nothing that can be done about it. In the ten or fifteen years since the onset of the ubiquitous custom software boom, this sad story has persisted as the blindly-accepted lore that influences so much about what we believe about the ways and means of custom software production.

The reasonable and rational adverse reaction to software woes isn't happening as it should. We are resigned as a society to bad software and software projects that are even worse. We accept the pain that software developers rain down into our lives, whether in the form of custom software, commercial applications and operating systems, or indecipherable web sites.

And yet, the software crisis isn't really a hard problem to solve. The rehabilitation of software development and the reclamation of IT's reputation and credibility starts with the recognition that dramatic shifts took place while decision makers' eyes were necessarily taken off the ball. We're presently trying to solve problems with solutions that worked at one point, but no longer work. We're directing a generation of software developers to think about software development in ways that are out of date not merely by years but by generations.

Here's a short list of heretical ideas that have changed the game for every software developer and software organization that has successfully put them to work:

  • Design quality is the most important factor influencing productivity in software development
  • The things that obstruct quality degrade productivity
  • The reductions in productivity over time that are typical of tool-driven software development are greater than what can be solved by tools
  • The application of tools to these problems exacerbates the quality problem, creating a vicious cycle that accelerates exponentially
  • Quality software design is the product of principles and practices, not tools
  • The typical degradation of a software system's quality over time isn't due to the nature of software, it's due to the nature of the approaches we choose to develop and operate software

We still need tools to support our software efforts, but when we return balance to software development, we find that we need fewer elaborate and expensive commercial tools. Looking back, we see most of the commercial tooling that we use as distractions, wasted capital, and excess.

The essential tools that we do need are far less expensive to acquire and to operate than the elaborate commercial tooling offered as solutions to the software problem. In many cases, essential tools are free, open source tools built to fill the vacuum left by entrenched tool vendors who cannot keep pace with the changing conditions and the evolution of software development in the wild. These tools are typically more mature than their commercial counterparts, supported by engaged and engaging software professionals, and crafted by people who do in fact use them on live-fire custom software projects.

It should go without saying that we should be wary of commercial tool makers who use their tools only to make other commercial tools. These tool makers typically have little to no recent relevant experience in the projects that they believe their successive generations of tools will be appropriate to.

Productivity degrades because software becomes harder to change as its complexity increases. Complexity increases naturally as more code is added to a system over time, be it code for brand new features, or changes to existing code due to improvements, changes in the business, or defect fixes. There comes a point (and it comes quickly) when design flaws become so entrenched in software that they can't be resolved affordably.

The whole trick to tactical software design is protecting a very volatile investment from erosion. And this means an unwavering vigilance toward design fundamentals and principles.

Deliberate and tactical software design keeps productivity bottlenecks from taking root in software. With bottlenecks, obstructions, and design friction in play, it becomes harder to move forward. Work items progressively take longer to complete, and one day the productivity is so poor that a total re-write is ordered. Typically, the new system is re-written using the exact same approaches that generated the conditions that led to the need to re-write the system, and the vicious cycle begins anew.

The traditional productivity curve (which is the inverse representation of the traditional software cost curve) is a result of not protecting software from erosion. The exponential degradation of productivity shows the compounded effect of institutionalized negligence of the software development work that is in fact specifically geared to fend off erosion: namely, continuous, incremental design quality stabilization and improvement.

To believe that the traditional productivity curve is a natural part of software development is to indulge the same naive presumption that American manufacturers believed to be a natural law of production before Toyota showed that the presumption is tied to a specific production methodology - a methodology that still forms the basis for most software production methodology used today. If you fundamentally change the methodology, you'll change the rules and the equations that govern the productivity curve.

Some software designs are harder to work with than others. Some designs are even more prone to defects. If you arm your software organization with even a basic understanding of the fundamental design principles and the basic practices engendered by them, you'll begin to see benefits immediately. Over time you could reshape the productivity curve entirely, creating more value with your investments in custom software, and deriving value for much longer.

But there are no tools that can do this for you. Not even the so-called software design tools are capable of helping you to apply fundamental software design principles. There are analysis tools that might help you understand where trouble spots might be, but they can't create supple design for you. To defer the responsibility of productivity to tools means that the real issues underpinning productivity will not be addressed, and not only will your desired productivity not be achieved, but you'll experience the complete opposite of what you had hoped for.

There will likely always be commercial tools that are used by software developers, but the real, essential need for these tools is far less than the excesses that we see today in vendor-dominated software development communities and cultures. We're on the cusp of a new era of productivity in software development, but it has very little to do with material investments in tools and everything to do with investment in mature, proven design principles and practices, and the readily-available, low-fi, essential tools that support them. And to this mix we add only the essential commercial tools that support our efforts.

The principles, practices, and yes even the necessary tooling to rehabilitate software development and to put an end to the crisis are already here amongst us. Many software developers have already reached out and harnessed them, and more are waking up to the essence of effective software development every day. Armed with the understanding that productivity comes from design, and that design is an intellectual activity that has only a slight dependency on tooling, a growing number of software developers are making huge strides in proving that our assumptions about traditional software development economics add up to little more than superstition.

One day this period in software development history will seem like the dark ages - a time when our primitive approaches to software development delivered commensurately primitive results. We'll look back and scoff at the now obvious mistake of serving the business needs of tool vendors rather than serving the business needs of the businesses we work for. We're under-performing to a shocking extent and hemorrhaging value at an alarming pace. This bleakness ends when we recognize the real source of productivity and reach out and grab it. The productivity we get from tools is merely a distant fallback position from the productivity that we achieve through software design.

To continue to languish at the mere levels of productivity that tools offer is a deeply-disturbing yet deeply-entrenched behavior in software development. As more software developers and organizations wake up to their true potential, these software dark ages can finally be relegated to history and we can move forward into a renaissance. Hopefully we can do it during this generation rather than continuing to be distracted by the endless, well-funded parade of software tool peddlers whose disproportionate success depends entirely on our willingness to remain distracted from our rightful potential.

Sunday, December 28, 2008

Nothing Fails Like Success - Why Continuous Improvement is Continuous

Without the surrounding and supporting, end-to-end learning organization, an encapsulated team - even if it has begun to turn itself into a learning culture - will sooner or later begin to under-perform. It may even degrade to a point of becoming largely ineffective, leading to re-organization or disbanding.

Continuous improvement is the goal of a learning culture. Like the mechanics of a learning culture, the mechanics of continuous improvement aren't an ad hoc series of suggestions from on high about potentially better ways of doing things, or merely random trial and error acted out by workers in place of dealing with higher priorities.

Continuous improvement is a managed process. Improvements are done with the aim of creating systemic optimizations. The terminology from the Toyota Production System is, "optimize across organizations." Making local improvements without considering their impact on the whole system is hacking rather than improvement.

Each improvement is a change. It creates a new set of conditions. It changes the system. At a fundamental level, improvement creates a new system, albeit with a great number of similarities to the previous system.

The conditions created by a previous improvement aren't perfect. They're likely better than the previous conditions, but they are also inevitably the pre-conditions for the next improvement.

The conditions created by an improvement, and the potential they create for the next improvement, typically aren't predictable. We have to make an improvement and then live in the conditions that result, observing them, remaining vigilant for the next improvement, and being watchful for undesirable local optimizations.

Each success creates the next set of conditions that, if not dealt with, can become the next failure. This cycle is why improvement is necessarily continuous.

Saturday, December 27, 2008

Learning Culture

Adopting Lean Product Development or Lean Production is a commitment to turn your organization into a learning organization.

In the Harvard Business Review article, Decoding the DNA of the Toyota Production System (September-October 1999), Steven Spear and H. Kent Bowen attribute western car makers' failures to replicate Toyota's successes to not understanding the unspoken essence of Toyota's nature and the TPS: Toyota is a learning organization.

The authors describe Toyota's organization and culture as a "community of scientists," from the assembly worker to the executive suite. Western manufacturers vainly attributed their inability to apply what they had learned from visits to Toyota City in Japan to cultural differences between Japanese people and western people. The root problem was indeed cultural difference, but likely not cultural differences rooted in national identity or heritage.

Martin Fowler loosely asserts that lean is agile and vice versa. For the most part, and within the bounds of the essay, the assertion holds up. I tend to see Martin's analysis more specifically as something like, "The set of qualities that comprise Agile are also present in Lean." The inverse of the assertion, "The set of qualities that comprise Lean are also present in Agile," isn't as resonant, and possibly not very accurate - at least not from the perspective of a mainstream understanding of agile as it lives and breathes here and now.

Billy Hollis, a Microsoft community personality, recently complained that pair programming, the most visible approach to cross-training in agile software development, is an "inefficient way of mentoring." In this observation, Billy is absolutely right. Unfortunately, Billy is talking about agile from a non-practitioner's perspective of a vision of agile as it has become in the mainstream, years into the detrimental effects of the monetization of Scrum training on agile practice, culture, and lore, and the outright hostile disinformation efforts of tooling vendors that initially couldn't move fast enough to meet agile's uptake.

Pair programming isn't an effective way of mentoring, but pair programming isn't a whole mentoring practice, regardless of whether the nascent, disproportionate mid-part of the bell curve continues to assert it as such. But then, even outside of agile, and in the software world at large, meaningful mentoring is usually pretty shoddy at best, and mostly absent or just naive. Pair programming could be a component of a mentoring practice, but it's far too limited to be thought of as mentoring in its entirety, and I seriously doubt that experienced, veteran practitioners see it as such.

Scrum thought leaders have always stated that Scrum works when it's approached as an organizational transformation. And while the organizational transformation terminology is often pretentious posturing found in talk bubbles hovering above boutique consultancies, the spirit of the advice is sound. Unfortunately, it's usually this part of Agile adoption efforts that gets disposed of and disregarded first, dooming Agile efforts to the mediocrity of the mere western-style process improvement efforts that Steven Spear and H. Kent Bowen called out when reflecting on the Lean Manufacturing adoption failures of the 80's and 90's.

Despite what is apparently a much better recent track record of companies in the west in adopting Lean, Toyota continues to out-perform its western counterparts - as is evidenced quite dramatically in the effects of the global economic systems failures on car makers. A good part of its success is due to its community of scientists organization. Toyota is a learning organization built around and to support a learning culture.

While what has become colloquial Agile asks for a few organizational tweaks - embedded customers, team rooms, adaptation, emergence, etc - it rarely goes so far as to insist that remarkable and lasting success is predicated on forming and supporting a learning culture within a learning organization.

A learning organization marches to a different drummer. Its protocols and processes are built not only to accommodate learning, but to support it and enable it.

A manager's mandate in a learning organization is to see to the advancement of the people he is responsible for. This isn't a secondary responsibility in a learning organization; it's a responsibility amongst the organization's highest priorities and imperatives.

You might imagine that the productivity and accounting models for an organization that doesn't let any opportunities for learning slip by may be quite different than your own organization's models. When every failure, accident, and oversight is dealt with as an opportunity for learning - for the betterment and advancement of a worker and the organization - our traditional western obsession with efficiency management would likely work at odds with the imperatives of continuous improvement.

Learning in a learning organization like Toyota isn't predicated on ad-hoc pauses to accommodate expressions of remorse for breaking a build. It's not a trivial commitment to workers to give them at least two weeks of company-paid training or conference attendance each year. Learning at Toyota is rigorous and scientific. And if a manager or leader can't demonstrate the proper way to perform a task to expectations, then that person would likely not be in a leadership position.

A learning organization doesn't just pause for bi-weekly retrospectives. It methodically seeks improvements. It uses the scientific method to plan and execute experiments and measure outcomes. It learns from the rigor of experimentation, and it only experiments on work that has already been standardized and harnessed with standard measures.

If a worker in a learning organization has an idea about how to improve his work, his manager or leader has an obligation to teach the worker how to formulate an experiment that proves the worker's ideas, capture the lessons learned, and communicate the results to the rest of the organization. The manager or leader must not only know the worker's responsibilities, but also must be able to perform them so that he can guide the worker toward observable improvements that allow the worker to perform to new expectations.

I doubt that Billy Hollis had the Toyota DNA in mind when he offered his criticism of pair programming, but it's a good jumping-off point to start thinking about what it really means to have "mentoring" in software development. So far, we're failing quite dramatically in this area.

The Toyota DNA provides a basis for establishing a solid mentoring practice in software development, but if we adopt it as naively as western car manufacturers did before us, we should expect the same failures.

Billy Hollis' observations about mentoring are astute: Agile methods indeed don't necessarily provide guidance for mentoring. But beyond mentoring, the entire foundational layer of beliefs about producing software and managing production remains seriously flawed, and not just a little antiquated and culturally isolated.

There's nothing that I see in the software development industry that suggests that we're even close to realizing the potential and promise of the learning organizations in our midst. Scrum and agile aren't making the case strongly enough and these approaches aren't really coming to the table with the mature organizational disciplines that might make such change possible.

Agile can't be an answer to the mentoring problem. Agile's focus is much too limited - and necessarily and purposefully so. If we are going to start the necessary changes at an organizational level that allow us to really take advantage of a culture of scientists, then we should be looking to methodologies that address organizations as a whole, and maybe even ones that have decades of experience behind them.

Scrum is arguably a whole-organization methodology, but it's not colloquially known or practiced that way. Lean is an organizational methodology, and if it's not introduced and practiced as such, it's prone to lackluster and possibly even detrimental results. As Martin says, you can practice Agile Development and Lean together. However, if you're committed to the Toyota-like results that come from learning organizations, you might find your best guidance from the methodology that Toyota created to shape its own success.

You can practice Lean and Agile together, and if you're a software organization, you probably should. But understand that the extent of your results will inevitably be a reflection of how far Lean has spread throughout your organization, and understand that Agile has more to say about how software is done than about how accounting, marketing, logistics, and customer service are done.

Optimize across organizations. Do this in a whole learning organization. Situate your software effort in the midst of such an organization and culture. See the holistic system and organization that effectively addresses mentoring; uses the scientific method as a core practice; sees failure as a trigger for improvement; doesn't resort to recrimination in the face of failure but readily accepts accountability; and is driven by customer value first and foremost - no excuses or spin, and no allowance for the petty, self-interested entitlement and bureaucracy typical of organizations that aren't yet learning organizations.

Think about what your company could do if its people were truly motivated and organized to succeed above and beyond the expectations of the surrounding, unprepared market. Can your company be the Toyota of its market? What's holding you back from wholeheartedly applying what we know about systematizing and enculturating a pursuit and achievement of excellence?

Saturday, December 20, 2008

Lazy Loading Considered Harmful

Lazy loading is a common feature in object-relational data access frameworks. It can produce some really nasty side effects though, and can even be considered harmful. The pattern is considered so risky by some commercial vendors that it isn't included in some products.

Consider a class model that has a Customer class, an Order class, and an OrderLine class. In this model, a Customer is associated with many Orders, and an Order is associated with many OrderLines. The same model is reflected in the relational database, where the application data for this model is stored.

An object-relational data framework allows for order data to be retrieved in the form of an Order object rather than an order data row. This is a great benefit to object-oriented programmers since their environments (.NET, Java, etc) are natively object-oriented. They will inevitably be working with Order objects, and Customer objects, and OrderLine objects.

Objects retrieved from an object-relational data access framework with lazy loading can cause unintended queries to be executed against relational database servers, causing performance, scalability, and data integrity problems.

For example, if a programmer writes some program code that retrieves a Customer object, and then makes use of the Customer's reference to Orders, the data access framework will automatically query all of the Customer's Orders. If the customer has done a lot of business with the company in the past, it may have thousands of orders.
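
To make the mechanism concrete, here's a minimal sketch of the scenario in JPA-flavored Java. The mapping details and names are my own illustrative assumptions, not a prescription:

    import javax.persistence.*;
    import java.util.List;

    @Entity
    public class Customer {
        @Id
        private Long id;

        // The orders collection is mapped lazily: no order data is fetched
        // when the Customer itself is loaded.
        @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
        private List<Order> orders;

        public List<Order> getOrders() { return orders; }
    }

    class CustomerReport {
        // Somewhere far away from the mapping, in application code.
        static int orderCountFor(EntityManager entityManager, Long customerId) {
            Customer customer = entityManager.find(Customer.class, customerId);
            // This innocent-looking traversal silently issues a second query
            // that pulls back every order the customer has ever placed.
            return customer.getOrders().size();
        }
    }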

An entire application written on an object-relational data access framework that has lazy loading will have quite a lot of these auto-loading relationships set up between objects.

Serving hundreds of users with this application will put stress on database servers that can become unmanageable. In cases like this, it's beneficial to not use lazy loading at all. It's better to write program code that explicitly loads data when it's needed rather than have these risky operations lurking in program code, where they may be hard to find due to their implicit, transparent, and seemingly innocent nature.

Lazy loading can indeed be harmful, and you can see why commercial vendors might not even include such unsafe features in their object-relational data access frameworks. These kinds of problems can lead to increased database server maintenance and operations costs, excess client license expenditures, and even costly business continuity problems when the excess load put on database servers by lazy loading leads to outages.

Up to this point, this article has been a bit of a ruse, filled with many misconceptions about object-relational data access that are unfortunately perpetuated by folks who tend to see object-oriented design through invisible relational data-oriented lenses. The argument is often used to dissuade the use of lazy loading, and has indeed been used by vendors to justify the exclusion of lazy loading from object-relational data access frameworks.

The crux of the issue is the assumption that a Customer object would have an association to its list of Order objects. It's likely not a reasonable way to build a class model for these objects. The Customer class would not have an association to its list of Orders.

Database modeling and class modeling have different rules. They are fundamentally different kinds of technologies, and so the fact that the rules are different shouldn't be much of a surprise. However, if you don't stop to wonder if these differences exist, you might just go ahead and shape your objects and their associations the way you shape the database tables and the relationships that you're used to.

A fixed association between a Customer object and its Order objects is an unnatural association. Although you can conceive of the association in real life, it's not an appropriate association for a class model.

Class models, like many kinds of information models, have natural partitions. The Customer class and the Order class are not part of the same partition. Putting a hard link between them, across their partition boundaries, isn't something that you would simply do without putting some consideration into the design, regardless of whether this association is in a database's data model.

Ironically, there are no hard links between relational database tables. The technology doesn't allow for it. Any conception that we have of hard links between the Customer table and the Order table due to a Customer_ID foreign key field in the Order table is merely a trick of the mind. It's a concept. A Customer row has no knowledge of its Order rows. An Order row has no knowledge of its Customer row.

The Order row has a copy of the value of a Customer ID in one of its columns, but that isn't a fixed association or hard link to the actual Customer row object in the database server's memory model. That foreign key value can be used to query the Order table to find the Customer's orders, or can be used to query the Customer table to find an Order's customer, but these data structures don't have fixed associations the way that objects in an object model do.
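
To make that concrete, here's a minimal JDBC sketch of what "following" the association actually amounts to - a plain query by foreign key value. The table and column names are assumptions made for illustration:

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    class OrderQueries {
        // The "association" is nothing more than a value to query with:
        // find a customer's orders by foreign key value. No hard link
        // between rows is being traversed here.
        static List<Long> orderIdsFor(Connection connection, long customerId)
                throws SQLException {
            List<Long> orderIds = new ArrayList<Long>();
            PreparedStatement statement = connection.prepareStatement(
                "SELECT order_id FROM orders WHERE customer_id = ?");
            statement.setLong(1, customerId);
            ResultSet results = statement.executeQuery();
            while (results.next()) {
                orderIds.add(results.getLong("order_id"));
            }
            return orderIds;
        }
    }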

All tables in relational database data models are partitioned. This is just how relational databases work. When we conceive of fixed associations between database tables, we are merely overlaying our conceptual model on a technological model. We can even implement constraint logic in a relational database server to mimic our conceptual model, but none of this useful wizardry changes the fact that the entities in the underlying technological model are naturally partitioned.

Partitioning is a common technique used to reduce the complexity caused by the associations between resources. Every fixed association between two entities will cause the system itself to become increasingly fixed, or rigid, which makes it increasingly difficult to adapt to new requirements and repairs.

Partitions are found at all levels of a system's architecture - from distributed services off in the cloud, all the way down to the relational databases, and even the disk storage systems beneath them.

Pat Helland writes about partitions in his paper on infinitely scalable systems, Life Beyond Distributed Transactions. Roger Sessions talks about partitions in his Simple Iterative Partitions process for decreasing complexity in enterprise architecture. You can find this pattern everywhere in software. Once you realize that it exists, you'll see it everywhere.

Eric Evans writes about a particular partitioning pattern called Aggregate in Domain-Driven Design: Tackling Complexity in the Heart of Software. This pattern is useful in guiding the decisions you can make in designing a class model.

The Customer class and the Order class are in separate aggregates. There may be cases where this isn't true, but for the most part it's likely to hold true. Because they are in separate aggregates, I would think twice about establishing a fixed association between them. Without a fixed association between them, I no longer have the lazy loading risk.

I can still get all Order objects for a Customer by specifically querying the database through a data access class written to support Order data access needs for the application. This query is issued from within the higher level business scenario logic that might need a Customer as well as its Orders. In practice, this pattern covers the situations where you might have presumed a need to query the database for a Customer's orders via a Customer object rather than a data access object for Customer.
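
A sketch of what that might look like, again in JPA-flavored Java. The class name, the query, and the customerId field on Order are illustrative assumptions:

    import javax.persistence.EntityManager;
    import java.util.List;

    // A data access class dedicated to the application's Order retrieval
    // needs. The business scenario that needs a Customer's orders asks for
    // them explicitly, rather than stumbling into a query by traversing a
    // Customer object.
    public class OrderData {
        private final EntityManager entityManager;

        public OrderData(EntityManager entityManager) {
            this.entityManager = entityManager;
        }

        public List<Order> ordersForCustomer(long customerId) {
            return entityManager
                .createQuery(
                    "SELECT o FROM Order o WHERE o.customerId = :customerId",
                    Order.class)
                .setParameter("customerId", customerId)
                .getResultList();
        }
    }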

From within an aggregate, you might make use of lazy loading or eager loading depending on performance analysis and other empirical knowledge. For example, the Order class does have a fixed association between itself and its OrderLines. When an Order object is retrieved, if it is always necessary to retrieve the Order's lines, then the association would be eager loaded on the spot. If not, it would be lazy loaded - deferring the decision to load the related OrderLines till later in the execution of a business transaction if the Order's lines are referenced.
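
In JPA-flavored sketch form, the within-aggregate mapping might look like the following, with the fetch choice made deliberately rather than by default. The names are illustrative:

    import javax.persistence.*;
    import java.util.List;

    @Entity
    @Table(name = "orders")
    public class Order {
        @Id
        private Long id;

        // Within the Order aggregate, the fixed association is natural.
        // Switch to FetchType.EAGER if the lines are always needed when an
        // Order is loaded; leave it LAZY to defer loading until the lines
        // are actually referenced during a business transaction.
        @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
        private List<OrderLine> lines;

        public List<OrderLine> getLines() { return lines; }
    }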

There are cases where you might not put a fixed association between an Order and its OrderLines at all. It depends on the context, the amount of data being loaded, and possibly other factors as well. There are no canonical models - only business contexts that are best served with models crafted to suit the circumstances, regardless of the unbounded predilections for reuse that we sometimes succumb to.

Lazy loading is harmful if you use it in support of naive class modeling. The absence of lazy loading in object-relational applications is just as harmful, leading to infrastructure code bloat in non-infrastructure classes, poor encapsulation, higher complexity, higher coupling, and code that is harder to understand, test, and maintain.

Any trepidation that people usually have with lazy loading is often rooted in misconceptions about object-oriented programming and design. To use lazy loading safely, start by modeling the object-oriented parts of your application according to object-oriented principles, and recognize that there are different principles for object modeling and data modeling, and that this is a good thing rather than something to hide from.

Any tool is considered harmful when used improperly or naively. Or, like my favorite software development quote says:

Every tool is a weapon - if you hold it right
- Ani DiFranco

Friday, December 19, 2008

Acceptance Tests

From a programmer's perspective, having customers write their own acceptance tests is a tremendous advantage. It brings customers closer to the project and allows customers to take ownership of their expectations of the team, and it provides a means to prove that the team is in fact delivering.

It often also gives customers a sense for the effort involved in verifying that software meets expectations, the pace of this work, and the level of detailed thinking required to pull it off.

We've gone to great lengths to build tools and instill supporting practices that allow customers to express their expectations in structured formats that can be executed directly by customers. It's not exactly as easy as it sounds, and programmers have to do some development work to enable the customer's tests to actually be executable, and to maintain the technology.

We're usually able to convince customers to participate in this style of acceptance testing practice. The proposition of having an immediate and automated way to prove that the work done by developers was done to spec is compelling. It's usually an easy sell. Unless a customer has been through it before, he inevitably underestimates the effort, and programmers inevitably overstate the ease.

In the end, the effort of writing acceptance tests with these tools is closer to programming than traditional means to communicate expectations. Customers often abandon the customer testing initiative once they get a clear picture of what it really means to be a tester - even with the tools that we've cooked up for them. The development team then becomes responsible for maintaining these tests using these tools that are intended for end users rather than using tools that are appropriate to programmers.

As software developers, we over-step our bounds when we invite customers to become testers. Customers want to tell us what they want; they want to know that we understood their expectations; they want to know how long it might take to build the software; and they want to use it when it's done. And that should be more or less the full extent of the expectations that a development team has of a customer.

If a customer is really valuable to the team, then the customer's input is likely informed by his on-going experience in the business - experience that we shouldn't expect to be put on hold while he writes tests using tools that programmers think are appropriate and sufficient but that experience suggests are neither really appropriate nor sufficient.

We have great, simple tools that developers and testers use for testing software. We've had them for some time. The outputs of these tools haven't been of much use to customers to help them understand how much of an application is complete, what the application does, and whether the software actually works. This is more a failure on the part of development teams for not making the artifacts produced by these tools more human-consumable and more informative.

By bringing a usability focus to the creation of tests, development teams can write tests that suit a customer's need to know as well as the team's need to write tests.

Contemporary testing frameworks augmented with updated test authoring (and authorship) principles and practices can close the gap that we had presumed to solve by burdening customers with testing. We can use these frameworks and practices to provide visibility without requiring those who may want to have visibility into detailed project progress to become testers.

We can export meaningful specifications of the software from well-authored tests, and customers can read and even sign off on these if necessary. These tests need to be crafted from the start for this purpose though, and this is a practice that developers at large haven't picked up yet. The problem of writing good tests becomes a problem of competent engineering as well as a problem of authorship. We're caused to exercise our abilities as authors and communicators as well as programmers, becoming better at both as we go.
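
As a hedged illustration of the kind of authorship I mean, here's a small JUnit sketch in which the test names read as plain statements of behavior. The domain and every name in it are invented for the example:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Reads as a specification: a repeat customer receives a ten percent
    // discount; a first-time customer pays full price.
    public class DiscountPolicySpecification {

        @Test
        public void aRepeatCustomerReceivesATenPercentDiscount() {
            DiscountPolicy policy = new DiscountPolicy();
            assertEquals(90.00, policy.priceFor(100.00, true), 0.001);
        }

        @Test
        public void aFirstTimeCustomerPaysFullPrice() {
            DiscountPolicy policy = new DiscountPolicy();
            assertEquals(100.00, policy.priceFor(100.00, false), 0.001);
        }

        // The class under test, inlined only to keep the sketch self-contained.
        static class DiscountPolicy {
            double priceFor(double listPrice, boolean repeatCustomer) {
                return repeatCustomer ? listPrice * 0.9 : listPrice;
            }
        }
    }

Test names like these can be transformed mechanically into sentence-style reports, which is the kind of exported specification I'm describing.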

However, even with these tools we developers are still suffering the delusions of our own presumptions.

Some customers may want to dive into the details up to their eyes, but many customers just want to get the software in hand and get their jobs done. We may be inclined to think of this as negligence and naivety on the part of customers, and maybe it is in some cases, but we need to see our own biases in presuming that customers should be neck-deep in the details.

Software teams need domain experts and people with product design expectations to give them direction, and re-direction. Why do we believe that having customers embedded in the project team is the only way to achieve this? There's no doubt that having a customer in the midst is effective, but is it the only way?

What if our product managers had deep experience in the problem domain, and if they were domain experts themselves? What if they were also competent engineers and product designers? What if they could speak for the customer? And what if they could still code?

If we had one of these people leading our product development efforts, would we need to have customers writing acceptance tests?

The Toyota Product Development System defines the Chief Engineer role. This person (along with a staff, on large projects) is a domain expert, product designer, customer voice, and an engineer of towering technical competence.

A Chief Engineer in a software product development organization can write acceptance tests without the burden of elaborate end-user testing tools. He can use the common tools of the trade. He understands the imperatives of using tests as documentation and uses usability-focused test authorship, and sets standards for authorship that his organization cultivates and follows.

Tools in the Fit lineage have their place, and can be valuable, but in many cases they are a sign of a problem that might better be dealt with as an organizational problem rather than a tooling problem.

If your product development organization isn't led by someone along the lines of Toyota's Chief Engineer, then you are going to have to put some compensators in place for not having a better system for product development. One of those compensators might indeed be an attempt to have customers write tests with Fit (or a similar tool), but these efforts to support the practices engendered by these tools often end up being really quite expensive, and their uses should be constrained and supplemented with more contemporary approaches to testing/specification/documentation problems.

We have yet another problem to solve along the way: we believe that some tests should be written in the arcane, overly-technoized authorship style typical of developers who get lost in the details and forget that code is for humans as well as computers. Computers need little help in reading even the most arcane program code, and so our code authorship decisions should be made almost always in consideration of human readers.

There's a debilitating bias that software developers have that constantly works against our ability to produce well-authored, usability-focused test code: we believe that there's such a thing as an acceptance test.

All tests are acceptance tests. If you have test code that doesn't participate in proving the acceptability of your software, then it's likely either not test code, or not needed.

The differentiator of test code as either acceptance test code or some other kind of test code seems to be the readability of the code by real people. If non-programmers can understand a test, or its output, then the test is likely an acceptance test. We permit all other test code to be written so that only technophiles can benefit from the knowledge that it contains, and often obscures.

There is a bitter irony in this whole techno-cultural bias surrounding the need for usability in code. When tests are written for non-programmer readers, programmers also benefit from the clarity and usability of the code when working on unfamiliar or even forgotten parts of a system.

We need to surface knowledge from the code that we write. We need this knowledge to support communication between everyone with a stake in the project - from programmers to customers. We need this knowledge to help us understand whether the product being built meets expectations and to understand the progress of the project.

Achieving this doesn't necessarily drive a need for end-user editable tests and an embedded customer. To continue to believe that we must have end-user editable tests and an embedded customer to succeed will add lethargy to our ability to consider meaningful alternatives that may benefit our efforts and organizations over and above proving that our software products meet expectations.

We need domain experts, leaders, engineers, product designers, customer voice, and business sense. And we also need means of proving our products and communicating proof and expectations. We have organizations that allow us a range of degrees of achievement depending on what kinds of people, processes, and tools we can put in play.

Some organizations allow for greater achievement than others and we can strive to learn how they succeed and to even be influenced by them. We can get stuck in a rut if we don't realize that the people, processes, and tools that we lean on today are often expressions and reflections of our current organizations and biases rather than a recipe for achieving our best potential.

Wednesday, December 17, 2008

Chief Engineer

Without a role like Toyota's Chief Engineer, direction and leadership in software development is often as coordinated as a three-legged race - if you're lucky. Much software product development leadership is a four- or five-legged entrant into a market race that is starting to see leaner, better-coordinated competition.

Consider the responsibilities of a Chief Engineer at Toyota*:
  • Voice of the customer
  • Customer-defined value
  • Product concept
  • Program objectives
  • Architecture
  • Performance
  • Product character
  • Product objectives
  • Vision for all functional program teams
  • Value targets
  • Product planning
  • Performance targets
  • Project Timing
He is the person responsible for the design, development, and sale of the product. He is the organizational pinnacle and the hub through which authority and ability flow. The CE isn't just an architect, or a technical lead, or a customer proxy, or a project manager, or a process master. He's all of these things and more. He doesn't just pass along customer requirements for the product, he defines them. He doesn't just implement the business's design for the product, he creates it. He's large and in-charge, and he's uniquely and deeply qualified to be so.

Because all of these abilities and authorities are invested in one extremely capable, senior, trusted product development person, the coordination of the various perspectives, values, and vision of a product and its execution don't suffer design-by-committee issues. And because the CE has these many responsibilities and abilities, he's a rare person.

Consider the typical qualities of a Chief Engineer at Toyota*:
  • A visceral feel for what the customer wants
  • Exceptional engineering skills
  • Intuitive yet grounded in facts
  • Innovative yet skeptical of unproven technology
  • Visionary yet practical
  • A hard-driving teacher, motivator, and disciplinarian, yet a patient listener
  • A no-compromise attitude to achieving breakthrough targets
  • An exceptional communicator
  • Always ready to get his or her hands dirty
Part of the reason that we're missing the Chief Engineer in the software business is that we're not growing them. At Toyota it can take 14 to 20 years to cultivate a Chief Engineer. Toyota is structured as a learning organization where fostering a learning culture is a primary business activity. It's a part of the organization's DNA. It's not a fluffy, fleeting, sideline of the business, it is the business - and the organization, process, culture, and methodology dance to its drum.

We're still largely in the software dark ages, celebrating the brute force machismo of a few mutant hero programmers rather than taking continuous improvement as seriously as some of the world's most successful and ethical businesses.

In software projects we have non-coding architects who live behind closed doors and who are no longer in touch with the materials and tools used in building contemporary and innovative products. We have project managers who believe that they can schedule work without having an inherent sense of the technical constraints that engender work plan dependencies. We have technical leads who feel entitled to not deal with customers or to walk in their shoes. We have on-site customers who have been given the right to dictate production sequence without any sense of flow, leveling, and production physics, and the opportunities and advantages to be wrought from them. We have scrum masters whose presence on our teams might be the clearest indicator that things have gone horribly, horribly wrong.

We have fragmentation looking for consolidation. We have tremendous opportunities waiting to be capitalized on. We're missing people of great depth and great breadth, with great ability and extensive experience, who can wring the medieval organizational behaviors from software projects by guiding them toward engineering excellence and insightful, visionary, and methodical product design and execution.

We have a gaping human resource hole in our software product development organizations through which incredible value continues to evaporate. We need to stand right where we are and accept the antiquated organizational mess that we have created and that we perpetuate by not looking outside of our industry to the exemplars of the meritorious success that continues to elude us.

A single Chief Engineer runs fast, lean, and decisive. He will inevitably out-pace the motley comedy of the architect/project manager/lead tech/on-site customer conglomeration that your lethargic competitor will pit against him.

We need to start today to build the learning organizations and learning cultures that will produce the Chief Engineers that our industry needs. It'll take years to get there, and if we don't start now, it'll take even longer.



* The Toyota Product Development System, Morgan and Liker

Saturday, December 13, 2008

Does Test-Driven Development Speed Up Development?

The answer to the question, "Does test-driven development speed up development?" depends on what you personally believe "development" is.

If you're a programmer, focusing on the task at hand, then the answer is likely a resounding, "No, test-driven development doesn't speed up development." Any given programming task will be weighed down with the extra test code you'll be writing, and the extra thinking that TDD forces you to do.

If I closed one eye and looked at the world through a straw, I'd only see a small part of my overall work and responsibilities. I would likely have no purview of how my work fits into the larger workflow and the work done before the coding effort and after the coding effort. With such a constrained view, I wouldn't see how my work impacts others who are at work in areas of operation that feed work into the programming phase, and the areas of operation that are fed by the programming phase.

If you asked whether it would be quicker to finish a given task if I was able to write the code correctly the first time without defects or design flaws, and precisely fulfilling customer and stakeholder expectations, then it would certainly be faster for me to do that work without writing tests at all, let alone writing tests first.

If you believe "development" means all the work done in turning concepts into cash - from ideation through design, implementation, inspection, packaging, and delivery - then TDD absolutely speeds up development.

Replace the term "development" with "producing software" or "software production", and understand that we're not just talking about the work involved in writing program code for an isolated bit of functionality. Production is everything involved in turning an idea into a shipping product, or even a new feature for an old product or a change to a feature. It's the entire pipeline of work.

The software development business has an out-dated idea of what productivity is. It's an idea that lags almost twenty years behind many other product development industries.

We try to achieve productivity by making our jobs as efficient as possible. We believe that if our job is to write code, then writing less code will make us more productive. We adopt generation after generation of high-speed programming automation tools with taglines like, "Write seventy percent less code with the new version of Visual [insert name of next great visual whiz bang here]."

Good tools are a must and automation is necessary, but they rarely contribute to higher productivity when used to increase local efficiency. Focusing on local efficiency usually drives productivity down.

Test-driven development doesn't require elaborate tooling and high-speed automation. It generates its seemingly unlikely productivity by supporting the entire production pipeline's flow. It's not glamorous, but it participates in optimizing the whole while helping to avoid the trap of local efficiencies.

Whether you are personally going slower as a programmer because you're doing test-driven development isn't a relevant concern unless this slowness is also driving down the speed of the entire production system.

Test-driven development decreases complexity, improves the incremental adaptability that software product development depends on, astronomically reduces the amount of rework that destabilizes schedules, and reduces the unrecognized design flaws that decrease productivity after the initial implementation phase.

Test-driven development supports flow. The software development industry at large is years away from recognizing that flow rather than efficiency is what creates giant leaps in productivity. Nonetheless, it works, and it's supported by the production physics used by industries that are well ahead of software development in product development and production maturity and optimization.

Test-driven development may require you to have nerves of steel while you're adopting it and dealing with the antithetical notion of going slower to speed up, but it will speed things up. It just might not speed you up. At least not until you broaden your perspective and interest so that they include the entire production system.

Test-driven development touches so many aspects of the entire production pipeline that when recognized and practiced as a systemic optimization rather than a mantic, esoteric bit of programmer wizardry, it increases the productivity of the whole effort. And because of this, it's one of the best things that the programmers bring to the whole production effort.

Friday, December 12, 2008

Testing and Reverse Engineering

Waiting to start testing software until after programmers are done writing the code is really expensive. It's less expensive to have programmers do test-first programming, and to bring traditional system testing on-line before the programmers get started. Without putting a lot of measurement behind it, and relying only on experience and observation, end-of-line testing is easily twice as expensive, and likely several multiples more expensive.

Test-last development involves reverse engineering, and reverse engineering is really expensive.

Typical software production schedules simply don't have room for the kind of thoroughness that would allow reverse engineering to be a reasonable approach. The results of reverse engineering in test-last development in software are always incomplete. The incompleteness leads to defects that are found later rather than sooner, which causes rework on the parts of the software that are defective and all the parts of the software written in the interim that touch the defective parts. The rework interrupts the flow of value-add work on the product, makes the schedule unpredictable, de-stabilizes the team's focus, and delays delivery.

When a tester sits down to test a new feature, he first has to figure out how the feature works, what it does, and what kinds of observable results using the feature causes that can be turned into tests and test plans. That process is reverse engineering. Even if the programmer communicates the various execution paths and usage scenarios for the feature, some percentage of execution paths will not be communicated to the tester, or will not be perceived by the tester, which further extends the amount of exploratory testing he'll need to do - yet more expensive, time-consuming, inaccurate and incomplete reverse engineering.

When tests are written before coding is done, both as acceptance test-driven development and developer test-driven development, the software is delivered to final inspection with a thorough set of executable explanations of what the software does. Testers then know what the software does and how to set up the conditions to put it through its paces and what observations should be made about the effects of putting the software through those paces.

Even if developers are writing unit tests, if they are writing them after they write the functional code, they are incurring the same kind of reverse engineering expense that testers are stuck with. It's a smaller form of the reverse engineering expense, but the instances of it are much more pervasive. While each individual cost may be small or even arguably negligible, the sum of all these pervasive instances of small costs is quite large, and definitely not negligible.

We think of writing tests after writing code as a natural order. The opposite is true. Writing tests after writing functional code is reverse engineering, and writing tests first as specifications and as proofs is the natural order.

Wednesday, December 03, 2008

Iterative AND Incremental

A few years north of a decade ago, the terms "iterative and incremental" were how folks increasingly spoke about software development.

Somewhere during the past handful of years, coincident with the mainstreaming of agile, we lost track of the “incremental” part of “iterative and incremental”, and began to talk almost exclusively about software development as “iterative”.

“Iterative” has become practically an omnibus term, and somewhat meaningless in many of the contexts we use it in. We use “iterative” in contexts where “incremental” is likely more appropriate, as in, “That feature will be available in the next iteration of the product,” rather than, “That feature will be available in the next increment of the product.”

Further complicating things, we commingle the notion of fixed timeboxes with "iterative". We try to start and finish work items coincident with these "iterations". We do planning in iteration-sized chunks. And we schedule meetings and deliveries so that they too are coincident with "iterations".

I started to shift to Lean software development earlier this year, and started questioning (and later abandoning) Agile’s typical fixed timeboxes, and replacing them with continuous flow.

As I move away from timeboxes and the fixed timebox that we refer to in agile as iteration, I find myself thinking and scheduling in increments. Not only does this gel more with the reality of what’s really going on in the team and the project, but it also gels with the whole message of “iterative and incremental”.

Iterative development is how we build increments of our product, but it's not really the de facto scheduling or product design unit that we’ve come to use it as. Iterative is a quality that describes and governs each move we make in almost every aspect of turning ideas into products. It’s a workstyle rather than a yardstick.

In losing track of the “incremental” – even if in nomenclature alone - we’ve gradually become uncentered. And in filling our senses almost exclusively with the “iterative”, we’ve taken the iterative to the extreme and tried to apply it to facets of software development that are more appropriately served by the other half of the equation. And not surprisingly, we’ve suffered the repercussions of imbalance and excess.

With a more balanced perspective of “iterative and incremental” we have the opportunity to step back and more easily see that there are alternatives to fixed timeboxes that are perhaps more natural and maybe even more effective. And we have the opportunity to see that scheduling is just scheduling, resource planning is just resource planning, deliveries don’t always have to happen every Friday, customer demos don’t make sense until features are meaningful to customers, and iterations aren’t fixed timeboxes unless we say they are.

And none of this in any way refutes what we’ve learned about the benefits of smaller batches.

And maybe without "iterations" we'll get a bit better understanding of how trying to have a single heart rate without leveling flow is like trying to have a single heart rate whether you're running up a hill or napping in your hammock. And maybe we can start to turn our focus to flow as the enabler and see the heart rate as the natural side effect of just being at work while being alive.

Monday, December 01, 2008

Momentous Oredev

As others have already said, Oredev was a really good conference – if not an outright great conference.

I’m hesitant to gush poetic about why Oredev might just have been a great conference, or why it was likely my favorite conference, because I’m not exactly sure I know which specific quality of Oredev might have made it great.

So, I can’t prove that it was great by pointing to one great thing that would make my point, and maybe this is why Oredev was great. It wasn’t over-stated. It wasn’t blown out of proportion. It didn’t try to be bigger than it was. It was just right.

The conference had a number of tracks, like any other conference. There was a Java track, a .NET track, a languages track, a leadership track, a testing track, an agile track, another .NET track, and a Domain-Driven Design track. But the people didn’t seem to be Java people, .NET people, agile people, etc. The people at Oredev just seemed to be interested in a whole lot of ideas no matter what their core competency or primary focus, and this, combined with the spirit of the event, set up a very collegial and very social time and space.

The theme of the conference was “The Software Development Renaissance”. It was pointed out during the conference panel that periods of renaissance are characterized by interdisciplinary pursuits, and personified by people who are predisposed to the intellectual curiosity that leads them to interdisciplinary works and investigation. This definition of renaissance likely applies to Oredev itself, and whether it was the intention or not, Oredev was quite possibly a great example of a renaissance conference.

It was also pointed out that the alt.net movement is likely a good example of an emerging software renaissance movement. The alt.net track at Oredev was very well received and was standing room only in each session.

If you’re an alt.net’er in Europe, Oredev is a welcoming gathering place for you, and I have no qualms with wholeheartedly suggesting that you gather there next year – and not just because there’s an alt.net track, but because the entire conference is brain candy for alt.net kinds of people. You might not have thought of Sweden as a bastion of alt.net support, but I think that the average alt.net'er will be pleasantly surprised with the experience.

I met so many great people and had so many awesome, buoyant conversations at Oredev. The conference to me was a collection of great moments. Those moments came together because a good group of passionate organizers, with community and business support, carved out the time and space for good people to interact, then stepped out of the way and let the quality of the speakers, delegates, and support staff, and their natural willingness to explore and exchange, become the social foundation not only for the conference’s content, but also for the lasting relationships that started at Oredev.

Oredev in one word for me is Momentous. I would say it was momentish, but that’s not quite the right word to capture the momentness of the gathering. Oredev succeeded in creating a conference that is much larger than an average open space gathering, and in scaling it without losing the intimate sociability of an open space.

At Oredev, it felt like people had a vested interest in each other; that ideas and knowledge were paramount; and that debate and exploration were sacred and not to be diluted with mindless pandering. In fact, I think the radical diversity of the participant body simply didn’t allow for a presumption of anything other than a conference of ideas and exchanges, free of the obstructive social mores that impede the dialog at the average mono-cultural vendor conference. This societal quality above all else makes the kind of conference that I want to be a part of, and that I’m very happy to have been a part of.

Many thanks to Michael Tiberg, Magnus Mårtensson, Linus Roslund, and Björn Granvik for infusing the event with their personalities and their spirit, and for being such awesome hosts, not to mention great new friends.

It was great hanging out with old friends and new friends in a great city that itself is very much a representation of renaissance values.

Monday, July 14, 2008

Sustaining Capacity in Maturing Agile Software Teams - Part 4: Counter Measures

Mature agile practices optimize learning and communication, enlisting all project artifacts and processes into the effort.

By creating a learning culture that defends its means of communication from entropy and obstruction, and that rigorously eliminates waste, a team can continue to optimize its performance, avoid bottlenecks, and satisfy customers without degrading throughput.

Smooth oscillations in team performance by:
  • Eliminating waste, and
  • Improving continuously
The obstructions and friction that an agile team deals with as it matures are often the result of waste. Waste permits the buildup of entropy that makes it difficult to improve continuously.

A production team will, at times, undergo radical improvements. These improvements are necessary and welcome, but they can be disruptive.

The team can avoid unnecessary disruptions by taking a tough stance on waste: learning to see waste, and to counteract it by sharpening its ability to recognize opportunities for continuous, incremental improvement.

A team can learn to counteract waste by working to clear its existing entropy buildup. The following practices can be effective counter measures:

Soluble Code
Soluble code is understandable with little effort on the part of the user.

Most software code is written without consideration for the reader. Developers using that code later, either to make a change to the system or to understand the system, have to decipher the code before its meaning can be unlocked and work can be done.

Soluble code is like a book with a table of contents. It allows the user to scan the content to discover where his work site is, and to quickly garner understanding of the intended purpose and functioning of the code without forcing the reader to read and decipher code that isn't germane to his present concern.

Solubility in code is as much about writing code that allows the user to identify and skip over non-relevant code as it is about writing relevant code that instantaneously transmutes into knowledge and understanding, and guides the present work to be done.

The time spent teasing understanding from code is waste. Code can be written so that users of the code spend the least amount of time focused on things that aren't of interest to the current task, and so that learning can happen opportunistically and osmotically.

Soluble code patterns highlight the essence of the code rather than the ceremony of the programming language or the frameworks being used.
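
As a small, hypothetical before-and-after sketch (in Python; the order-filtering example and all names are invented for illustration), the first function forces the reader to decipher mechanics to discover intent, while the second highlights its essence and lets the reader skip what isn't germane:

def proc(d, t):
    # Insoluble: the reader must decipher the mechanics to discover the intent.
    r = []
    for x in d:
        if x["status"] == "ACTIVE" and x["placed_at"] < t:
            r.append(x)
    return r

def active_orders_placed_before(orders, cutoff_time):
    # Soluble: the names transmit the intent, so a reader can scan,
    # confirm this isn't the work site, and move on without deciphering.
    return [order for order in orders
            if is_active(order) and was_placed_before(order, cutoff_time)]

def is_active(order):
    return order["status"] == "ACTIVE"

def was_placed_before(order, cutoff_time):
    return order["placed_at"] < cutoff_time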

Context/Specification
The Context/Specification pattern expresses each permutation of a use of a class as a separate test class, or context. Contexts are natural and recognizable uses of a class or subsystem or web page, etc. The pattern encourages test code, or specifications, that is much more soluble.

Contexts and specifications are written to describe the use of the system, or the experience, rather than the implementation of the system or test itself. This makes test code easier to navigate for team members who aren't familiar with any particular collection of tests and test code.

The experiential language permits the contexts and specifications to be exported and used as external documentation of the system, usable both by people who are intimately familiar with its design and operation and by people who are trying to gain familiarity, or to assess the correctness of the system against expectations.

Tests written by developers and by quality assurance are written in the same experiential language. This reinforces the shared language that the team members use to communicate with each other and with customers. This consistency helps to surface misunderstandings that may lead to defects and rework.

Context/Specification places a heavier onus on the precision of user stories and the expression of acceptance criteria. This causes the team to go deeper into analysis and design during planning, and to surface assumptions and misunderstandings that often cause missed expectations and rework.
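
The shape of the pattern, sketched in Python with hypothetical names (a shopping cart stands in for any class or subsystem): each context is a separate test class describing one permutation of use, and each specification reads as a statement about the experience rather than the implementation.

import unittest

class ShoppingCart:
    # A minimal subject, just enough to make the sketch runnable.
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

    def can_check_out(self):
        return len(self._prices) > 0

class WhenTheCartIsEmpty(unittest.TestCase):
    def setUp(self):
        self.cart = ShoppingCart()

    def test_the_total_is_zero(self):
        self.assertEqual(self.cart.total(), 0)

    def test_checkout_is_not_permitted(self):
        self.assertFalse(self.cart.can_check_out())

class WhenAnItemHasBeenAddedToTheCart(unittest.TestCase):
    def setUp(self):
        self.cart = ShoppingCart()
        self.cart.add(price=25)

    def test_the_total_reflects_the_item(self):
        self.assertEqual(self.cart.total(), 25)

    def test_checkout_is_permitted(self):
        self.assertTrue(self.cart.can_check_out())

if __name__ == "__main__":
    unittest.main()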

Test-Driven Development
Unit testing is a quality assurance activity. Unit testing proves the correctness of the software under development and helps to protect the team from creating new defects when making necessary changes.

Unit testing is often done after a developer has implemented a feature or function, or after a change has been made.

Tests are written before functional code when practicing Test-Driven Development (TDD). By writing tests first, developers are naturally led to think through their intended designs and implementations before committing to them.

Test-first programming allows programmers to design and code in smaller chunks. Object-oriented code is often best served when units of design are kept small. Loose coupling and high cohesion are the result of using fine-grained designs, and these qualities in turn cause code to be easier to understand, easier to maintain, and easier to test.

Smaller form factors help reduce duplication, which is the principal source of errors and defects stemming from inconsistency, and they also tend to lend themselves to greater reusability.

Developers use Test-Driven Development to create right-sized designs from the start, leaving a legacy of code that is supple, and easier to tune and adapt.

Test-Driven Development prevents the design rigidity that acts as an entropy attractor and amplifier.
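
A sketch of that design pressure, in Python with hypothetical names: specifying the notifier's behavior first, in isolation, forces the notification channel to arrive as a collaborator rather than a hard-wired call, and loose coupling falls out as a side effect.

import unittest
from unittest.mock import Mock

class LateOrderNotifier:
    # The test-first design pushed the channel out into a collaborator.
    def __init__(self, channel):
        self._channel = channel

    def check(self, order_age_in_days, threshold_in_days):
        if order_age_in_days > threshold_in_days:
            self._channel.send("Your order is running late.")

class NotifyingAboutLateOrders(unittest.TestCase):
    def test_a_late_order_notifies_the_customer(self):
        channel = Mock()
        LateOrderNotifier(channel).check(order_age_in_days=5, threshold_in_days=3)
        channel.send.assert_called_once_with("Your order is running late.")

    def test_an_on_time_order_sends_nothing(self):
        channel = Mock()
        LateOrderNotifier(channel).check(order_age_in_days=2, threshold_in_days=3)
        channel.send.assert_not_called()

if __name__ == "__main__":
    unittest.main()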

Design Improvement
When entropy has accumulated, more radical design changes are often required.

It's not desirable to keep a product team mired in renovations and repairs that put business and customer imperatives on the back burner. At the same time, it's very ineffective for a team to be working on business features while simultaneously doing design improvements on the code that those features are built upon.

Design improvement teams can work one iteration ahead of feature teams, softening up the rigidity in the code that the feature teams will be working on in the following iteration, to the extent that the current design and architecture allow.

There are reasonable limits of what a design improvement team can do since its work tends to focus on aspects, frameworks, and patterns that affect the implementation of a good deal of the application code.

Design improvement teams introduce seams and shims into frameworks and architectures to try to isolate feature teams from widespread destabilization of common code. They then do whatever they can to break ground in areas that will be affected in subsequent feature work.

Design improvement requires a greater effort in planning and design. A design improvement team works one iteration ahead of the current iteration. It doesn't implement the future iteration's stories, but considers the changes to frameworks, aspects, and architecture that will enable the feature team to create code that attracts less entropy, and it makes whatever changes it can to those areas of the system without upsetting the iteration in progress.
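
A hypothetical sketch of a seam, in Python (the reporting example and all names are invented for illustration): feature code depends on a thin shim with a stable signature, so the design improvement team can replace what sits behind it in a later iteration without destabilizing the feature work in progress.

class LegacyReportWriter:
    # The widely-used common code that needs improvement.
    def write(self, data):
        print("legacy formatting:", data)

class ImprovedReportWriter:
    # Ground broken ahead of the next iteration's feature work.
    def write(self, data):
        print("improved formatting:", data)

class ReportWriter:
    # The seam: feature teams depend on this shim, and the design
    # improvement team swaps the implementation behind it.
    def __init__(self, implementation=None):
        self._implementation = implementation or LegacyReportWriter()

    def write(self, data):
        self._implementation.write(data)

# Feature code is untouched when the implementation changes:
ReportWriter().write({"total": 42})                        # legacy, today
ReportWriter(ImprovedReportWriter()).write({"total": 42})  # improved, next iteration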

Synchronized Teams
Software development teams ideally work as synchronized units; however, some activities naturally happen sequentially.

Quality assurance testing naturally happens after software has been created; however, the amount of development work that is batched up before testing should be kept to a minimum in an effort to avoid waste.

Batches of untested software are inventory, and inventory has a material carrying cost as well as an impact on the ability to clearly see and understand the work in progress.

To reduce inventory, quality assurance testers must begin their test plans and implementations concurrently with the beginning of the design and implementation of a feature. Developers and testers begin work on a feature by planning together, and a feature team made up of developers and tester(s) works together throughout the production of the feature to remain synchronized.

Developers or testers slide in and out of development and testing functions in an effort to keep one function from getting too far behind the other, since falling behind leads to queuing and batching, and in turn to more waste.

A developer or developer pair that finishes his work well ahead of the testers should assist the testers. Developers remain abreast of testers' work so that this transition is as smooth as possible.

Developers should avoid moving on to the next feature until the testing is done on the current feature. Moving on to the next feature before testing is done is synthetic progress; while it might feel like an accomplishment in productivity for a developer, it often sub-optimizes the whole of production.

Friday, July 11, 2008

Sustaining Capacity in Maturing Agile Software Teams - Part 3: Recognizing Entropy

The visible and tangible differences between traditional phase-based development and agile development are much more obvious than optimizations made later within an agile practice. The next round of improvements can be harder for agile teams to embrace than the original effort to embrace agile development.

Each successive improvement is increasingly subtle relative to the previous improvements. Product development organizations usually have to first start seeing signs of emerging entropy before they have the necessary context to consider new, subtle improvements.

When a product development organization starts sensing entropy, its first decision should be when to act on it, rather than presuming to act on it immediately. Acting too early can mean that the team acts without sufficient context and understanding of the sources of entropy.

If the team recognizes frustration with its current ability to perform, and if it has some shared understanding of its constraints, then it can potentially begin to remove the constraints immediately.

The following are common issues for agile teams, and represent areas where bottlenecks and entropy collect:
  • Code. Code is often written merely to work and to execute, without consideration for the waste incurred by forcing teammates to decipher code in order to get at its purpose, and to work with it. Code is one of the primary means of communication on an agile team. It must be written to communicate to other programmers on the team, and must quickly generate understanding.
  • Design and Architecture. Systems and software work but are not readily understandable and adaptable by all members of the team. A small set of well-understood and familiar patterns is often over-used. Object-oriented design fundamentals are often esoteric and impenetrable during the building of a first or second-generation product. Developers often come to understand their significance only by introducing design friction that is seen in retrospect as a series of violations of software design laws and principles.
  • Tests. Initially, teams tend to craft tests as quality assurance efforts rather than design, specification, and documentation efforts. Tests should be easy to scan, enabling developers to get an immediate understanding of the system and the impact of the work they need to do, as well as an understanding of where to find their work site (or sites) within the code and supporting artifacts. Tests are the most important form of documentation on an agile team.
  • Development and QA Test Synchronization. Without greater synchronization between development and QA testing, valuable input often comes too late in a development or release cycle to be effective. Test design and test architecture are valuable inputs to development, and software design and architecture are valuable inputs to QA testing. Teams often lose unrealized capacity by not pursuing means to do more development and testing in parallel.
A software team continues to adjust and refine its practices based on the friction it faces and the observations it makes. Entropy still collects.

A critical mass is inevitably reached and the team makes course corrections that are often broader than the iteration-to-iteration practice and tool calibrations.

Thursday, July 10, 2008

Sustaining Capacity in Maturing Agile Software Teams - Part 2: Entropy

Teams that have adopted an agile approach to software development adopt new disciplines in an attempt to reach a level of productivity and effectiveness above their achievements with previous approaches or methods.

There are common practices and disciplines found on most agile teams, including:
  • Developer testing (unit testing, and possibly Test-First Programming)
  • User stories (analysis and scoping)
  • Time-boxed delivery cycles (iterations)
  • Continuous Integration
  • Automation
  • Collaborative work
  • Deep customer involvement
  • Rapid feedback
Agile teams use these skills and tools to deal effectively with the inevitable change that is part of software development.

Previously, the team may have obstructed necessary change because it lacked effective counter measures that accommodate the business and the team in adapting to new opportunities and new constraints.

Counter measures often produce inherent bottlenecks that point to the need for subsequent optimizations. Bottlenecks manifest in the following ways:
  • Code mass growth. By adopting developer testing, the team has volunteered to double the amount of code and systems artifacts that it maintains. It maintains this added code without increasing the team's resources, often gradually degrading the team's capacity.
  • Design rigidity. The team uses traditional software designs that aren't optimized for the rate of change that the surrounding business and the team itself have become acclimated to.
  • Growth constraints. The pace of an agile team, and the incremental successes that allow for the advancement of a product, drive a need to grow the team and its people. New people are brought in to increase capacity, but it takes longer than expected or desired for them to become meaningful contributors to the effort.
These constraints are symptoms of the entropy that subsequent optimizations should address.

Left unaddressed, production entropy will constrain the business's ability to take advantage of new opportunities, which will ultimately lead to decreased throughput and entropy in the business itself.