Friday, December 19, 2008

Acceptance Tests

From a programmer's perspective, having customers write their own acceptance tests is a tremendous advantage. It brings customers closer to the project, allows them to take ownership of their expectations of the team, and provides a means to prove that the team is in fact delivering.

It often also gives customers a sense for the effort involved in verifying that software meets expectations, the pace of this work, and the level of detailed thinking required to pull it off.

We've gone to great lengths to build tools and instill supporting practices that allow customers to express their expectations in structured formats that can be executed directly by customers. It's not exactly as easy as it sounds, though: programmers have to do some development work to make the customer's tests actually executable, and to maintain the technology.
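To make that trade-off concrete, here is a minimal sketch - not Fit or any particular tool, just an invented illustration in Python - of what this style of testing involves: the customer fills in a table of inputs and expected results, and the developers still have to write and maintain the glue that parses the table and drives the real code.

```python
"""Sketch of a customer-editable, table-style test. The shipping-fee rule,
table format, and parsing glue are all invented for illustration."""


def shipping_fee(order_total: float) -> float:
    # Stand-in production rule: orders of 50.00 or more ship free.
    return 0.0 if order_total >= 50.00 else 5.00


# The part a customer might plausibly write and maintain.
CUSTOMER_TABLE = """
order total | expected shipping fee
20.00       | 5.00
49.99       | 5.00
50.00       | 0.00
"""


def run_table(table: str) -> list[str]:
    """Developer-maintained glue: parse each row and check it against the code."""
    results = []
    rows = table.strip().splitlines()[1:]  # skip the header row
    for row in rows:
        total_text, expected_text = (cell.strip() for cell in row.split("|"))
        actual = shipping_fee(float(total_text))
        verdict = "pass" if actual == float(expected_text) else f"FAIL (got {actual:.2f})"
        results.append(f"order total {total_text}: {verdict}")
    return results


if __name__ == "__main__":
    print("\n".join(run_table(CUSTOMER_TABLE)))
```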

We're usually able to convince customers to participate in this style of acceptance testing. The proposition of having an immediate and automated way to prove that the work done by developers was done to spec is compelling. It's usually an easy sell. Unless a customer has been through it before, though, he inevitably underestimates the effort, and programmers inevitably overstate the ease.

In the end, the effort of writing acceptance tests with these tools is closer to programming than to traditional means of communicating expectations. Customers often abandon the customer testing initiative once they get a clear picture of what it really means to be a tester - even with the tools that we've cooked up for them. The development team then becomes responsible for maintaining these tests with tools intended for end users, rather than with tools appropriate to programmers.

As software developers, we overstep our bounds when we invite customers to become testers. Customers want to tell us what they want; they want to know that we understood their expectations; they want to know how long it might take to build the software; and they want to use it when it's done. And that should be more or less the full extent of the expectations that a development team has of a customer.

If a customer is really valuable to the team, then the customer's input is likely informed by his ongoing experience in the business - experience that we shouldn't expect him to put on hold while he writes tests using tools that programmers think are appropriate and sufficient, but that his experience suggests are neither.

We have great, simple tools that developers and testers use for testing software. We've had them for some time. The outputs of these tools haven't been of much use to customers in understanding how much of an application is complete, what the application does, and whether the software actually works. This is more a failure on the part of development teams, who haven't made the artifacts produced by these tools more human-consumable and more informative.

By bringing a usability focus to the creation of tests, development teams can write tests that suit a customer's need to know as well as the team's need to write tests.

Contemporary testing frameworks, augmented with updated test authoring (and authorship) principles and practices, can close the gap that we had presumed to solve by burdening customers with testing. We can use these frameworks and practices to provide visibility into detailed project progress without requiring those who want that visibility to become testers.

We can export meaningful specifications of the software from well-authored tests, and customers can read and even sign off on these if necessary. These tests need to be crafted from the start for this purpose though, and this is a practice that developers at large haven't picked up yet. The problem of writing good tests becomes a problem of competent engineering as well as a problem of authorship. We're called on to exercise our abilities as authors and communicators as well as programmers, becoming better at both as we go.
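As a rough sketch of what exporting a specification from well-authored tests could look like - the naming convention, directory layout, and output format here are assumptions, not a description of any particular tool - a small script can walk a unittest suite and render its class and method names as plain language:

```python
"""Render a plain-language specification from descriptive test names, e.g.
"ReturningCustomerDiscounts" / "test_three_prior_orders_earn_ten_percent_off".
Illustrative only; assumes tests live under a "tests" directory."""
import re
import unittest


def words_from_class_name(name: str) -> str:
    # "ReturningCustomerDiscounts" -> "Returning Customer Discounts"
    return re.sub(r"(?<!^)(?=[A-Z])", " ", name)


def words_from_method_name(name: str) -> str:
    # "test_three_prior_orders_earn_ten_percent_off" ->
    # "three prior orders earn ten percent off"
    return name.removeprefix("test_").replace("_", " ")


def flatten(suite):
    """Walk a nested TestSuite and yield the individual test cases."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from flatten(item)
        else:
            yield item


def export_specification(suite: unittest.TestSuite) -> str:
    lines, current_case = [], None
    for test in flatten(suite):
        case = words_from_class_name(type(test).__name__)
        if case != current_case:
            lines.append(case)
            current_case = case
        lines.append(f"  - {words_from_method_name(test._testMethodName)}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(export_specification(unittest.defaultTestLoader.discover("tests")))
```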

However, even with these tools we developers are still suffering the delusions of our own presumptions.

Some customers may want to dive into the details up to their eyes, but many customers just want to get the software in hand and get their jobs done. We may be inclined to think of this as negligence and naivety on the part of customers, and maybe it is in some cases, but we need to see our own biases in presuming that customers should be neck-deep in the details.

Software teams need domain experts and people with product design expertise to give them direction, and re-direction. Why do we believe that having customers embedded in the project team is the only way to achieve this? There's no doubt that having a customer in the midst is effective, but is it the only way?

What if our product managers had deep experience in the problem domain, and if they were domain experts themselves? What if they were also competent engineers and product designers? What if they could speak for the customer? And what if they could still code?

If we had one of these people leading our product development efforts, would we need to have customers writing acceptance tests?

The Toyota Product Development System defines the Chief Engineer role. This person (along with a staff, on large projects) is a domain expert, product designer, customer voice, and an engineer of towering technical competence.

A Chief Engineer in a software product development organization can write acceptance tests without the burden of elaborate end-user testing tools. He can use the common tools of the trade. He understands the imperatives of using tests as documentation and uses usability-focused test authorship, and sets standards for authorship that his organization cultivates and follows.
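For illustration only - the domain, the discount rule, and the helper names below are all invented - an acceptance test written in that spirit with a plain xUnit-style framework might read something like this:

```python
"""A usability-focused acceptance test written with an ordinary developer
framework. The stand-in domain code is included only so the example runs;
in practice it would live in the production codebase."""
from dataclasses import dataclass
import unittest


@dataclass
class Customer:
    prior_orders: int


@dataclass
class Order:
    total: float


def a_customer_with_prior_orders(count: int) -> Customer:
    return Customer(prior_orders=count)


def place_order(customer: Customer, amount: float) -> Order:
    # Invented rule: three or more prior orders earn a 10% loyalty discount.
    discount = 0.10 if customer.prior_orders >= 3 else 0.0
    return Order(total=amount * (1 - discount))


class ReturningCustomersGetALoyaltyDiscount(unittest.TestCase):

    def test_a_customer_with_three_prior_orders_pays_ten_percent_less(self):
        customer = a_customer_with_prior_orders(3)
        order = place_order(customer, amount=100.00)
        self.assertEqual(order.total, 90.00)

    def test_a_first_time_customer_pays_full_price(self):
        customer = a_customer_with_prior_orders(0)
        order = place_order(customer, amount=100.00)
        self.assertEqual(order.total, 100.00)


if __name__ == "__main__":
    unittest.main()
```

The class and method names read as statements of expectation, which is what lets the same test serve as documentation for readers who will never open the fixture code.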

Tools in the Fit lineage have their place, and can be valuable, but in many cases they are a sign of a problem that might better be dealt with as an organizational problem rather than a tooling problem.

If your product development organization isn't led by someone along the lines of Toyota's Chief Engineer, then you're going to have to put some compensators in place for not having a better system for product development. One of those compensators might indeed be having customers write tests with Fit (or a similar tool), but supporting the practices these tools engender often ends up being quite expensive, and their use should be constrained and supplemented with more contemporary approaches to the testing/specification/documentation problem.

We have yet another problem to solve along the way: we believe that some tests should be written in the arcane, overly-technoized authorship style typical of developers who get lost in the details and forget that code is for humans as well as computers. Computers need little help in reading even the most arcane program code, and so our code authorship decisions should be made almost always in consideration of human readers.

There's a debilitating bias that software developers have that constantly works against our ability to produce well-authored, usability-focused test code: we believe that there's such a thing as an acceptance test.

All tests are acceptance tests. If you have test code that doesn't participate in proving the acceptability of your software, then it's likely either not test code, or not needed.

The differentiator of test code as either acceptance test code or some other kind of test code seems to be the readability of the code by real people. If non-programmers can understand a test, or its output, then the test is likely an acceptance test. We permit all other test code to be written so that only technophiles can benefit from the knowledge that it contains, and often obscures.
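To make the contrast concrete, here are two tests of the same invented rule, written with the same plain framework: the first in the technophile-only style, the second authored so that a domain reader could check it.

```python
"""Two tests of one invented late-fee rule; only the authorship differs."""
import unittest


def late_fee(days_overdue: int) -> float:
    # Invented rule for illustration: 0.50 per overdue day, capped at 10.00.
    return min(days_overdue * 0.50, 10.00)


class TestLateFee(unittest.TestCase):
    # Arcane authorship: magic numbers, no statement of the business rule.
    def test_lf_1(self):
        self.assertEqual(late_fee(25), 10.00)


class LateFeesStopAccumulatingAtTheCap(unittest.TestCase):
    # Usability-focused authorship: the name and the body state the rule.
    def test_twenty_five_days_overdue_is_charged_the_cap_not_twelve_fifty(self):
        fee = late_fee(days_overdue=25)
        self.assertEqual(fee, 10.00)


if __name__ == "__main__":
    unittest.main()
```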

There is a bitter irony in this whole techno-cultural bias against the need for usability in code. When tests are written for non-programmer readers, programmers also benefit from the clarity and usability of the code when working on unfamiliar or even forgotten parts of a system.

We need to surface knowledge from the code that we write. We need this knowledge to support communication between everyone with a stake in the project - from programmers to customers. We need this knowledge to help us understand whether the product being built meets expectations and to understand the progress of the project.

Achieving this doesn't necessarily require end-user editable tests and an embedded customer. Continuing to believe that we must have them to succeed will dull our ability to consider meaningful alternatives that may benefit our efforts and organizations over and above proving that our software products meet expectations.

We need domain experts, leaders, engineers, product designers, customer voice, and business sense. And we also need means of proving our products and communicating proof and expectations. Our organizations allow us a range of degrees of achievement depending on what kinds of people, processes, and tools we can put into play.

Some organizations allow for greater achievement than others, and we can strive to learn how they succeed and even be influenced by them. We can get stuck in a rut if we don't realize that the people, processes, and tools we lean on today are often expressions and reflections of our current organizations and biases rather than a recipe for achieving our best potential.