With agile processes and the latest development tools, has more modern software development practice outgrown the need for formal inspections?

Considerable research throughout the 1970s, 1980s and 1990s repeatedly found that formal design and code inspections, when done well, significantly increase software development productivity and decrease the time and cost of testing. Over the last decade, however, development tools have improved substantially and many organizations have adopted so-called agile approaches and processes. For those that have not embraced, or cannot embrace, agile approaches, and continue to use traditional tools and processes to produce software, one of the best quality assurance improvements a team or organization can make is still the introduction of effective inspections. The question is whether, with agile processes and the latest development tools, more modern software development practice has outgrown the need for formal inspections.

Agile Processes and Inspections

For a large number of people, agile processes mean either Scrum, eXtreme Programming, Lean Software Development, or some combination of all three.

  • eXtreme Programming discusses inspecting design and code, recognising it as a good practice, and 'turning that up to eleven' so that inspection happens all the time through pair programming. I have no doubt that pair programming generally produces higher-quality code than developers working alone without any form of walkthrough, review or inspection. That pair programming is a general improvement over formal inspections, however, remains unproven and, in my experience, unlikely; I have yet to see any irrefutable evidence or convincing reasoning supporting such a hypothesis. In addition, my experience working with teams professing to be 'doing XP' is that pair programming is the first practice to be dropped, whether by managers when a project comes under schedule pressure or by developers who find it does not deliver enough benefit to them. This leaves an XP project without any check on internal quality. Unit tests and reviews of function with customers check the externally visible aspects of the software. Inspections also check that the software is written so that it is easy to understand, maintain and extend.

  • Scrum neither forbids nor recommends any engineering practice per se. You will not find anywhere in Ken Schwaber's books where a team is forbidden from holding a formal design or code inspection, or where pair programming is mandated. Neither will you find anywhere in those books where a team is recommended to hold one or forbidden to practice pair programming. Scrum is about effectively managing living, prioritised lists of requirements and sequences of short development iterations for small, autonomous development teams. The technical details of how developers create, check and deliver the items on the backlog are left to the team to decide.

  • Lean Software Development and related ideas like Kanban derive from the Toyota Production System and the Theory of Constraints. One of Toyota's six rules for kanban to be effective is "Do not send defective products to the subsequent process". Despite this, I can find little about the use of formal inspections in these software development derivatives. According to posts on kanban discussion groups, kanban, like Scrum, is neutral when it comes to particular engineering practices. Nevertheless, the focus in these approaches is on limiting work in progress, eliminating waste, and identifying and removing bottlenecks in the process.

In contrast to eXtreme Programming, Scrum, Lean, and kanban, Jeff De Luca's less famous, less hyped Feature-Driven Development (FDD) insists on formal design and code inspections, adapting them and embedding them within a process that remains dynamic, highly iterative, and client-valued-requirement-driven.

Given this, the question of whether or not to perform formal inspections is answered no in the case of XP and yes in the case of FDD. For traditional teams, the answer is almost certainly yes, given the evidence from previous generations of software development. Whether to do inspections or not therefore remains a question only for teams following Scrum, Lean, Kanban, etc., or some home-blended mixture of these.

To answer this question for agile approaches in general, we look first at traditional inspections to understand what we might adapt, then at how FDD has adapted inspections, compare inspections with pair programming and, finally, suggest some ideas for introducing inspections in other agile approaches.

Traditional Inspections

Obviously, the question is not as simple as whether to do formal inspections or not. There is also the question of whether a team is able to perform inspections well. Doing inspections badly is no good to anyone, any more than practicing pair programming or collective ownership poorly is useful.

The definitive book on doing inspections well is generally regarded to be Tom Gilb and Dorothy Graham's Software Inspection. This builds on pioneering work by Michael E. Fagan at IBM. Published in 1993, the book reads as quite dated now, especially the examples and case studies. Nevertheless, we can still examine the underlying principles, techniques and strategies it describes for performing effective inspections in a traditional development environment. Once a team understands these, it is in a far better position to compare and contrast inspections with pair programming as a means of assuring quality, and to consider how it might adapt inspections to fit the specific ways it works, agile or not.

Purpose and Benefits

The primary purpose of a formal design and code inspection is the removal of defects from the items being inspected. The idea is to identify and remove defects as early as possible because it costs significantly more to identify and remove defects later in a project. This is no different from the idea of customers 'inspecting' and giving feedback on completed work at the end of each development iteration (sprint) in Scrum. In both cases, the cost of inspecting is assumed to be significantly less than the cost of identifying and removing defects later. This assumption must prove to be true; otherwise inspections are not worthwhile.

In addition to defect removal, formal inspections should, if done well, provide a number of other benefits (see figure 1):

Inspection Outputs

Figure 1: Inspection outputs

1. Knowledge Transfer

Examining the designs and code of experienced, knowledgeable developers, and having them walk through their work explaining the techniques they use, enables less experienced developers to learn from them. Similarly, those new to the organization, or maybe just new to the project team, quickly learn how things are done in that organization or team. Inspection leaders know to look for and facilitate this knowledge transfer during an inspection, but only in so far as it does not distract from the primary purpose of removing defects. Inspections are not a vehicle for senior developers to wax lyrical for hours about their experience. Inspections are about improving quality, primarily through the identification and removal of defects and secondarily by increasing the knowledge and skill of developers.

2. Process and Guideline Improvement

As well as listing defects, reviewers in an inspection may make suggestions such as:

  • adding items to checklists of things to do before inspections, before checking code into a source control or configuration management repository, or when defining unit or integration tests.
  • improving configuration settings in compilers, in an integrated development environment (IDE), or in a static code analysis tool to detect simple defects automatically, enabling inspections to concentrate on more major issues.
  • changing design and coding standards used by the team or even the organization as a whole.

3. Requirement Additions and Updates

Any kind of habitual review, walkthrough or inspection scheme is likely, at some point, to discover bad assumptions made about requirements. The result is new requirements or updates to existing ones. In traditional development, such assumptions may otherwise only surface at the end of the project during system test, or not until after the software has been released and deployed into production.

4. Quality Metrics

If we capture and examine metrics on the type and number of defects found, common problem areas reveal themselves. Once these problem areas are known, they can be fed back to developers and addressed. Additionally, the metrics can prove useful in convincing management of the benefit of the inspection process, and in further tailoring it to be more effective. Alternatively, they can prove that the current inspection scheme is not effective and should either be dropped altogether or changed to improve it.
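Revealing common problem areas from logged defects needs very little machinery. As a minimal sketch in Java (the category names are purely illustrative, not part of any prescribed scheme), counting logged defects by category and sorting the result is often enough to make the dominant problem area obvious:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: aggregating inspection defects by category so that
// common problem areas reveal themselves. Category names are illustrative.
public class DefectMetrics {

    static Map<String, Integer> countByCategory(List<String> defectCategories) {
        // TreeMap keeps categories sorted, which makes the summary readable.
        Map<String, Integer> counts = new TreeMap<>();
        for (String category : defectCategories) {
            counts.merge(category, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> logged = List.of(
            "null-handling", "commenting", "null-handling",
            "exception-handling", "null-handling");
        System.out.println(countByCategory(logged));
        // prints {commenting=1, exception-handling=1, null-handling=3}
    }
}
```

A spreadsheet does the same job, of course; the point is only that the metrics need not mean heavyweight tooling.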

Agile processes generally approve of, if not insist on, knowledge sharing, continuous improvement of the process, and adapting to changes in requirements. Metric collection is, therefore, the only secondary benefit of inspections that might be considered irrelevant in an agile project. Nevertheless, the uses to which the metrics are put, process improvement and ensuring a project activity is worthwhile, fit comfortably within agile thinking. Of course, this assumes all this can be done without drowning the team in paperwork or seriously distracting it from the frequent delivery of tangible, working software.

The Inspection Process

Tom Gilb's book explains in great detail the process of running formal inspections within a traditional software development process. The book contains numerous procedures, checklists, rules, and other related paraphernalia. These are likely to turn off those used to the scantily-clad process descriptions of agile approaches. What agile teams like is not mountains of documents and procedures that obscure the underlying process but something stripped down to the bare essentials that can be described in one or two pages. In my experience, developers on traditional teams prefer this too.

Traditional inspections inspect documents. Everything to be inspected, requirements, design, code, tests, etc, must be presented as a formal document. This is not always the case in traditional development environments, and far less likely in many agile approaches. This is one area where traditional inspections need to be adapted to fit an agile approach.

In addition, agile approaches are always highly iterative in nature. This will inevitably make inspections look a little different in these environments, even if, in practice, the differences are small tweaks in emphasis, level of formality, and terminology. The fundamentals obviously need to remain the same if the same benefits are to be derived.

Figure 2 highlights the fundamental aspects of inspections. Yes, it shows a process and might immediately turn off some agilists. However, processes, like tools, are not evil as long as they serve to facilitate and guide 'individuals and interactions', rather than inhibit them. Figure 2 splits inspections into four activities (the pinks) with key inputs and outputs (the greens). Considering each of the pinks in turn:

Inspection Process

Figure 2: Inspection 'Process'


Preparing for an inspection involves:
  • identifying an organiser or leader for the inspection,
  • checking the items to be inspected are ready for inspection,
  • splitting large items into manageable portions,
  • identifying and assigning reviewers,
  • scheduling a meeting for the inspection.

The Inspection Leader

In a traditional environment, trained inspection leaders organise and facilitate inspections at the request of the authors of items needing inspection.

The Entry Checklist

Once a leader is identified for a particular inspection, their first task is to ensure that the item being inspected is ready. The easiest way to do this consistently is to agree and maintain a checklist of entry criteria. Authors are expected to have compared their work against the checklist before asking for an inspection. If the items to be inspected fail any of the checks in the checklist, the authors must correct the problem before asking for an inspection again. Few things are more frustrating than discovering the authors of items being inspected have not done the basic work needed to make the inspection worthwhile. Without agreed entry criteria, it is all too tempting for some authors to submit items too early, expecting team-mates to do their thinking and basic checking for them.

The inspection leader prints and signs a more formal checklist page to prove that the entry criteria were all fulfilled by the items to be inspected. The signed checklist forms part of the formal output of the inspection. Alternatively, this can be done on-line using a purpose-built or suitable general-purpose application.

Code Inspection Entry Checklist For Project X

Inspection of:

1. Does the code compile without errors or unnecessary warnings?                   [   ]
2. Have all the required unit tests been written, and do they pass without
   reporting any problems?                                                         [   ]
3. Does the code pass all the agreed static analysis tool checks?                  [   ]
4. Is the code commented sufficiently, and does Javadoc or equivalent run
   without reporting errors?                                                       [   ]
5. Has the code been formatted using the agreed tool and agreed settings so
   that it is easier to read because the format is familiar?                       [   ]
6. Have all the code and related files been checked into the team's source
   control / configuration management repository?                                  [   ]

Checked By:

Figure 3: Formal Inspection Entry Checklist Example
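The 'purpose-built application' alternative mentioned above needs only a little code. A minimal sketch in Java; the class and method names here are illustrative assumptions, not a description of any real tool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an on-line entry checklist: criteria are ticked off
// one by one, and the inspection may only proceed once all are ticked.
public class EntryChecklist {

    // LinkedHashMap preserves the agreed order of the criteria.
    private final Map<String, Boolean> criteria = new LinkedHashMap<>();

    EntryChecklist(String... criterionDescriptions) {
        for (String c : criterionDescriptions) {
            criteria.put(c, false);
        }
    }

    void tick(String criterion) {
        if (!criteria.containsKey(criterion)) {
            throw new IllegalArgumentException("Unknown criterion: " + criterion);
        }
        criteria.put(criterion, true);
    }

    // Equivalent to the leader signing the printed checklist page.
    boolean readyForInspection() {
        return criteria.values().stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        EntryChecklist list = new EntryChecklist(
            "Code compiles without errors or unnecessary warnings",
            "All required unit tests pass");
        list.tick("Code compiles without errors or unnecessary warnings");
        System.out.println(list.readyForInspection()); // prints false
        list.tick("All required unit tests pass");
        System.out.println(list.readyForInspection()); // prints true
    }
}
```

A real application would also record who ticked each criterion and when, to replace the signature on the printed page.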


Two hours inspecting something is usually more than enough time to spend in one go. Any longer and fatigue sets in, rapidly reducing the effectiveness of the inspection. Therefore, an inspection leader must assess whether an item is too big or complex to inspect in one sitting. With traditional development, items are quite often too large to be inspected in one go, and both design and code inspections frequently need to be split into digestible pieces.

Reviewers and Roles

An inspection leader must also decide who to involve in an inspection and what specific role, if any, each attendee needs to play. Forming an inspection team can become a significant task. In some cases it can be difficult to secure the time and enthusiasm of the particular people who need to be involved.

An inspection leader arranges for the items needing inspection and any supporting documents to be distributed to the reviewers in the format agreed by the team. For example, if source code is being printed for inspection, it is printed in the agreed format.

Reviewers may be left to check the items needing inspection in their own way. Alternatively, the inspection leader may ask each reviewer to cover specific areas of the items, inspecting them from a particular perspective. This is often far more effective than simply leaving reviewers to their own devices, especially those with little experience of inspections or those under time pressure from other tasks. Tom Gilb's book suggests a number of perspectives to inspect from such as testing, usability, architectural, maintainability, standards compliance, or even simply checking from the back of a document forwards.

Scheduling a Meeting

Finally, the inspection leader needs to schedule the inspection meeting. Remember to book a suitable room in advance ensuring the room has a white board available. Scheduling an inspection before lunch or before home time increases the likelihood of finishing on time. Inspections held immediately after lunch are not often a great idea for obvious reasons. In addition, enough time must be given to reviewers to check the materials needing inspection before the meeting is held. Often this means giving the reviewers at least a full working day in which to do so to fit the task around other work.


To be able to examine the effectiveness of inspections, it is important to record the size (pages of source code, number of design diagrams, etc) of the items under inspection.


Before attending the inspection meeting, each reviewer is expected to work through the items being inspected. Reviewers take notes of any issues they spot and suggestions and questions that come to mind. Reviewers should work alone and according to any specific roles and tasks assigned to them by the organiser of the inspection.

The easiest way for reviewers to note issues, suggestions and questions is to scribble them in place on a printed copy of the item being inspected. For code inspections, this is obviously a printout of the source code. While this works for most people, reviewers are free to choose whatever means they prefer of noting down their observations, as long as it is time and cost effective.

Reviewers need to take reviewing items seriously, scheduling time for this work in their personal calendar/work plan. Rushing the checking of the design or code over lunch, or trying to do it on a crowded bus or train at the end of a long day, is not a good idea. Having tried on a few occasions to inspect code on the top deck of a double-decker bus on the commute home during the Singapore rush-hour, I don't recommend it. Squeezing inspection work into an already over-busy day and checking with tired eyes can significantly reduce its effectiveness.

Design Inspection Checking

For design inspections, reviewers are looking for:

  • discrepancies between requirements and the design

  • problems in the design e.g. inconsistencies, gaps, overly complicated areas, etc.

  • compliance with agreed design patterns and standards

Code Inspection Checking

Code inspections are similar, with reviewers looking for things like:

  • differences between agreed design and implementation

  • logical inconsistencies, omissions, premature optimisations, unintended null references, etc

  • overly-clever or complicated code. Unless genuinely needed for performance purposes, it demonstrates more skill to code something simply and clearly than to produce 'clever' or complicated code. Long chains of Linq statements generally fall into this category in .Net code, for example.

  • badly commented code, etc. If the reviewers in the inspection cannot understand a piece of code without a comment, or cannot understand an existing comment in the code, it is definitely a problem worth fixing because it guarantees no-one will understand it in a few months' time.

  • bad inefficiencies. The opposite of premature optimisation: examples include unnecessarily opening and closing files in each iteration of a loop, appending to immutable character strings in loops in languages like Java and C# that provide more efficient StringBuffer/StringBuilder alternatives, etc.

  • inappropriate use of language features, etc. Some developers will try and find any excuse to weave in the use of a cool new language or technology feature. On one project afflicted in this way, I announced a competition for the first developer to use every new feature in a new edition of the Java programming language. The prize was ... the sack!
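The string-appending inefficiency mentioned above is a classic of the genre, and worth a concrete illustration. The snippet below is a hypothetical before-and-after of the kind a reviewer might log: the first version creates a new immutable String on every append; the second uses StringBuilder as the text suggests:

```java
// Hypothetical example of a defect a code inspection might log, and the
// correction a reviewer might suggest.
public class ReportBuilder {

    // Flagged in inspection: each += allocates a new String and copies all
    // previous characters, so joining n lines costs O(n^2) in total.
    static String joinSlow(String[] lines) {
        String result = "";
        for (String line : lines) {
            result += line + "\n";
        }
        return result;
    }

    // Suggested correction: StringBuilder appends in amortised constant time.
    static String joinFast(String[] lines) {
        StringBuilder result = new StringBuilder();
        for (String line : lines) {
            result.append(line).append('\n');
        }
        return result.toString();
    }

    public static void main(String[] args) {
        String[] lines = { "alpha", "beta" };
        // Both produce "alpha\nbeta\n"; only the cost differs.
        System.out.println(joinSlow(lines).equals(joinFast(lines))); // prints true
    }
}
```

Note that both versions are functionally identical, which is exactly why unit tests alone would never catch this; it takes a human reader, or a good static analysis tool, to spot it.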

Reviewer Checklists

It’s amazing how creating a simple checklist of things to look for can improve the effectiveness of a reviewer. A checklist that starts as simple as something like:

  • Transactions:
    • scope,
    • propagation,
    • rollback scenarios
  • Security:
    • authentication,
    • authorisation,
    • auditing
  • Persistence:
    • format,
    • lazy-loading
  • Exception handling
  • Event Handling
  • Logging and tracing
  • Caching:
    • read or read/write,
    • expiration,
    • refresh
  • Testing:
    • positive test scenarios
    • negative test scenarios
  • Configuration options

really does help reviewers cover the ground that they need to when checking designs. This is especially true if the team work together to create the initial list.

I have also found that if the items on the list are kept concise, the team quickly starts to remember most of them, especially the ones that are truly useful. After a while many of the items can be removed from the list because they are checked by habit. This makes room for new items arising from issues found and suggestions made during inspections, testing and customer demonstrations.

Design inspection checklists tend to be harder to create than code inspection checklists. Code inspection checklists are typically derived from the headings in existing coding standards, ignoring the items that can be checked automatically by compilers, static code analysis tools, and tools like Javadoc that generate documentation from source code comments. In contrast, too many design review checklists are over-wordy because the authors cannot resist including explanations of why items are on the list. The list is a memory aid and an organising construct, not an instruction manual or pedagogical essay. One way to avoid this is to think of a template for a traditional design specification document that has instructions for what to put in each section. Then consider what the table of contents for that document might look like and use that as the starting point for the checklist.


To be able to examine the effectiveness of inspections, it is important to record the time taken by each reviewer to check the items under inspection.


In what the Software Inspection book calls the logging meeting, all the reviewers in the inspection meet together to review, agree on, and record their observations. The idea is to log all the items found with the minimal amount of discussion. Three types of item are logged:

  • issues - potential problems with the design or code
  • questions - items that are not understood, are ambiguous, or are not clear
  • suggestions - possible improvements to the design or code, or to the process

The meeting usually proceeds page by page, with reviewers volunteering any items they have noted on those pages. One person in the meeting, nominated as the scribe either before the meeting by the inspection organiser or by general consent at the beginning of the meeting, records all the items in a simple list. Typically this is done on paper and typed up after the meeting if an electronic copy is useful. Figure 4 shows an example of a typical list of items from a code inspection meeting. Each item:

  • is numbered,
  • records the line number in the file where the item was found,
  • indicates whether the item is an issue, question or suggestion,
  • gives a brief description,
  • indicates the severity of the issue, either high or low,
  • provides a place to record, after the meeting, that the item has been addressed.

Code Inspection Item List Project X

Inspection of:
Date:                                                                  Page:

Item #  Line #  Type (I/Q/S)  Description of Item                                           Severity (H/L)  Corrected

                              Person Class

                              Null parameter check needed                                                   [   ]
                              Doc for InvalidDateException missing in comment                               [   ]
                              Add running api doc gen to inspection entry checklist                         [   ]

                              Loan Class

                              Can percentage value supplied by user be greater than 100%?                   [   ]
                              Possible divide by zero in calc                                               [   ]

Figure 4: Inspection Meeting Item List Example
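For teams that prefer to keep the log electronically rather than typing up paper forms, the columns of the item list map naturally onto a small data type. A minimal sketch in Java; all class, field and method names here are illustrative assumptions, not a prescribed format:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an electronic inspection item list matching the
// columns of the paper form: number, line, type, description, severity.
public class InspectionLog {

    enum Type { ISSUE, QUESTION, SUGGESTION }
    enum Severity { HIGH, LOW }

    record Item(int itemNumber, int lineNumber, Type type,
                String description, Severity severity, boolean corrected) {}

    private final List<Item> items = new ArrayList<>();

    void log(int lineNumber, Type type, String description, Severity severity) {
        // Items are numbered in the order they are logged, as on the form.
        items.add(new Item(items.size() + 1, lineNumber, type,
                           description, severity, false));
    }

    // One of the simple metrics the form makes easy to extract.
    long highSeverityIssueCount() {
        return items.stream()
                .filter(i -> i.type() == Type.ISSUE && i.severity() == Severity.HIGH)
                .count();
    }

    public static void main(String[] args) {
        InspectionLog log = new InspectionLog();
        log.log(42, Type.ISSUE, "Null parameter check needed", Severity.HIGH);
        log.log(77, Type.SUGGESTION, "Add api doc generation to entry checklist",
                Severity.LOW);
        System.out.println(log.highSeverityIssueCount()); // prints 1
    }
}
```

The structured form, paper or electronic, is what makes the counts and severity breakdowns described below cheap to produce.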

Instead of filling in a form, the items can be noted on the author's copy of the items being inspected. While this is slightly faster and easier to do in the meeting, it is also easier to miss addressing a particular item afterwards, and less easy to:

  • double check that all items have been addressed
  • record how many items of each type were found
  • record how many of the issues found were high severity and how many were low

While it is important not to try and resolve issues within the meeting, it is just as important to allow the team time to explore potential major issues. The meeting is intended to be more than a robotic merging of the items found by each reviewer into a single list. The meeting is also intended to uncover further defects, communicate ideas, and transfer knowledge. For this to happen, the meeting needs to be facilitated well. This is one of the inspection leader's main responsibilities.

It is imperative that reviewers do actually check the material before the inspection meeting. If the inspection leader finds that more than one of the reviewers has failed to check the code/design, he or she should postpone the inspection. The reviewers can then use the meeting time to check the material in preparation for the rescheduled inspection. Do not let the reviewers off the hook, as this will start a vicious cycle that will seriously undermine the usefulness of inspections.

It is important to concentrate on defect detection and not defect resolution because the right people and information may not be in the room to devise the best resolution. Designing and implementing the resolution is left to the author to do after the meeting. Similarly, discussions in the inspection meeting about improving processes, checklists, etc., should be stopped, noted and picked up in a follow-up session.

To be effective, inspections should be done in a way that is not threatening. Few developers like to be told that something they have sweated over for hours is wrong or could have been done better. Setting the inspection culture is key:

  • Everyone needs to view inspections firstly as a great debugging tool and secondly as a really good opportunity to learn from each other.

  • Developers also need to understand that inspections are not a personal performance review. If a developer feels their performance is under scrutiny, they are far more likely to take an aggressively defensive position. For this reason project and line managers should not be invited to inspection meetings. Project and line managers should only be involved in peer inspections of management products.

  • Before the first inspection meeting, agree as a team on some basic guidelines for behaviour and communication, if they do not exist already (see CoadLetter #40 for more on this).

Finally, authors in a code inspection should not turn up with a different version of the code from that which was distributed for the inspection. Reviewers will have made notes on their copies, and a good deal of time will be wasted trying to ensure everyone is on the same page at the same time.

The Result

At the end of an inspection meeting, the inspection team need to decide on an overall assessment. There are four possibilities:

  1. The inspected items are accepted as they are without any need for change. While certainly possible, in practice, it is improbable. An inspection normally finds something that can be improved.

  2. The inspected items are accepted with minor changes. Either the authors are simply trusted to make these improvements, or they make the changes and the inspection leader verifies that the changes have been made. Another formal inspection is not needed in this case. In my experience, once a team has got the hang of doing inspections, this becomes the most likely outcome. However, inspection leaders should beware of it becoming the de facto outcome.

  3. The inspected items need major changes. In this case, the authors need to go away and rework significant parts of the material. Once that rework has been completed, the amount of corrective work requires another formal inspection. This is the second most likely outcome but is not nearly as popular as 'accepted with minor changes' because it means repeating the inspection.

  4. The inspected items need completely reworking. The quality of the work is so poor, or the assumptions the work is based on are so badly wrong, that the work must be totally redone. This is a rare case, but not as rare as 'accepted as is', especially when a team has just started doing inspections. In most cases, the entry checklist guards against inspecting work of such poor quality that it needs this outcome. Bad assumptions about requirements or about the way a particular piece of technology works are more likely to be the cause of this result. Obviously, once redone, the work needs to be inspected again.

The overall result is recorded on a cover sheet for the inspection together with the usual sort of information captured for normal meeting minutes including items such as:

  • What the meeting was for. In other words, what was inspected.

  • Who attended.

  • Location. This is useful as an aid to memory if rooms vary widely. Remembering what room an inspection was in can trigger more memories of what was discussed at the time. If the inspection meetings always take place in the same room, or always in one of two or three specific rooms, this benefit is lost and the location can be omitted.

  • The date of the inspection meeting.

  • Start and end time. These are important for measuring the effectiveness of inspections.

  • Signatures of those who attended. These are only really needed if required by management or for legal reasons. Initially, asking each person to sign does emphasize that the team is taking inspections seriously, both to management and to the individual members of the development team taking part. Once established, most developers can see for themselves that inspections are worthwhile, and statistics derived from the metrics collected are better at communicating their effectiveness to managers.


The authors of material inspected are responsible for investigating and addressing all the issues and questions logged in the inspection meeting, working with others to resolve major issues as necessary. The authors may request the inspection leader to convene follow-up meetings to brainstorm some of the suggestions and more complex issues raised. Not all the issues logged will turn out to be defects. The inspection meeting does not always have the required knowledge available in the room to confirm if an issue is definitely a defect or not. Sometimes, on investigation, a logged item turns out to be a non-issue.

Once the authors have addressed all the issues and questions logged, the inspection leader either arranges another inspection or confirms that all logged items have been addressed adequately, depending on the overall result of the inspection meeting.

In addition, the inspection leader ensures that:

  • the metrics collected during the inspection are recorded with those from previous inspections, and summary statistics are updated

  • suggestions for process improvements are followed up, arranging brainstorming meetings if necessary to work through the suggestions.

Inspections in Feature-Driven Development

Feature-Driven Development (FDD) is designed with inspections at its core, having adapted them to fit seamlessly within its processes. The FDD process descriptions, however, assume familiarity with the fundamentals of conducting software inspections; they are not intended as instructional guides or tutorials, but more as memory aids. For example, certain sections of Tom Gilb's book were required reading for Chief Programmers on the first FDD project. Having read and understood those, the team needed only a little guidance on when and how to fit inspections into the overall process to establish the practice.

In FDD, small groups of features are taken through a set of milestones, including a design inspection milestone and a code inspection milestone, by a small team of developers called a feature team, led by a Chief Programmer.

The Inspection Leader in FDD

The inspection leader role is performed by the Chief Programmer.

Inspection Entry Checklists in FDD

FDD does not insist on a formal entry checklist for inspections. Ultimately, the Chief Programmer is responsible for deciding when the features being worked on are ready for design or code inspection.

The checklist might consist of no more than half a dozen items, scribbled on a large piece of paper and pinned up where all team members can see it. Alternatively, the same list might be presented as a simple web page on the project's intranet site. Figure 5 shows an example of items that might comprise an entry checklist for a code inspection.

Figure 5: Informal Code Inspection Entry Checklist Example

Chunking in FDD

One of the big differences with agile development is that development is done iteratively, in small slices. As a result, the items to be inspected in agile processes are rarely big enough to need splitting into chunks. In my experience, few design or code inspections in an FDD project require more than an hour of checking, or more than an hour-long inspection meeting. Nevertheless, on occasion, complex items may still need to be inspected in two or three manageable pieces instead of one big one.

Inspection Reviewers and Roles in FDD

FDD generally makes working out who to invite easy because it requires all of a feature team's members to inspect each other's designs and code.

Other members of the wider development team are invited at the Chief Programmer's discretion for particularly significant or complex features. In FDD, if a design is going to impact the work of other feature teams, the Chief Programmer is expected to widen the design inspection to include other Chief Programmers. Scenarios where this might be needed include:

  • The feature team is advocating a new standard idiom for doing things.

  • The feature team is advocating the refactoring of previously completed features to simplify the design of their features.

  • The feature team is advocating the addition of a significant new class or classes to the object model.

  • The feature team is advocating a significant change to the object model.

  • The features are of sufficient complexity that the Chief Programmer wants some more experienced eyes to check the design before proceeding.

The bottom line is that a significant change requires a significant design inspection.

In my experience, for a team writing object-oriented code and associated JUnit-style unit tests, it is useful to ask each reviewer in a code inspection to concentrate on:

  • a subset of the operations involved, following the flow of calls through to check that the functionality is correct
  • and/or a subset of the classes or interfaces involved, looking for inconsistencies across each class or interface
  • and/or the code for a subset of the unit tests, if the team is new to writing effective unit tests and so needs to inspect test code
  • and so on ...

It is important to spread similar or related pieces across reviewers. This way, one reviewer's comments during the inspection meeting can trigger another reviewer to check for the same issue in the pieces they were assigned. For design reviews, this requires some sort of printed version of the design for the feature/user story/use case/backlog item being worked on, even if it is no more than a photo of a diagram sketched on a whiteboard.
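As a sketch of how a Chief Programmer might spread related pieces across reviewers, the following round-robin scheme ensures no single reviewer holds all of the closely related material (the reviewer and class names here are purely hypothetical):

```python
from itertools import cycle

def assign_pieces(reviewers, pieces):
    """Round-robin related pieces across reviewers so that no single
    reviewer is assigned all of a group of closely related items."""
    assignments = {r: [] for r in reviewers}
    # cycle() repeats the reviewer list; adjacent (related) pieces
    # therefore land with different reviewers
    for reviewer, piece in zip(cycle(reviewers), pieces):
        assignments[reviewer].append(piece)
    return assignments

# Hypothetical example: operations and tests from two related classes
reviewers = ["Ann", "Bob", "Cas"]
pieces = ["Order.create", "Order.cancel", "Invoice.create",
          "Invoice.cancel", "OrderTest", "InvoiceTest"]

assignments = assign_pieces(reviewers, pieces)
```

With this assignment, `Order.create` and `Order.cancel` go to different reviewers, so a comment on one during the meeting can prompt the other reviewer to re-check the sibling piece.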

In all other aspects, the preparation for an inspection in FDD is not that different from a traditional inspection.

Inspection Checking in FDD

In general, members of the feature team inspect each other's work. Because the feature team consists of the owners of the classes needed to implement the features in each iteration, the team usually has all the expertise it needs to perform an effective inspection.

Design Inspections Checking in FDD

The team checks the sequence diagrams, class diagram updates and other design notes agreed during the team's collaborative design session and domain walkthrough for the features involved in this iteration. These are generally written up in some electronic form and posted on the project's intranet so the design inspection ensures that this has been done without error.

It is tempting to devise a standard template for publishing designs. There is nothing intrinsically wrong with having a standard template and it can often speed up the process. The problem comes when the team spend more time during a design inspection pointing out non-compliance in things like the fonts, paragraph and heading styles used, trivial grammatical correctness, and the lack of introductory content than examining the design being described. Yes, a design document needs to communicate design adequately, but it is far more important to get the design right. Correctly formatting a design document to effectively communicate a poor or broken design is a waste of time. This is probably the biggest difference from traditional inspections, where the rules for constructing and formatting the document are considered important.

There are times when reviewing the actual document is required: when the document standards or templates are set by regulators or by a formal contract. However, that is a document inspection, not a design inspection. Beware of confusing the two. A design inspection is far more important than a document inspection, even when the document inspection is truly needed.

Checking Code Inspections in FDD

In my experience, the easiest way to check source code is to print it out, single-sided and landscape-oriented, with two pages of source to one printed page. Marking up issues, questions and suggestions electronically is slower than scribbling them onto printouts. As tablet and pen- or stylus-input devices improve, this might change.

Reviewer Checklists in FDD

Maintaining informal checklists of items to look for when checking designs or code is as useful in FDD as it is in traditional inspection scenarios, and noting the time spent checking is still useful.

Inspection Meetings in FDD

In FDD, the Chief Programmer (CP) generally fulfills the facilitation role for the inspection meeting, but may defer to another CP if they feel they cannot play that role objectively in a particular inspection, one where, perhaps, the design revolves around a favourite design pattern or pet idea of the CP's. The fact that the whole team is on the hot seat together, inspecting each other's contributions to the design and coding of the features the team needs to complete, removes most, if not all, of the intensity, anxiety or trepidation that some developers feel when their work is inspected by peers in traditional inspection settings. Features are delivered by the team, not by individual members of the team. This helps team members see inspections as a mechanism for detecting defects and not as some sort of personal assessment.

In most other aspects, inspection meetings in FDD are similar to their traditional counterparts.

Updating in FDD

In FDD, the feature team are the authors and so work together to investigate and correct the issues found as necessary. The collection, recording and summarizing of metrics from inspections are left for each individual project to define as suits its working environment.


The small-team, Chief-Programmer-oriented structure of FDD complements formal inspections beautifully. In fact, the mix of feature teams and inspections adds a new dimension to traditional inspections. While formal case studies do not exist, personal experience suggests that FDD's feature teams and short, structured iterations make inspections easier to establish and perform effectively within FDD than in traditional environments.

With Chief Programmers controlling the level of formality of each inspection as needed, the wasted overhead of unnecessarily formal inspections for straightforward designs and code is largely eliminated. Of course, this relies on the availability of experienced and skilled developers in the Chief Programmer role, but so does every other aspect of an FDD project. In contrast, traditional inspections rely on the availability of trained, expert inspection leaders, a more specialist role than an FDD Chief Programmer.

Inspections and Pair-Programming

Pair-programming is defined as two developers working together at a computer. All code on the project is written in this way. The idea is that all design and code is checked as it is written, with many errors and defects detected and removed as soon as they are introduced. Comparing the costs and benefits of pair-programming and inspections, we have:

Defect Detection and Removal

It is very hard to formally measure the effectiveness of pair-programming in detecting and removing defects because it is impossible to determine which of the defects would have been seen by a developer working alone and which would not. The only measure of the effectiveness of pair-programming in defect removal is a count of how many defects are found after the pair have completed the work. No mechanism exists in eXtreme Programming for collecting such a count. Collective ownership ensures that any defects found in subsequent work are fixed as they are found, but no measure is taken of the number of defects found or fixed in this way.

In an agile environment, bad assumptions may be caught by pair-programming, but not if both authors are labouring under the same misconception (or one convinces the other that the assumption is good). In this case, it is unlikely that unit testing will catch the problem because the authors are likely to build the assumption into the logic of their unit tests. Similarly, a demonstration of the iteration's results to the customer may not reveal the underlying bad assumption because the demo may not cover a scenario where it is visible. Only inspections and pair-programming look at the source code; unit tests and demonstrations do not.

One of the nice things about inspections is they provide a pause, time to think, and the opportunity for fresh eyes to look at the design or code. Pair-programming does not. While inspections take a step back and consider the design or source code as a whole, pair-programming concentrates more on what is being worked on right at that moment. In addition, inspections provide a welcome hour or so away from the computer screen assuming the common practice of printing designs and source code for inspection. This respite can help reduce the errors made and time spent sorting out problems caused by developer fatigue from too long spent at the keyboard.

Knowledge Transfer

Pair-programming transfers knowledge on a one-to-one basis. An inspection broadcasts knowledge to a larger audience of reviewers. In FDD, this audience is at least the full feature team, if not wider. The transfer of knowledge in inspections is not only wider than that of pair-programming but also qualified and verified, because a Chief Programmer is present to ensure that the techniques learned are good. A pair of developers can teach each other bad habits just as easily as good habits.

Process Improvement

Pair-programming does nothing formal to improve process. Informally, discussion between pairs of developers as they work together will inevitably result in process improvement suggestions. Inspections provide a slightly more structured means of raising and evaluating such suggestions, including changes to inspection entry and reviewer checklist contents. Most agile processes provide channels for process improvement suggestions to be raised and discussed. On the whole, frequent process improvement is emphasised and practised more within agile approaches than traditional approaches to software development, regardless of whether inspections or pair-programming are used.

Requirement Additions and Updates

Inspections feed back into requirements, whether you define them traditionally, as use cases, user stories, product backlog items, or features. Pair-programming is likely to raise some questions about requirements, but less formally than inspections do. It seems intuitive that a pair of developers is less likely to find holes in requirements than a feature team or inspection team of three to six people working with the same material. However, as far as I know, this has not been formally studied in any depth, and it is not obvious that the benefit of the extra requirements changes and additions identified outweighs the cost of the extra people examining the material.

Quality Metrics

The dynamic and continuous nature of pair-programming makes it very hard to collect any sort of metric without setting up an artificial experiment for exactly that purpose. Basic quality metrics are relatively easy to collect during inspections.
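As an illustration of how little machinery such basic inspection metrics need (the figures and field names here are invented for the example), an inspection log can be reduced to a few summary numbers:

```python
def inspection_metrics(checking_hours, meeting_hours, reviewers,
                       defects_found, size_loc):
    """Summarise one inspection: total effort, defects found per
    review-hour, and defect density per KLOC of material inspected."""
    # total checking time plus the meeting time multiplied by attendees
    total_hours = checking_hours + meeting_hours * reviewers
    return {
        "total_hours": total_hours,
        "defects_per_hour": defects_found / total_hours,
        "defects_per_kloc": defects_found * 1000 / size_loc,
    }

# Hypothetical code inspection: three reviewers spending an hour
# checking each (3.0 hours total), a one-hour meeting, 12 defects
# logged in 600 lines of code.
m = inspection_metrics(checking_hours=3.0, meeting_hours=1.0,
                       reviewers=3, defects_found=12, size_loc=600)
```

Tracked over successive inspections, numbers like these let a team see whether checklist changes or chunk-size changes are actually paying off.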


The cost of doing inspections is easy to measure. The time spent by the inspection leader, each reviewer's individual checking and the inspection meeting are easy to record. For a feature team, this is certainly a significant amount of time. In pair-programming, it is even easier to measure because all coding is done in pairs. Everything has the overhead of the second developer in the pair. Studies to determine the difference in cost between continuous pair-programming and well-run formal inspections do not, to my knowledge, exist. 

Even if such studies existed, they would also need to account for occasional 'pair-programming' within FDD feature teams. Members of feature teams in FDD are free to pair up during coding when this is desirable. It is not unusual to see two members of a feature team working together where care is needed or where one is struggling with a particular problem. One of the great things about feature teams is that a feature is complete only when the team is finished, not when any one individual is finished; it is in the team members' own interests to help each other.

Inspections in Scrum, Lean and Kanban?

What would formal inspections look like on a Scrum, Lean or Kanban project? Development teams using these approaches are frequently very small, with two to six developers, occasionally as many as eight or nine. This means that inspections would often need to involve at least half the team, and for the smallest teams each inspection would involve the whole team.

The inspection leader role could be rotated among the team or, if the team has a coach or scrum master with enough time, they could take on the facilitating aspects of the role instead. It is definitely worth asking for coaching from someone experienced in holding inspections if the team has not done them before.

For Scrum, Lean and Kanban, I see no reason why inspection entry checklists would not be handled in much the same way as they are in FDD projects. Chunking is similar too.

Who to invite to an inspection is easily solved for smaller teams because the limited size of the team means usually everyone plays a role in each inspection. For a slightly larger team, one or two developers could be left out of a particular inspection if they are short of time on some other task. One would have to beware, however, of letting the same people off each time.

The rest of the mechanics of operating inspections on these projects are likely to be very similar to those used in FDD. The only other question is when to perform inspections within these processes. Scrum teams will have to decide for themselves exactly when inspections are done. They should, of course, be completed within the current sprint. Work should not be considered done unless it has been inspected or the inspection has been waived by consensus within the team.

Kanban adds the dimension of workstations on the kanban board, where entry to a workstation from the previous one is controlled by the amount of work in progress in that workstation. Inspections can be attached to particular workstations so that not only entry to a workstation but also exit from it is controlled, with exit requiring the appropriate type of inspection to have been passed. Alternatively, as in FDD, a workstation could represent the performing of a particular type of inspection: requirements, design or code, etc.
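A minimal sketch of a workstation with both controls, a WIP limit on entry and an inspection gate on exit, might look like this (the workstation name, limit and item identifiers are assumptions for illustration, not part of any particular Kanban tool):

```python
class Workstation:
    """A kanban workstation with a WIP limit on entry and an
    inspection gate on exit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = {}            # item -> has its inspection passed?

    def enter(self, item):
        if len(self.items) >= self.wip_limit:
            return False           # entry blocked: workstation is full
        self.items[item] = False   # not yet inspected
        return True

    def record_inspection(self, item, passed):
        self.items[item] = passed

    def exit(self, item):
        if not self.items.get(item):
            return False           # exit blocked until inspection passed
        del self.items[item]
        return True

# A hypothetical 'Coding' workstation limited to two items in progress
coding = Workstation("Coding", wip_limit=2)
coding.enter("feature-42")
```

Here `feature-42` cannot leave the workstation until `record_inspection` marks its code inspection as passed, which is exactly the exit control described above.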


While eXtreme Programming teams might eschew formal inspections for the more dynamic and informal practice of pair-programming, there appears to be no solid reason for not employing formal inspections within agile processes, assuming they can be run well. Agile teams that reject their use and do not employ any other form of internal quality check on designs and code may find they have missed a trick.

Various studies have shown that in addition to being more effective at catching errors than testing, reviews find different kinds of errors than testing does ... Thus, even when testing is done effectively, reviews are needed for a comprehensive quality program.
Steve McConnell, Code Complete: A Practical Handbook of Software Construction, 2nd Edition, 2004

