Like all good software development processes, Feature Driven
Development is built around a core set of ‘best
practices’. The chosen practices are not new, but this
particular blend of the ingredients is new. Each practice
complements and reinforces the others. The result is a whole
greater than the sum of its parts; no single practice carries the process on its own.
A team could choose to implement just
one or two of the practices and might derive significant benefit from
doing so. However, they would miss out on the full benefit
that comes from using the whole FDD process. For example, code
inspections, walkthroughs and peer reviews have been recommended and
practised for decades. A
mountain of evidence exists to show how effective they can be.
However, in my personal experience they work much better with feature
teams than in more traditional team organisations.
Among the best practices that make up FDD are:
- Domain Object Modelling
- Developing by Feature
- Individual Class (Code) Ownership
- Feature Teams
- Regular Builds
- Configuration Management
- Reporting/Visibility of Results
Domain object modelling consists of building class diagrams depicting the significant types of object within a problem domain and the relationships between them. Class diagrams are structural in nature and look a little like the more traditional entity-relationship diagrams of the relational database world. Two big differences are the inclusion of inheritance or generalisation/specialisation relationships and operations that specify how the objects behave. To support this behavioural view, it is usual to complement the class diagrams with a set of high-level sequence diagrams depicting explicitly how objects interact with each other to fulfil their responsibilities. The emphasis is on what questions objects of a particular class can answer and what calculations or services they can perform; there is less emphasis placed on determining exactly what attributes objects of a particular class might manage.
As analysts and developers learn of requirements from domain experts, they start forming mental images of the desired system. Unless they are very careful they make assumptions about this imaginary design. These hidden assumptions can cause inconsistencies between different peoples’ work, ambiguities in requirements documentation and the omission of important details. Developing an overall domain object model forces those assumptions out into the open, misunderstandings are resolved, holes in understanding are filled and a much more complete, common understanding of the problem domain is formed.
In Extreme Programming Explained, Kent Beck offers the analogy that software construction is like driving a car [Beck 00]. Driving requires continual small course adjustments using the wheel; you cannot simply point a car in the right direction and press the accelerator. Software construction, Kent says, is similar. Extending that analogy a bit further, a domain object model is like the road map that guides the journey; with it, you can reach your destination relatively quickly and easily without too many detours or a lot of backtracking; without it you can very quickly end up lost or driving around in circles, continually reworking and refactoring the same pieces of code.
The domain object model provides an overall framework to which to add function, feature by feature. It helps maintain the conceptual integrity of the system. Using it to guide them, feature teams produce better initial designs for each group of features. This reduces the amount of times a team has to refactor their classes to add a new feature.
Domain object modelling is a form of object decomposition. The problem is broken down into the significant objects involved. The design and implementation of each object or class identified in the model is a smaller problem to solve. When the completed classes are combined, they form the solution to the larger problem.
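To make the decomposition idea concrete, here is a small illustrative sketch (the classes and names are invented for this example, not taken from any particular FDD project). The larger problem of authorising a purchase is solved by combining two smaller, class-level solutions:

```python
# Sketch of object decomposition: each class solves one small problem;
# combined, they solve the larger problem of authorising a purchase.

class Account:
    """Knows one thing: its current balance."""
    def __init__(self, balance):
        self.balance = balance

    def retrieve_balance(self):
        return self.balance

class CardTransaction:
    """Solves the larger problem by delegating to an Account object."""
    def __init__(self, account, amount):
        self.account = account
        self.amount = amount

    def authorise(self):
        # Authorisation here is simply a funds check, for illustration.
        return self.account.retrieve_balance() >= self.amount

acct = Account(100.0)
print(CardTransaction(acct, 30.0).authorise())   # True
print(CardTransaction(acct, 250.0).authorise())  # False
```

Each class remains a small, independently understandable problem; the solution to the whole emerges from their collaboration.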
Modelling in colour (‘modeling in color’) uses four colour-coded class archetypes that interact in generally predictable ways. The use of colour adds a layer of ‘visually detectable’ information to the model. Using this technique, a team or individual can very rapidly build a resilient, flexible, and extensible object model for a problem domain that communicates clearly and concisely. FDD does not mandate the use of ‘modeling in color’ and modelling in colour does not require the use of FDD. They simply complement each other exceptionally well.
The Domain Object Model provides a solid framework that can be
built within when changes in the business environment require the
system to change. It allows designers to add new features and
capabilities to the system correctly; it greatly enhances the
internal quality and robustness of the system.
Once we have identified the classes in our domain object model, we can design and implement each one in turn. Then once we have completed a set of classes, we integrate them and ‘hey presto!’ we have part of our system. Easy! … Well, it’s a nice dream!
Non-trivial projects run in this way have found that they end up delivering a system that does not do what the client requires. Also, classes in these systems are often overcomplicated, containing methods and attributes that are never used while missing methods and attributes that are needed. We can produce the most elegant domain object model possible, but if it does not help us provide the system’s clients with the functionality for which they have asked then we have failed. It would be like building a fantastic office skyscraper but either leaving each floor unfurnished, uncarpeted, and without staff, or furnishing it with ornamental but impractical furniture and untrained staff.
A key element in any project is some statement of purpose, problem statement, or list of goals or very high-level requirements describing what the system needs to do. Without this there is no reason for the project to exist. This is the functionality that the system must provide for the project to be considered a success.
Every popular method or process contains some form of functional decomposition activity that breaks down this high level statement into more manageable problems. Functional specification documents, use case models and use case descriptions, user stories and features all represent functional requirements and each representation has its own advantages and disadvantages.
Traditionally, we have taken the statement of purpose and broken it down into a number of smaller problems and defined a set of subsystems (or modules) to solve those smaller problems. Then for each subsystem we have broken its problem into a hierarchical list of functional requirements. When we have requirements granular enough that we know how to design and implement each of them, then we can stop decomposing the problem. We then start designing and implementing each of our functional requirements. The project is driven and tracked by function; sets of functional requirements are given to developers to implement and their progress measured.
A major problem is that the functional requirements tend to mix user interface and data storage and network communication functions with business functions. The result is that developers often spend large amounts of time working on the technical features at the expense of the business features. A project that delivers a system with the greatest persistence mechanism but no business features is a failure.
A good solution to this problem is to restrict our lists of functional requirements to those of value to a user or client and ensure the requirements are phrased in language that the user or client can understand. We call these client-valued functions, features. Once the features for a system have been identified, they are used to drive and track development in FDD. Delivering a piece of infrastructure may be important even critical to the project but it is of no significance to the client because it has no intrinsic business value. Showing progress in terms of features completed is something that the client can understand and assign value to. They can also prioritise features in terms of significance to the business.
Interestingly, Extreme Programming records functional requirements as user stories on index cards. In Extreme Programming Explained, a user story was described as 'a name and a short paragraph describing the purpose of the story' [Beck 00]. A year later, in Planning Extreme Programming, a user story is 'nothing more than an agreement that the customer and developers will talk together about a feature' and a user story is 'a chunk of functionality that is of value to the customer’ [Beck 01].
The term ‘feature’ in FDD is very specific. A feature is a small, client-valued function expressed in the form:
<action> <result> <object>
with the appropriate prepositions between the action, result and object.
Features are small. They are small enough to be implemented within two weeks. Two weeks is the upper limit. Most features are small enough to be implemented in a few hours or days. However, features are more than just accessor methods that simply return or set the value of an attribute. Any function that is too complex to be implemented within two weeks is further decomposed into smaller functions until each sub-problem is small enough to be called a feature. Specifying the level of granularity helps avoid one of the problems frequently associated with use cases. Keeping features small also means clients see measurable progress on a frequent basis. This improves their confidence in the project and enables them to give valuable feedback early.
Features are client-valued. In a business system a feature maps to a step in some activity within a business process. In other systems a feature equates to some step in or option within a task being performed by a user.
Examples of features are:
- Calculate the total of a sale
- Assess the performance of a salesman
- Validate the password of a user
- Retrieve the balance of a bank account
- Authorise a credit card transaction of a card holder
- Perform a scheduled service on a car
Features are expressed in the form <action> <result> <object>. The explicit template provides some strong clues to the operations required in the system and the classes to which they should be applied. For example,
- ‘Calculate the total of a sale’ suggests a calculateTotal() operation in a Sale class
- ‘Assess the performance of a salesman’ suggests an assessPerformance() operation in a Salesman class
- ‘Determine the validity of the password of a user’ suggests a determinePasswordValidity() operation on a User class that can then be simplified into a validatePassword() operation on the User class.
The use of a natural language such as English means that the
technique is far from foolproof. However, after a little practice,
it becomes a powerful source of clues to use in discovering or
verifying operations and classes.
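The mapping from feature names to operations can be sketched in code. The fragment below is a hypothetical illustration (the class bodies and data are invented), implementing two of the operations suggested above:

```python
# Sketch: feature names of the form <action> <result> <object> suggest
# operations and the classes that should own them.

class Sale:
    def __init__(self, line_totals):
        self.line_totals = line_totals

    # 'Calculate the total of a sale' -> a calculateTotal() operation on Sale
    def calculate_total(self):
        return sum(self.line_totals)

class User:
    def __init__(self, password):
        self._password = password

    # 'Validate the password of a user' -> a validatePassword() operation on User
    def validate_password(self, candidate):
        return candidate == self._password

sale = Sale([5.00, 12.50, 2.50])
print(sale.calculate_total())            # 20.0
user = User("s3cret")
print(user.validate_password("s3cret"))  # True
```

The action becomes the operation name, and the object names the class that owns it; this is the clue-hunting the text describes, made mechanical.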
Class (code) ownership in a development process denotes who
(person or role) is ultimately responsible for the contents of a
class (piece of code).
There are two general schools of thought on the subject of code ownership. One view is that of individual ownership where distinct pieces or groupings of code are assigned a single owner. Every currently popular OO programming language uses the concept of a class to provide encapsulation; each class defines a single concept or type of entity. It therefore makes sense to make classes the smallest elements of code to which owners are assigned; code ownership becomes class ownership. This is the practice used within FDD; developers are assigned ownership of a set of classes from the domain object model.
Note: We assume that readers are using a popular object-oriented programming language like Java, C++, Smalltalk, Eiffel, C#, etc. We therefore assume that classes are the programming-language mechanism providing encapsulation (and also polymorphism and inheritance). Where this is not the case, readers should translate ‘class’ to whatever fundamental element provides information hiding, abstract typing or data encapsulation in their programming language.
The advantages of individual class ownership are many but include:
- An individual is assigned the responsibility for the conceptual integrity of that piece of code. As enhancements and new methods are added to the class, the owner will ensure that the purpose of the class is maintained and that the modifications fit properly.
- There is an expert available to explain how a particular piece of code works. This is especially important for complex or business critical classes.
- The code owner can implement an enhancement faster than another developer of similar ability who is unfamiliar with that piece of code.
- The code owner has something of his own that he can take pride in doing well.
The first classic problem with class ownership occurs when
developer A wants to make some changes to his or her classes but
those changes are dependent upon other changes being made in the
classes owned by developer B. Developer A could be required to wait
a significant amount of time if developer B is busy. Too many of
these situations would obviously slow down the pace of the project.
The second potential problem with individual class ownership that is often raised is that of risk of loss of knowledge about a class. If the owner of a set of classes should happen to leave the project suddenly for some reason it could take considerable time for the team to understand how that developer’s classes work. If the classes are significant, it could put the project schedule under pressure.
At the opposite end of the code ownership spectrum is the view promoted by Extreme Programming proponents, among others. In this world, all the developers in the team are responsible for all of the code. In other words, the team has collective ownership of the source code.
Collective ownership solves the problem of having to wait for someone else to modify their code and can ease the risk of someone leaving because, at least in a small system, more than one person has worked on the code.
The main issue with collective ownership, however, is that in practice it can quickly degenerate into non-ownership or an ownership dictated by few dominant individuals on the team. Either nobody ends up being responsible for anything in the system or the dominant few try to do all the work because, in their opinion, they are the only competent members of the team. If nobody takes responsibility for ensuring the quality of a piece of code, it is highly unlikely that the resulting code will be of high quality. If a few dominant developers try to do everything, they may start off well but will soon find themselves overloaded and suffering from burn out. Obviously teams that encounter these problems struggle to continue to deliver frequent, tangible, working results.
Building a domain object model identifies the key classes in the
problem domain. The class ownership practice assigns those classes
to specific developers. We also know we want to build feature by
feature. So how do we best organise our class owners to build the features?
We assigned classes to owners to ensure there was a single person responsible for the development of each class. We need to do the same for features; we need to assign each feature to an owner, somebody who is going to be responsible for ensuring that the feature is developed properly. The implementation of a feature is likely to involve more than one class and therefore more than one class owner. The feature owner is therefore going to need to coordinate the efforts of multiple developers; a team lead’s job. So we pick some of our better developers, make them team leaders, and assign sets of features to each of them (we can think of a team leader as having an ‘inbox’ of features that he or she is responsible for delivering).
Now that we have class owners and team leaders, let’s form the development teams around these team leaders. Ah! We have a problem! How can we guarantee that all the class owners needed to code a particular feature will be in the same team? This is not an easy problem to solve.
We have four options:
2. We can allow teams to ask members of other teams to make changes to the code they own. However, now we are likely to be waiting for another developer in another team to make a change before we can complete our task. This is exactly the situation that led Extreme Programming to promote collective ownership.
3. We can drop class ownership and go with collective ownership and everything else that it requires to make it work. There is already a book in this series covering this option and anyway we know collective ownership does not scale easily.
4. We can change the team memberships whenever this situation occurs so that a team leader always has the class owners he or she needs to build a feature. This is the only realistic option that will allow us to both develop by features and have class owners.
Actually there is nothing that requires us to stick to a statically defined team structure. We can change to a more dynamic model. If we allow team leaders to form a new team for each feature they start to develop, they can pick the class owners they need for that feature. Once the feature is fully developed the team is disbanded and the team leader picks the class owners needed to form the team for the next feature. This can be repeated indefinitely until all the features required are developed.
This is a form of dynamic matrix management. Team leaders, who own features, pick developers based on their expertise (in this case, class ownership) to work in the feature teams developing the features that involve their classes.
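The dynamic team formation described above can be sketched as a simple data model. The names, ownership table, and helper below are invented purely for illustration; real FDD teams track this on planning charts, not necessarily in code:

```python
# Sketch of dynamic feature-team formation: a chief programmer forms a
# team from the owners of exactly the classes a feature touches.

class_owners = {          # class name -> developer (individual class ownership)
    "Sale": "ann",
    "LineItem": "bob",
    "User": "carol",
}

def form_feature_team(chief_programmer, classes_touched):
    """Pick the owners of the classes this feature needs to change."""
    members = {class_owners[cls] for cls in classes_touched}
    return {"lead": chief_programmer, "members": members}

# The team exists only for the lifetime of the feature, then disbands
# and the chief programmer forms a new team for the next feature.
team = form_feature_team("dave", ["Sale", "LineItem"])
print(sorted(team["members"]))  # ['ann', 'bob']
```

By construction, the team owns every class the feature needs to change, so no one waits on a developer outside the team.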
Every member of a feature team is responsible for playing their part in the success of the team. However, feature team leaders, as all good coaches know, are ultimately responsible for producing results. They own the features and they are accountable for their successful delivery. Playing this team leader role well normally requires both ability and experience, so we call our feature team leaders Chief Programmers in recognition of this and of Harlan Mills’ work [Brooks].
Some things to note about feature teams:
2. By definition a feature team comprises all the class owners who need to modify or enhance one of their classes as part of the development of a particular feature. In other words, the feature team owns all the code it needs to change for that feature. There is no waiting for members of other teams to change code. So we have code ownership and a sense of collective ownership too.
3. Each member of a feature team contributes to the design and implementation of a feature under the guidance of a skilled, experienced developer. Applying multiple minds to evaluate multiple options and select the design that fits best reduces the risk of reliance on key developers or owners of specific classes.
4. From time to time, a class owner may find themselves a member of multiple feature teams at the same time. This is not the norm but it is not a problem either. While waiting for others in one feature team, a class owner can be working on tasks for another feature team. Most developers can handle belonging to two or even three feature teams concurrently for a short period of time. More than that leads to problems switching context from one team to another. Chief programmers work together to resolve any problematic conflicts and to avoid overloading any particular developer.
5. Chief programmers are also class owners and take part in feature teams led by other chief programmers. This helps chief programmers work with each other, and keeps them close to the code (something most chief programmers like).
FDD relies heavily on inspections to ensure the high quality of
designs and code. Many of us have sat through hours of boring,
backbiting, finger-pointing sessions that were called code reviews,
design reviews, or peer reviews, and shudder at the thought of
another process that demands inspections. We have all heard
comments like these: “Technical inspections, reviews,
walkthroughs are a waste of time. They take too long, are of little
real benefit and result in too many arguments.” or
“I know my job! Why should I let others tell me how to
design and write my code?”
However, when done well, inspections are very useful in improving the quality of design and code. Inspections have been recommended since the 1970s and the evidence weighs heavily in their favor.
Fagan M. E. (1976) Design and Code Inspections to Reduce Errors in Program Development. IBM Systems Journal, 15(3), 182-211
“In a group of 11 programs developed by the same group of people, the first 5 were developed without inspections. The remaining 6 were developed with inspections. After all the programs were released to production, the first 5 had an average of 4.5 errors per 100 lines of code. The 6 that had been inspected had an average of only 0.82 errors per 100 lines of code. Inspections cut the errors by over 80%.”
Freedman D. P., and Weinberg G. M. (1982) Software Inspections: An Effective Verification Process. IEEE Software, May 31-36
“In a software-maintenance organization, 55% of one-line maintenance changes were in error before code inspections were introduced. After inspections were introduced, only 2% of the changes were in error.”
Freedman D. P., and Weinberg G. M. (1982) Software Inspections: An Effective Verification Process. IEEE Software, May 31-36
“IBM’s 500,000 line Orbit project used 11 levels of inspections. It was delivered early and had only about 1% of the errors that would normally be expected.”
Gilb T. (1988) Principles of Software Engineering Management, pp. 205-226 and pp. 403-442 Wokingham: Addison Wesley
“The average defect detection rate is only 24% for unit testing, 35% for function testing, and 45% for integration testing. In contrast, the average effectiveness of design and code inspections is 55% and 60% respectively.”
Jones C. L. (1985) A Process-Integrated Approach to Defect Prevention. IBM Systems Journal, 24(2), 150-167
“One client found that each downstream software error cost on average 5 hours. Others have found 9 hours (Thorn EMI, Reeve), 20 to 82 hours (IBM, Remus), and 30 hours (Shell) to fix downstream. This is compared to the cost of only one hour to find and fix using inspection.”
Gilb T., Graham D. (1993) Software Inspection. Addison Wesley
While the undisputed primary purpose of inspections is the detection of defects, when done well, inspections provide two very helpful secondary benefits:
1. Dissemination of culture and experience
Inspections are a means to disseminate development culture and experience. By examining the code of experienced, knowledgeable developers and having them walk through their code explaining the techniques they use, less experienced developers rapidly learn better coding practices.
2. Standards conformance
Once a developer knows that his or her code will not pass code inspection unless it conforms to the agreed design and coding standards, they are much more likely to conform.
“Even though coding standards can be written (presumably by experienced developers) and distributed, they will not be followed (or maybe not even read) without the sort of encouragement provided by inspections.” [McConnell]
Inspections have to be done in a way that removes the fear of
embarrassment or humiliation from the developer whose work is being
inspected. Few developers like to be told that something they have
sweated over for hours is wrong or could have been done better.
Setting the inspection culture is key. Everyone needs to see them
primarily as a great debugging tool and secondly as a great
opportunity to learn from each other. Developers also need to
understand that inspections are not a personal performance review.
Inspections complement the small team and chief programmer oriented structure of FDD beautifully. The mix of Feature Teams and Inspections adds a new dimension. A whole feature team is on the hot seat not just one individual. This removes much of the intensity and fear from the situation. The Chief Programmer controls the level of formality of each inspection depending on the complexity and impact of the features being developed. Where design and code has no impact outside the feature team, an inspection will usually only involve the feature team inspecting each other’s work. Where there is significant impact the Chief Programmer pulls in other Chief Programmers and developers to both verify the design and code and to communicate the impact of that design and code.
Checklists in working design sessions and reviews
It’s amazing how creating a simple checklist of items can improve
design and code inspections, walkthroughs and reviews. A list can start
as simple as something like:
- Transactions: scope, propagation, rollback scenarios
- Security: authentication, authorisation, auditing
- Persistence: format, lazy-loading
- Exception handling
- Event Handling
- Logging and tracing
- Caching: read or read/write, expiration, refresh
- Testing: positive tests, negative tests
I have also found that if the items on the list are kept concise, it is
amazing how quickly the team starts to remember most of the items on
the list, especially the ones that are truly useful to them. After a while
the actual physical list is barely needed.
Too many design review checklists are over-wordy because the authors cannot resist including explanations of why items are on the list. The list is a memory aid and an organising construct, not an instruction manual or pedagogical essay. One way to avoid this is to create a template for a traditional design specification document with instructions for what to put in each section, generate the table of contents, and use the table of contents as the starting point for the checklist. In many cases, you can throw the rest of the document template away.
We can make inspections even more useful by collecting various
metrics and using them to improve our processes and techniques. For
instance, as metrics on the type and number of defects found are
captured and examined, common problem areas will be revealed. Once
these problem areas are known, this can be fed back to the
developers and the development process can be tweaked to reduce the
problems. Tally marks against items on the checklist are a simple,
fast way to collect these metrics.
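Tallying defects per checklist item is easy to automate. A minimal sketch, where the item names and defect log are invented examples matching the checklist above:

```python
from collections import Counter

# Sketch: tally defects found in inspections against checklist items,
# then report the most common problem areas back to the team.

defects_found = [
    "Exception handling", "Logging and tracing", "Exception handling",
    "Transactions", "Exception handling", "Transactions",
]

tallies = Counter(defects_found)
for item, count in tallies.most_common():
    print(f"{item}: {'|' * count}")   # prints tally marks per item
```

Over time the most common entries point directly at the process tweaks or developer feedback that will pay off first.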
What are you reviewing?
Both design and coding inspections have a very subtle gotcha, and while technology can remove the problem for code inspections, it is harder to do the same for design inspections. For example, it is software design review time. The selected reviewers have been handed a document that they have diligently read. They then meet to go over any comments … but what are they actually reviewing? Are they reviewing the design described in the document or are they reviewing the document itself?
With the availability of excellent source code formatting utilities
in most development environments, code inspections should no longer
need to nitpick about the layout of the source code. However, when it
comes to design reviews, most are still described in hand-written
documents. The authors of these documents are frequently expected to
comply with some design specification document standard or template.
There is nothing intrinsically wrong with having a standard or template. The problem comes when a team of reviewers spend more time pointing out non-compliance in things like the fonts, paragraph and heading styles used, trivial grammatical correctness, and the lack of introductory content than in the design being described.
Yes, a design document needs to communicate design adequately, but it is far more important to get the design right. Correctly formatting a design document to effectively communicate a poor or broken design is a waste of time. This is one of the reasons I prefer the use of work and design packages.
There are times when reviewing the document is required; when the document standards or template are set by regulators or by a formal contract. However, that is a document review not a design review. Beware confusing the two. A design review is far more important than a document review, even when the document review is truly needed.
Of course, a tool that generates a document from artifacts produced during the design process (whether that process be upfront, purely iterative or somewhere in between) alleviates much of the document formatting issues, in the same way as code formatters do for code inspections. Features like the model audits and documentation generation in Micro Focus Together point towards the possibilities.
At regular intervals we take all the source code for the
features that we have completed and the libraries and components on
which it depends and we build the complete system.
Some teams build weekly, others daily, and others continuously. It really depends on the size of the project and the time it takes to build the system. If a system takes eight hours to build, a daily build is probably as frequent as is practical.
A regular build helps highlight integration errors early. This is especially true if the tests built by the feature teams to test individual features can be grouped together and run against the completed build to smoke out any inconsistencies that have managed to find their way into the build.
A regular build also ensures that there is always an up to date system that can be demonstrated to the client even if that system only does a few simple tasks from a command line interface. Developing by feature, of course, also means those simple tasks are of discernible value to the client.
A regular build process can also be enhanced to:
- Generate documentation using tools like JavaSoft’s Javadoc or Together’s greatly enhanced documentation generation capability
- Run audit and metric scripts against the source code to highlight any potential problem areas and to check for standards compliance.
- Be used as a basis for building and running automated regression tests to verify existing functionality remains unchanged after adding new features. This can be invaluable for both the client members and the development team.
- Construct new build and release notes listing new features added, defects fixed, etc.
These results can then be automatically published on the project
team or organization’s intranet so that up to the minute
documentation is available to the whole team.
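A regular build driver typically just chains these steps and stops at the first failure. The sketch below uses stubbed step functions purely for illustration; a real script would shell out to the project's compiler, documentation generator, audit tools and test runner:

```python
# Sketch of a regular (e.g. nightly) build driver. Each step is a stub;
# a real script would invoke the actual build, doc, audit and test tools.

def compile_sources():       return "ok"
def generate_docs():         return "ok"   # e.g. run Javadoc
def run_audits():            return "ok"   # standards compliance, metrics
def run_regression_tests():  return "ok"   # grouped feature-team tests

STEPS = [compile_sources, generate_docs, run_audits, run_regression_tests]

def nightly_build():
    results = {}
    for step in STEPS:
        results[step.__name__] = step()
        if results[step.__name__] != "ok":
            break               # stop on the first failing step
    return results              # publish these results to the intranet

print(nightly_build())
```

The returned results map is exactly the kind of artifact that can be published automatically to the team's intranet after each build.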
Configuration management systems vary from the simple to the highly sophisticated.
Theoretically, an FDD project only requires a CM system to identify the source code for all the features that have been completed to date and to maintain a history of changes to classes as feature teams enhance them.
Realistically a project’s demands on a CM system will depend on the nature and complexity of the software being produced. For example, whether multiple versions of the software need to be maintained, whether different modules are required for different platforms or different customer installations and so on. This is not explicitly related to the use of FDD; it is just business as usual on any sophisticated software development project where work is being done on different versions of a software system simultaneously.
It is a common fundamental mistake, however, to believe that only source code should be kept under version control. It is as important (maybe more important) to keep requirements documents, in whatever form they take, under version control so that a change history is maintained. This is especially true if the requirements form a legal commercial contract between two organizations.
Likewise analysis and design artifacts should be kept under version control so that it is easy to see why any changes were made to them.
Test cases, test harnesses and scripts, and even test results, should also be version controlled so that their history can be reviewed.
Any artifact that is used and maintained during the development of the system is a candidate for version control. Even contract documents with clients of the system that record the legal agreement for what is being built are candidates for versioning. The version of the development process you are using, and any changes and adjustments made during the construction and maintenance of the system, may also need to be versioned, with variances documented and signed off by project managers or chief programmers. This is especially true for systems that fall under the regulation of governmental bodies such as the FDA in the USA.
“Closely related to project control is the concept of
“visibility,” which refers to the ability to determine
a project’s true status. … If the project team
can’t answer such questions, it doesn’t have enough
visibility to control its project.”
Steve McConnell (1998), Software Project Survival Guide, Microsoft Press
“The working software is a more accurate status report than any paper report could ever be.”
Steve McConnell (1998), Software Project Survival Guide, Microsoft Press
It is far easier to steer a vehicle in the right direction if we can see precisely where we are and how fast we are moving. Knowing clearly where we are trying to go also helps enormously.
A similar situation exists for the managers and team leaders of a software project. Having an accurate picture of the current status of a project, knowing how quickly the development team is adding new functionality, and knowing the overall desired outcome gives team leads and managers the information they need to steer a project correctly.
Feature Driven Development is particularly strong in this area. FDD provides a simple, low overhead method of collecting accurate and reliable status information and suggests a number of straightforward, intuitive report formats for reporting progress to all roles within and outside a project.
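To make this concrete, FDD commonly tracks each feature through six weighted milestones, with the weights often cited as 1/40/3/45/10/1 percent. The Python sketch below is only an illustration of that reporting idea; the function names are assumptions, and teams adjust the weights to suit their own process.

```python
# Illustrative sketch of FDD-style progress reporting from completed milestones.
# The milestone names and weights are the commonly cited FDD values;
# treat them as an example rather than a fixed rule.
MILESTONES = [
    ("Domain Walkthrough", 1),
    ("Design", 40),
    ("Design Inspection", 3),
    ("Code", 45),
    ("Code Inspection", 10),
    ("Promote to Build", 1),
]

def feature_percent_complete(completed):
    """Percent complete for one feature, given the set of completed milestones."""
    return sum(weight for name, weight in MILESTONES if name in completed)

def project_percent_complete(features):
    """Overall progress: the average of the per-feature percentages."""
    if not features:
        return 0.0
    return sum(feature_percent_complete(c) for c in features) / len(features)

# Two features: one fully promoted to the build, one designed but not yet coded.
progress = project_percent_complete([
    {"Domain Walkthrough", "Design", "Design Inspection",
     "Code", "Code Inspection", "Promote to Build"},        # 100%
    {"Domain Walkthrough", "Design", "Design Inspection"},  # 44%
])
print(progress)  # 72.0
```

Because every percentage is derived from milestones that have actually been completed, a report like this reflects work done rather than work claimed.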
FDD blends a number of industry-recognized best practices into a
cohesive whole. The best practices used in FDD are:
- Domain Object Modeling – a thorough exploration and explanation of the domain of the problem to be solved, resulting in a framework within which to add features.
- Developing by Feature – driving and tracking development through a functionally decomposed list of small, client-valued functions.
- Individual Class Ownership – having a single person who is responsible for the consistency, performance and conceptual integrity of each class.
- Feature Teams – doing design activities in small, dynamically formed teams so that multiple minds are always applied to each design decision and multiple design options are always evaluated before one is chosen.
- Inspections – applying the best-known defect-detection technique and leveraging the opportunities it provides to propagate good practice, conventions and development culture.
- Regular Builds – ensuring there is always a demonstrable system available and flushing out any integration issues that manage to get past the design and code inspections. Regular builds also provide a known baseline to which to add more function and against which a QA team can test.
- Version Control – to identify the latest versions of
completed source code files and to provide historical tracking of
all information artifacts in the project.
- Progress Reporting – more specifically: frequent, appropriate, accurate progress reporting at all levels inside and outside the project based on completed work.
- Beck K. Extreme Programming Explained, Addison Wesley (2000)
- Beck K., Fowler M. Planning Extreme Programming, Addison Wesley (2001)
- Fagan M. E., Design and Code Inspections to Reduce Errors in Program Development, IBM Systems Journal, 15(3), 182-211 (1976)
- Freedman D. P., and Weinberg G. M. Software Inspections: An Effective Verification Process, IEEE Software, May 31-36 (1982)
- Gilb T. Principles of Software Engineering Management, pp. 205-226 and pp. 403-442 Wokingham: Addison Wesley (1988)
- Gilb T., Graham D. Software Inspection, Addison Wesley (1993)
- Jones C. L. A Process-Integrated Approach to Defect Prevention. IBM Systems Journal, 24(2), 150-167 (1985)
- McConnell S. Software Project Survival Guide, Microsoft Press (1998)
This is an updated version of an article that was first published as CoadLetter #86 while I was an editor of that newsletter. That newsletter issue also formed the basis for chapter 3 of A Practical Guide to FDD.