Feature Driven Development was introduced in Chapter 6 of the book, Java Modeling in Color with UML [Coad]. Mac Felsing and I elaborated on the topic in our book, A Practical Guide to Feature-Driven Development [Palmer]. Software development process is an emotional issue, so here are a few key quotes from the chapter to keep in mind when reading this:
"For enterprise-component modeling to be successful, it must live and breathe within a larger context, a software development process."
"We think most process initiatives are silly. Well-intentioned managers and teams get so wrapped up in executing process that they forget that they are being paid for results, not process execution."
"No amount of process over-specification will make up for bad people. Far better: Staff your project with good people, do whatever it takes to keep them happy, and use simple, well-bounded processes to guide them along the way."
For those not familiar with FDD, I'll try to summarize it in a few pictures and paragraphs (please do refer to [Coad] for a more detailed introduction). Those who are familiar with FDD might want to skip to the comparison.
FDD is a model-driven, short-iteration process. It begins with establishing an overall model shape. Then it continues with a series of two-week "design by feature, build by feature" iterations. The features are small, "useful in the eyes of the client" results. FDD consists of five processes or activities.
1. Develop an Overall Model
For the first activity, domain and development members work together under the guidance of an experienced component/object modeler (the Chief Architect). Domain members present an initial high-level, highlights-only walkthrough of the scope of the system and its context. The domain and development members produce a skeletal model, the very beginnings of what is to follow. Then the domain members present more detailed walkthroughs. Each time, the domain and development members work in small sub-teams (with guidance from the Chief Architect), present sub-team results, and merge the results into a common model (again with guidance from the Chief Architect), adjusting the model's shape along the way.
2. Build a Feature List
Using the knowledge gathered during the initial modeling, the team next constructs as comprehensive a list of features as it can. A feature is a small piece of client-valued function expressed in the form: <action> the <result> <by|for|of|to> a(n) <object>; for example, 'calculate the total of a sale'. Existing requirements documents, such as use cases or functional specs, are also used as input. Where they do not exist, the team notes features informally during the first activity. Features are clustered into sets by related function and, for large systems, these feature sets are themselves grouped into major feature sets. Again working with domain experts, features are also prioritized and a minimum whole product identified - the minimum set of features needed for the system to be of value to the business.
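As an illustrative sketch (not part of FDD itself), the feature-naming template can be expressed as a simple pattern check; the regular expression and helper function below are invented for illustration:

```python
import re

# Illustrative sketch: the FDD feature template
#   <action> the <result> <by|for|of|to> a(n) <object>
# expressed as a loose regular expression. This pattern and helper are
# examples only, not part of the FDD process definition.
FEATURE_PATTERN = re.compile(
    r"^(?P<action>\w+) the (?P<result>[\w\s]+?) "
    r"(?P<connector>by|for|of|to) an? (?P<object>[\w\s]+)$",
    re.IGNORECASE,
)

def parse_feature(text):
    """Return the template parts of a feature name, or None if it doesn't match."""
    match = FEATURE_PATTERN.match(text.strip())
    return match.groupdict() if match else None
```

For instance, `parse_feature("calculate the total of a sale")` yields the action `calculate`, result `total`, connector `of` and object `sale`, while a string that does not follow the template yields `None`.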
3. Plan By Feature
The third activity is to sequence the feature sets or major feature sets (depending on the size of the system) into a high-level plan and assign them to chief programmers. Developers are also assigned to own particular classes identified in the overall object model.
4-5. Design By Feature / Build By Feature
Activities four and five are the development engine room. A chief programmer selects a small group of features to develop over the next 1-2 weeks and then executes the 'Design By Feature (DBF)' and 'Build By Feature (BBF)' activities. He identifies the classes likely to be involved, and the corresponding class owners become the feature team for this iteration. This feature team works out detailed sequence diagrams for the features. Then the class owners write class and method prologs. Before moving into the BBF activity, the team conducts a design inspection. In the BBF activity, the class owners add the actual code for their classes, unit test, integrate and hold a code inspection. Once the chief programmer is satisfied, the completed features are promoted to the main build. It is common for each chief programmer to be running 2-3 feature teams concurrently and for class owners to be members of 2-3 feature teams at any point in time.
Track by Feature
With FDD, we can track and report progress with surprising accuracy. We begin by assigning a percentage weighting to each step in a DBF/BBF iteration.
The chief programmers indicate when each step has been completed for each feature they are developing. Now we can easily see how much of a particular feature has been completed. Simply posting the list of features on a wall, color-coded green for 'complete', blue for 'in progress' and red for 'requiring attention' provides a good visual feel for overall progress with the ability to 'zoom in' to read the detail by simply walking closer to the wall.
Then straightforward tools roll these percentages up to the feature set and major feature set levels, providing highly accurate, color-coded progress reports for development leads, project managers, project sponsors and upper management.
Graphing and trending these numbers over time makes it easy to monitor progress rates.
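The roll-up arithmetic can be sketched as follows. The six milestone names and weightings below are one commonly quoted FDD scheme, used here purely for illustration; teams choose their own steps and weights:

```python
# Sketch of Track By Feature roll-up. The milestone weightings below are
# illustrative, not prescribed by FDD; they sum to 100.
MILESTONE_WEIGHTS = {
    "domain walkthrough": 1,
    "design": 40,
    "design inspection": 3,
    "code": 45,
    "code inspection": 10,
    "promote to build": 1,
}

def feature_percent(completed_milestones):
    """Percent complete for one feature, given its completed milestone names."""
    return sum(MILESTONE_WEIGHTS[m] for m in completed_milestones)

def feature_set_percent(feature_set):
    """Roll feature percentages up to the feature-set level (simple average)."""
    percents = [feature_percent(ms) for ms in feature_set.values()]
    return sum(percents) / len(percents)

def status(percent, in_progress):
    """Colour-code a feature: green when done, blue when moving, red otherwise."""
    if percent == 100:
        return "green"
    return "blue" if in_progress else "red"
```

A feature whose design is complete but whose code has not started would report 44% (1 + 40 + 3); averaging such numbers across a feature set gives the figure posted on the wall chart.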
Short Comparison with XP
Reading the introductions to FDD and XP reveals many similar factors driving the development of the two processes.
- Traditional heavy processes with long ‘analysis phases’ are unworkable for projects running on internet time; business requirements are changing monthly if not weekly.
- Software continues to be delivered late and over budget, with less useful function than first envisioned.
Both FDD and XP are designed to enable teams to deliver results quicker without compromising quality. Both processes are highly iterative and results oriented. They are both people focused instead of document focused (no more thousand-page specifications to write). Both dismantle the traditional separation of domain and business experts/analysts from designers and implementers; analysts are dragged out of their abstractions and put in the same room as developers and users. These new processes, together with new tools and techniques, are enabling and encouraging analysis, design, code, test and deployment to be done concurrently.
So where do FDD and XP differ?
1. Team sizes
“XP is designed to work with projects that can be built by teams of two to ten programmers, that aren’t sharply constrained by the existing computing environment, and where a reasonable job of executing tests can be done in a fraction of a day.” [Beck]
FDD was first used with a team of 16-20 developers of varying abilities, cultural backgrounds and experience: four chief programmers (CPs) and sixteen class owners split into User Interaction (UI), Problem Domain (PD) and Data Management (DM) teams. FDD is designed to scale to much larger team sizes; the limiting factor is the number of available CPs. Chief programmer teams have been proven in practice to scale well to much larger project teams (by the authors of FDD and independently [Brooks]).
2. Metaphor and Model
The XP process begins with the Business writing stories on index cards. A story is something the system needs to do. Development then estimates the time required to implement each story.
The whole project is guided by a system metaphor, "an overall story that everyone - customers, programmers and managers - can tell about how the system works". [Beck]
The Business selects the subset of stories that will form the next release and Development makes a delivery commitment. Development splits each of the stories into a number of tasks. Each developer accepts responsibility for a set of tasks.
Replace stories with domain walkthroughs and tasks with features and it sounds very similar to the first three activities in FDD.
The enormous difference between XP and FDD is FDD's additional development of an overall domain object model. As developers learn of requirements they start forming mental images of the system, making assumptions and estimating on that basis. Developing an overall domain object model forces those assumptions out into the open, misunderstandings are resolved and a more complete, common understanding is formed.
XP uses the analogy of driving a car - driving requires continual little course adjustments, you cannot simply point the car in the right direction and press the accelerator. A domain object model is the map to guide the journey; it can prevent you from driving around in endless circles. The domain object model provides an overall shape to which to add function, feature by feature.
The domain object model enables feature teams to produce better designs for each group of features. This reduces the number of times a team has to refactor its classes to add a new feature. Reducing the time spent refactoring increases the time that can be spent adding new features.
3. Collective Ownership or Class Ownership
XP promotes collective ownership of code; any developer can add to or alter any piece of source code as they discover the need. But collective ownership usually degenerates into non-ownership as the number of people involved grows. Small communes often work, larger communes rarely work for any length of time. XP claims three benefits from collective code ownership:
- We avoid waiting for someone to make a change we need in their code.
- Overly complex code is eliminated because anyone who finds such code will try to simplify it. Knowing this, developers are less likely to add complexity that they cannot justify.
- Collective ownership spreads knowledge of a system throughout the team reducing risk if a critical team member leaves.
Feature teams also solve these problems, while keeping the well established benefits of individual code ownership:
- By definition, all the owners of classes needing updates for the development of a particular feature are members of the feature team. In other words, the feature team owns all the code that needs changing for a particular feature. This minimizes the waiting for someone else to modify their code.
- All low-level design in FDD is done within feature teams (Design By Feature). The irritating 'development by surprise' problem, where a developer delivers code that differs from the agreed design, is caught at code inspection by the feature team and rejected. Overly complex code is caught in the same way, before it enters the system.
- Although class owners work only on the classes they own, owners of closely associated classes frequently work in the same feature team. They get to know those closely associated classes. Knowledge is clustered rather than randomly scattered.
XP also assumes that short integration and testing cycles mean a low rate of collisions from developers updating the same piece of source code. For larger numbers of developers and larger systems, this is obviously less and less likely to be true.
4. Inspections and Pair Programming
Design and code inspections, when done well, are proven to remove more defects than testing. Secondary benefits include:
- education; developers learn techniques from each other
- coding standard enforcement: conformance is checked
XP uses pair programming to provide a continuous level of design and code inspection. All low-level design and coding is done in pairs. This is obviously better than individual developers delivering code without any form of inspection.
FDD promotes more formal inspections by feature teams; the level of formality is left to the chief programmer's discretion. This takes more time, but it has added advantages over pair-programming:
- fresh eyes to look at the code, catching bad assumptions made by the coder(s)
- a chief programmer present to ensure the techniques learnt are good techniques. Yes, developers can just as easily teach each other bad habits as good ones.
- a change of pace for developers - an hour or so away from the terminal (assuming the common practice of printing source code for inspection).
There is no reason why members of feature teams cannot pair up during coding when this is desirable. It is not unusual to see two members of a feature team working together where care is needed. One of the great things about feature teams is that a feature is complete only when the team is finished not when any one individual is finished; it is in the team members' own interests to help each other.
Correctness in XP is defined by the running of unit and functional tests. FDD takes unit testing almost for granted as part of Build By Feature. FDD does not define the mechanisms or level of formality for unit testing; it leaves that to the chief programmer to do what is appropriate.
It is acceptable to use XP unit testing techniques in an FDD environment. Where continuous or regular system builds are performed, it certainly makes sense to have a growing set of tests that can be run against a new build. Again FDD does not specify this because technology and resources differ so much between projects. In some circumstances it is very difficult to produce a set of completely isolated, independent tests that run in a reasonable amount of time.
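As a sketch of the kind of per-feature unit test this implies, here is a test for the earlier example feature, 'calculate the total of a sale'. The `Sale` class and its API are invented for illustration; the testing framework (Python's built-in unittest here) and level of formality are, as the text says, left to the chief programmer:

```python
import unittest

# Hypothetical class for the feature 'calculate the total of a sale'.
# The Sale API below is invented purely to illustrate per-feature unit tests.
class Sale:
    def __init__(self):
        self._line_items = []

    def add_line_item(self, unit_price, quantity):
        self._line_items.append((unit_price, quantity))

    def total(self):
        return sum(price * qty for price, qty in self._line_items)

class CalculateSaleTotalTest(unittest.TestCase):
    """Unit tests promoted alongside the feature in a Build By Feature step."""

    def test_empty_sale_totals_zero(self):
        self.assertEqual(Sale().total(), 0)

    def test_total_sums_price_times_quantity(self):
        sale = Sale()
        sale.add_line_item(2.50, 4)   # 10.00
        sale.add_line_item(1.25, 2)   # 2.50
        self.assertEqual(sale.total(), 12.50)

if __name__ == "__main__":
    unittest.main()
```

A growing suite of such feature-level tests is exactly what a regular system build can be run against.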
XP leaves tracking to the project managers, encouraging them to minimize the overhead of collecting data and use large visible wall charts. In contrast, Tracking By Feature in FDD describes a low-overhead, highly accurate means of measuring progress and provides the data to construct a large variety of practical, useful, progress charts and graphs.
It is important to discover what works for you and your organization. The name of the process you use is not important. What is important is the ability to repeatedly deliver frequent, tangible, working results on time, within budget and with agreed function.
Kent Beck acknowledges, among others, the contributions of Ward Cunningham, Ron Jeffries, Martin Fowler, Erich Gamma and Doug Beck in the development of XP. [Beck]
The main minds behind FDD are Jeff De Luca and Peter Coad with contributions from M.A. Rajashima, Lim Bak Wee, Paul Szego, Jon Kern and Stephen Palmer [Coad].
[Brooks] Brooks, Frederick P. Jr., The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition, Addison-Wesley, 1995.
[Beck] Beck, Kent, Extreme Programming Explained: Embrace Change, Addison-Wesley, 1999.
[Coad] Coad, Peter, Eric Lefebvre, and Jeff De Luca, Java Modeling in Color with UML, Prentice Hall, 1999.
[Palmer] Palmer, Stephen R., and John M. Felsing, A Practical Guide to Feature-Driven Development, Prentice Hall, 2002.
This article was first published as CoadLetter #70