"Agile software development is about iteration not oscillation"
Jim Highsmith (paraphrased), Agile 2009


"Perhaps the worst software technology of all time was the use of physical lines of code [for metrics]. Continued use of this approach, in the author’s opinion, should be considered professional malpractice."
Capers Jones, Applied Software Measurement


"Information Technology is 80% psychology and 20% technology "
Jeff De Luca, www.nebulon.com


"Kanban is the science of not trying to do too much at once"
Stephen Palmer, 2012


"Speech is conveniently located midway between thought and action, where it often substitutes for both."
John Andrew Holmes


"I guess that as you go along you try to fill your tool-box, so that when you face these circumstances you have more options to choose from"
Jonny Wilkinson, Tom Fordyce's Blog, 2009


"We apply the analytic procedure in order to create a conceptual framework within which we can draw conclusions about the means by which a system solves its tasks. Indeed, we do this with the express purpose of establishing a solid foundation from which we can carry out a subsequent synthesis. This synthesis, in turn, acts to verify the conceptual model as an explanation. The process is an iterative one."
Tom Ritchey, Analysis and Synthesis: On Scientific Method, 1991


"If you can't explain it simply, you don't understand it well enough."
Albert Einstein


"Use models to find out how things work or to find solutions to puzzling dilemmas. Create models to communicate ideas and understand things you can’t see. Recognize models and the countless ways models are used for working, playing, teaching and explaining. Assess models for what they do and don't tell you about the real thing and how useful they are"
Boston Science Museum


"chaos often results when a project's complexity is greater than its managers' ability to direct meaningful progress toward a goal."
Ken Schwaber, Agile Project Management with Scrum, 2004


"As the number of people increases, the ways they can interact tend to multiply faster than you can control them."
Gerald M. Weinberg, Quality Software Management: Volume 1 Systems Thinking, 1992


"Human brain capacity is more or less fixed, but software complexity grows at least as fast as the square of the size of the program."
Gerald M. Weinberg, Quality Software Management: Volume 1 Systems Thinking, 1992


"The complexity of software is an essential property, not an accidental one. ... Many of the classical problems of developing software products derive from this essential complexity and its nonlinear increases with size."
Frederick P. Brooks, The Mythical Man-Month, 1995


"When people are factored in, nothing is simple. The complexity of individuals and individuals working in teams raises the noise level for all projects."
Ken Schwaber, Mike Beedle, Agile Software Development with Scrum, 2002


"It usually takes more than three weeks to prepare a good impromptu speech."
Mark Twain


"Simplicity is the final achievement. After one has played a vast quantity of notes and more notes, it is simplicity that emerges as the crowning reward of art."
Frederic Chopin


"Man will occasionally stumble over the truth, but most times he will pick himself up and carry on."
Winston Churchill


"The structure of a software system will reflect the communication structure of the team that built it."
R. E. Fairley


"We try to solve the problem by rushing through the design process so that enough time is left at the end of the project to uncover the errors that were made because we rushed through the design process"
Glenford Myers (via Jeff De Luca)


"Deliver frequent, tangible, working results"
Peter Coad


Cwality is one of the four 'c's of software development management, along with complexity, communication and change. But what is high quality software, and how do we ensure we are creating it?

What do we actually mean when we talk about software as being of high quality? How do we ensure we develop high quality software? The following exercise can help answer these two questions. Work through it yourself, with your software development teams, and with the stakeholders on your projects.

A High Quality Exercise

Step 1: Spend a few minutes listing the characteristics that make a piece of software ‘high quality’. After doing this you should have a collection of terms or phrases like correct, fast, reliable, modular, standards-compliant, easy to use, etc.

Step 2: List the different ways you can think of checking or measuring each of these quality attributes. Techniques that you will want to consider include:

  • automated testing
  • manual testing
  • static analysis tools (compiler warnings, automated audits and metrics)
  • inspections, walk-throughs and reviews

Step 3: Based on these results, discuss how the team can change the ways they work together to improve the quality of the software they produce. Are there any important quality attributes that the team is not checking in some way? Are the checks being done at the best points in the project and at the best frequency? Are there quality assurance activities currently being performed that are not contributing to the improvement of the characteristics identified in step 1? If so, are they actually adding value? Challenge yourselves to justify the time and cost of continuing to do those.

Tips for doing the exercise

When asked to do this exercise, people frequently offer three immediate answers:

  1. high quality software is software that fulfills a set of requirements
  2. it depends on who you ask
  3. it depends on the context

The first answer is obviously correct but needs elaboration. It is not just a matter of meeting any set of requirements. It is a question of whose requirements should be met [Weinberg]. The customers paying for the software may have requirements that differ slightly from those of the actual users of the software. In addition, system operators, system administrators, developers maintaining the current release of the software, and developers preparing the next release are likely to have very different requirements from the users of the software.

Therefore, we have to concede that the second answer above is correct too. We need to address the question in step 1 of the exercise from the perspective of the different roles and stakeholders in the project. One simple way of thinking about this is to split quality characteristics into two categories: external and internal [Meyer].

External Quality

Ask the users of a software product whether they think it is of high quality and they will answer based on externally observable attributes of the software. It is likely to be considered of low quality if it hangs or crashes too often, is too slow in producing results, sometimes produces the wrong results, does not work well with other pieces of software, or has an overcomplicated or tedious user interface. In contrast, if a piece of software does what it should reasonably quickly under normal circumstances, handles abnormal circumstances gracefully, works well with other software components, and is relatively easy to use, then it is likely to be considered high quality.

The relative importance of these various factors obviously differs for different types of software but it can also vary for different kinds of user. For example, occasional users may value ease of use over performance but everyday users may value performance over ease of use. A particular functional defect might be of no importance to users doing one kind of work but critical to users doing another.

Internal Quality

While those responsible for maintaining and further developing the software are just as concerned about externally visible quality factors, they are also interested in internal quality factors such as modularity and extensibility. Are the design and the source code easily understood, for example? Do they comply with accepted standards, patterns, and best practices?

Over time most developers come to realise that internal quality characteristics are essential to maintaining external quality characteristics. Initially a piece of software with low internal quality may exhibit high external quality. However, unless the internal quality is improved, the external quality eventually suffers as developers continue to fix bugs and add new features. This is where refactoring becomes truly useful. Refactoring is about making improvements to the internal structure of a piece of code without changing its external behavior [Fowler]. Of course, if we can start with high internal quality then the amount of significant refactoring needed is likely to be reduced.
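
For example, here is Fowler's Extract Method refactoring in a minimal Java sketch (an illustration using a hypothetical InvoiceFormatter class, not code from this article). The internal structure improves, but the method's externally observable behavior is identical before and after:

    class InvoiceFormatter {

        // Before: the tax rule is buried inline in a longer method.
        String invoiceSummaryBefore(double net) {
            double tax = net * 0.2;
            return "Total: " + (net + tax);
        }

        // After Extract Method: the tax rule has a name and a single home,
        // yet the method returns exactly the same string as before.
        String invoiceSummaryAfter(double net) {
            return "Total: " + (net + calculateTax(net));
        }

        double calculateTax(double net) {
            return net * 0.2;
        }
    }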

In incremental or iterative development, maintaining internal quality becomes even more important. In fact, to be able to maintain a high level of functional correctness and performance of a software product throughout an aggressive release schedule demands an equal emphasis on maintaining the internal quality and conceptual integrity of that software. To quote a very cringey old television advert from Singapore, "Wellness comes from within".

Just as with external quality, the relative importance of the different internal quality factors of a piece of software depends on the type of software involved.

Quality Spectrum

As Mac Felsing pointed out in A Practical Guide to Feature-Driven Development, splitting quality characteristics into two categories is a simplified view of reality. Actually, the internal and external categories are two extremes in a broad spectrum of perspectives on quality. Nevertheless, introducing strategies to improve the two extremes of internal and external quality will go a long way to satisfying many of the viewpoints in between.

Quality context

What about the context? The context within which the software is being developed will certainly impact whether the software is considered high quality or not. An occasional software crash in a stand-alone personal computer application may be annoying, but a software crash in an industrial control system could kill someone. In both cases, robustness is still a desirable quality characteristic for the software. What differs is the level of robustness required in the two different contexts. Therefore, while the context may determine the relative importance of a particular characteristic, the quality characteristics and attributes themselves remain similar.

Example results from the exercise

The following table summarizes the results from running this exercise with a number of different software teams:

External Quality Attributes

  Quality Characteristic | Automated Testing | Manual Testing | Static Analysis Tools | Review
  Genuinely Useful       | -                 | ●              | -                     | -
  Easy to Use            | -                 | ●              | -                     | ○
  Correct                | ●                 | ●              | ●                     | ●
  Robust                 | ●                 | ●              | ●                     | ●
  Fast                   | ●                 | ●              | ○                     | ○
  Efficient              | ●                 | ●              | ○                     | ○
  Compatible             | ●                 | ●              | ○                     | ○
  Adaptable              | ●                 | ●              | ●                     | ●
  Secure                 | ●                 | ●              | ○                     | ●
  Consistent (UI)        | -                 | ●              | -                     | ●

Internal Quality Attributes

  Quality Characteristic     | Automated Testing | Manual Testing | Static Analysis Tools | Review
  Code is easy to understand | -                 | -              | ○                     | ●
  Extensible                 | -                 | -              | ○                     | ●
  Modular                    | -                 | -              | ○                     | ●
  Loosely Coupled            | -                 | -              | ○                     | ●
  Highly Cohesive            | -                 | -              | ○                     | ●
  Compliant                  | -                 | -              | ●                     | ●
  Consistent                 | -                 | -              | -                     | ●
  Reusable                   | -                 | -              | ○                     | ●
  Portable                   | -                 | -              | ○                     | ●
  Documented                 | -                 | -              | ●                     | ●

Table 1: example results from quality attribute exercise

Key:

  ● determining technique: this way of measuring quality can determine if the level of quality for this characteristic is appropriate
  ○ supporting technique: this way of measuring quality can indicate potential problem areas for this quality characteristic

In more detail:

Genuinely Useful

High quality software does something that someone finds useful or entertaining. It also does that something in ways that do not devalue that usefulness or entertainment. For example, software that calculates a quantity faster than someone can do so by hand is useful, but that usefulness is likely to be totally devalued if the software only calculates the right answer 90% of the time. If this is known to be the case, the user ends up calculating the answer manually anyway to ensure the computer got it right, completely undermining the software's usefulness.

Only actual use of the software can truly determine if a piece of software is genuinely useful to its users and those paying for it. While other techniques may verify and validate certain characteristics, ultimately only manual acceptance by users (User Acceptance Test) and customers (Customer Acceptance Test) decides if software is genuinely useful.

Easy to Use

High quality software is a delight to use. It is easy to learn, easy to remember how to use, and as simple as possible to use without compromising its genuine usefulness. High quality software does not require seven clicks of a mouse to perform a frequent task when it could be done in two clicks. Together, genuine usefulness and ease of use are the two overriding external quality characteristics. All the other external quality characteristics contribute to achieving the required levels for these two attributes.

In most cases it is very difficult if not practically impossible to create automated tests and static analysis tools that indicate that a piece of software is easy to use. In some cases, it might be possible to create automated tests and static analysis tools to highlight a certain part of the software that is likely to prove difficult to learn and use. You may also be able to spot potential usability problems or poor practice during reviews of requirements, design, or code. Nevertheless, only manual testing can determine if a piece of software is truly easy to use.

Correct

Correct software produces the expected results under normal operating conditions. Automated testing, manual testing, static analysis, and reviews can all contribute effectively to the discovery of logical errors in the software. Currently, most teams still rely on reviews, walk-throughs, or inspections to spot logical errors in requirements and designs; tooling in this area is not yet in widespread use.

Robust

Robust software handles abnormal operating conditions appropriately. Automated testing, manual testing, static analysis, and reviews can all contribute effectively to identify poor error and abnormal situation handling by the software. In addition, reviews, walk-throughs, or inspections can spot error-handling omissions in requirements and designs.

Fast

Fast software produces the required results quickly enough. While static analysis and reviews or inspections may spot potential performance problems, only testing can determine if a piece of software executes quickly enough.

Efficient

Efficient software produces the required results quickly enough without overloading the computer or network of computers that it is running on. Again, static analysis and reviews, walk-throughs, or inspections may identify points of inefficient resource use, but only testing can truly determine if a piece of software will run without consuming too many computing resources.

Compatible

Compatible software plays well with other software or hardware. Static analysis and reviews, walk-throughs, or inspections may help a little in identifying specific instances of incompatible software constructs but only testing can truly determine if software is compatible with hardware and other software.

Adaptable

Adaptable software is easy to use in situations that were not originally envisaged by its designers. All of the techniques considered above can be used to determine the level of adaptability of a piece of software.

Consistent (UI)

A consistent user interface enables users to achieve similar tasks in similar ways. Usually this requires manual testing supported by some form of inspection, walk-through or review of the user interface design and code.

Code is Easy to Understand

High quality software code is easy for developers to understand. Only some form of inspection, walk-through or review of the design and source code can really decide this. Static analysis tools can identify areas of code that may prove hard to understand.

Extensible

Extensible software is easy to enhance and modify. Easy to understand and easy to extend are the overriding internal quality characteristics; all the other internal characteristics contribute to and support these two. Nevertheless, while static analysis tools might be able to highlight areas of code that may prove hard to extend, only inspections, walk-throughs and reviews can determine if code is easy to enhance and modify.

Modular

Modular software is organized into relatively independent and autonomous sections. Static analysis tools can measure the extent of modularity, but only inspections, walk-throughs and reviews can decide whether or not the level of modularity is appropriate.

Loosely Coupled

Coupling is a measure of how interdependent units within a piece of software are. A unit could be a single class in an object-oriented programming language, a library of functions in a more procedural or functional programming language, or a single component or module in whatever form they take. The higher the number of other units that each unit uses and is therefore dependent upon, the tighter the coupling in the software. Static analysis tools can measure the level of coupling for different types of unit, but only inspections, walk-throughs and reviews can determine whether that level is appropriate or not for this particular piece of software.
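
As a rough illustration (hypothetical classes, not from the original text), the difference shows up in whether a class names its collaborators directly or depends only on a small interface:

    import java.util.List;

    // Loosely coupled: ReportGenerator depends only on this small interface,
    // so any data source can be supplied, including a stub in unit tests.
    interface DataSource {
        List<String> fetchRows();
    }

    class ReportGenerator {
        private final DataSource source;

        ReportGenerator(DataSource source) {
            this.source = source;
        }

        // Renders one line per row; knows nothing about where rows come from.
        String render() {
            return String.join("\n", source.fetchRows());
        }
    }

Had ReportGenerator constructed, say, a concrete MySqlDatabase itself, every change to that class could ripple into the report code; the interface keeps the dependency to a single, narrow point.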

Highly Cohesive

Cohesion in software is a measure of how related the contents of a unit of source code are. The more related the concepts within a unit are, the higher the cohesion of that unit. Static analysis tools can measure the level of cohesion, but only inspections, walk-throughs and reviews can determine whether that level is appropriate or not for the units comprising the software.
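
A small, hypothetical Java sketch (not from the original text) makes the contrast concrete:

    // Low cohesion: three unrelated responsibilities lumped into one class.
    class Utilities {
        double calculateTax(double amount) { return amount * 0.2; }
        void sendEmail(String to, String body) { /* ... */ }
        String formatDate(java.util.Date date) { return date.toString(); }
    }

    // Higher cohesion: one class, one closely related set of concerns.
    class TaxCalculator {
        private final double rate;

        TaxCalculator(double rate) {
            this.rate = rate;
        }

        double calculate(double amount) {
            return amount * rate;
        }
    }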

Compliant

Compliant software follows agreed standards and conventions. Static analysis tools can do a very good job of enforcing coding standards and conventions. Inspections, walk-throughs and reviews can be used instead but are a much more expensive way of doing so. It is better to reserve code inspections, walk-throughs and reviews for items that cannot be checked by static analysis tools. However, inspections, walk-throughs and reviews are typically the only technique available for checking compliance to requirements and higher-level design standards and conventions.

Consistent

Consistent software performs similar tasks in similar ways. This normally requires some form of inspection, walk-through or review to verify. Static analysis tools can spot large blocks of duplicate or similar-looking code, but repeated cutting, pasting and tweaking of significant amounts of code is not the correct way to go about achieving consistency.

Reusable

Reusable software is designed to be used again and again. While static analysis tools might be able to determine how much certain sections of code are reused, only some form of inspection, walk-through or review can make the judgment call required to decide if a piece of design or code is reusable enough or not.

Portable

Portable software requires little or no changes to run on different platforms. Static analysis tools can identify specific uses of non-portable code constructs. Reviews, walk-throughs and inspections can determine if code is portable enough or not.

Well-documented

Well documented software has appropriate comments within the code and an appropriate level of supporting external documentation. Static analysis tools can count the number of comments within code, indicate if documentation comment tags (e.g. Javadoc tags) are missing, and calculate comment-line to source-code-line ratios. Nevertheless, only an inspection, walk-through or review can determine if comments are meaningful, easy to understand, comprehensive, and truly contribute to the understanding of the code. Similarly, only inspections, walk-throughs or reviews can determine if supporting design and user documentation is sufficient for the context in which the software is being developed and used.
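
For instance, a static analysis tool can check that the Javadoc comment below exists and that its @param and @return tags are present, but only a human reviewer can judge whether the words actually help (a hypothetical example, not from the original text):

    class OrderCalculations {
        /**
         * Calculates the gross amount for an order line.
         *
         * @param net     the net amount in the order currency; must not be negative
         * @param taxRate the tax rate as a fraction, for example 0.2 for 20%
         * @return the net amount plus the tax due on it
         */
        double grossAmount(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }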

Timing and Frequency

Study after study over the last thirty years has shown that it is far easier and much more cost-effective to fix a problem close to the point in time that it was introduced than at some point significantly further into the future.

Some of the reasons for this are obvious:

  • The later a problem is identified, the higher the likelihood that further work has been built on top of it. That work will need to be rechecked and maybe redone.
  • The later a problem is identified, the more groups of people are involved and the more administrative process is required to have it fixed. For example, a developer spotting a defect when running unit tests on a new piece of his own work requires little in the way of administrative process to fix that defect. In contrast, a defect in a shipped product discovered and reported by a customer usually requires the involvement of the technical support, development, testing, project management, release, and documentation teams.
  • The longer the distance between introducing a defect and identifying it, the greater the frustration, irritation and resistance of developers asked to go back and rework designs and source code they thought were finished long ago.

The following are some proven strategies for the early detection of problems in internal and external quality.

Strategy 1: Use a highly iterative process.

Shorten the duration of each of the analysis, design, implementation, and test cycles by using a highly iterative process. A traditional waterfall process that does all the analysis first, followed by all the design, then all the coding, and then all the testing, obviously has the longest possible 'distance' between the end of an analysis, design or implementation activity and the start of the testing and review activities where analysis, design and coding defects may be identified. A highly iterative development process breaks down the deliverables of a software project into small pieces and applies the development process to each of those small pieces. Therefore, using a highly iterative process means that the analysis, design and coding of each piece reaches testing and review much more quickly.

A few other points are worth noting in this respect:

  1. Analysis, design, implementation, and testing activities are never done in a pure sequence. However, as my old economics lecturer used to say, 'let's assume the curve on the graph is really a straight line to make the math easier; it is the economic principle that is important at this point, not the details of the math'. So, for the purposes of this discussion, I am making an analogous simplification and ignoring iteration within and between analysis, design, implementation, and testing, assuming that they are roughly performed or, at least, completed in sequence.
  2. The longer the possible 'distance' between introducing a defect and identifying it, the more formal the process needed for bug reporting, tracking, fixing and re-testing, because more people and longer intervals of time are involved. It follows, therefore, that we can realistically expect highly incremental and iterative processes to require less formality than their waterfall-style cousins precisely because they significantly reduce this 'distance'.
  3. All modern, self-styled agile processes, including Scrum, Kanban and Feature-Driven Development (FDD), use short iterations, with Scrum teams typically using fixed-length iterations of two to four weeks, and Kanban and FDD preferring even shorter, variable-length iterations.

Strategy 2: Use collaborative analysis and design sessions

Design is all about examining the trade-offs between various alternatives and picking the one that best solves the problem under consideration. Picking an inappropriate solution can lead to considerable amounts of rework and refactoring later on. Getting good designs earlier means less refactoring and rework, and higher internal quality earlier, which in turn makes it easier to achieve high external quality earlier.

Human beings, even the best of us, are fallible and have off days. However, in many software development organizations, individuals are expected to make significant design decisions almost every day. Sometimes mistakes are picked up at design reviews, but surprisingly few organizations practice these. Even when a mistake is caught in a review, it often means a significant amount of time has already been lost. An alternative approach is to use collaborative design sessions where design is done in small teams around flip charts or white boards, such as in FDD's Design By Feature process. More minds applied means more ideas considered, more alternatives examined, more chance of a truly elegant solution, and less chance of significant design mistakes.

However, for collaborative design sessions to be more productive than individuals working separately requires discipline and management. Facilitating team design sessions and knowing when to work together and when to work separately is a highly valuable skill in a development team lead or chief programmer. CoadLetter #40 Lessons learnt from Fred contains some very useful tips and techniques for working well in small groups.

Strategy 3: Use good design and code inspections/reviews/walk-throughs

In addition to often being the only technique available for determining the level of certain quality attributes, peer reviews, walk-throughs and inspections can significantly shorten the distance between the introduction of defects and their detection. When done well, inspections have been shown to find more defects than testing, and to find different kinds of defects than those found by testing.

The qualifying phrase 'when done well' is important. Done badly, inspections and reviews rapidly become argumentative, demoralising, intimidating, and soul-destroying wastes of time. It is worth, at the very least, reading up on how to run inspections and reviews well before attempting to introduce them into a team's development culture.

Strategy 4: Use agreed compiler warnings, automated source code audits, and metrics

Static analysis tools like compiler warnings and automated audits and metrics can be very helpful in identifying potential problems in source code, and in spotting non-compliance with coding standards and idioms. However, they have to be targeted or they quickly become worthless. Agree as a team or organization on the appropriate settings for compiler warnings and automated audits and metrics; otherwise the results of running them are as likely to confuse, frustrate, and waste time as they are to help. Reporting hundreds of problems in source code that half the developers do not consider to be problems is not constructive, will not get anything fixed, and may obscure reports of genuine problems.

Developers are informed of potential problems whenever they compile or run the audits and metrics, greatly shortening the distance between the introduction of problems and their detection. Clearing or justifying agreed compiler warnings, audits and metrics should form part of the entry criteria for code inspections and reviews, and for source control check-ins. Running the same audits and metrics in a regular build provides an additional level of enforcement of those entry criteria.
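
As a simple illustration (hypothetical code, not from the original text), compiling the class below with javac -Xlint:rawtypes,unchecked reports both problems at compile time, long before the resulting ClassCastException surfaces in testing:

    import java.util.ArrayList;
    import java.util.List;

    public class WarningExample {
        public static void main(String[] args) {
            List names = new ArrayList();   // rawtypes warning: should be List<String>
            names.add(42);                  // unchecked warning: an Integer slips in
            String first = (String) names.get(0); // fails at runtime, not compile time
            System.out.println(first);
        }
    }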

One additional point: demanding and automatically checking code layout standards that are not supported by the code formatting features of the developers' editors is an exercise in futility and excessive idealism. Demanding that developers lay out code in a way that cannot be achieved by a single mouse-click invoking a source code formatting tool, or that will be broken by invoking the code formatter, is a ridiculous idea. For organizations that still require them, it is a shame that none of the code formatters I am aware of are capable of generating documentation that can be published as part of a team's coding standards.

Strategy 5: Create and Run Automated Tests

Writing and regularly executing automated test code or scripts is an obvious way of locating defects early. If a change introduces a defect that is caught by running one of the automated tests, the problem can be solved before the change is committed to the team's source control system. Similarly, automated integration tests can pick up problems during a nightly or weekly regular build, long before the defect finds its way into a formal customer acceptance or system testing activity.

Automated tests may be written manually using a framework like JUnit or TestNG, or may be generated using a purpose-built testing tool.
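
A minimal JUnit 4 test for the hypothetical TaxCalculator class sketched earlier might look like this; run on every build, tests of this kind provide the short feedback loop described above:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TaxCalculatorTest {

        @Test
        public void twentyPercentTaxOnOneHundred() {
            TaxCalculator calculator = new TaxCalculator(0.2);
            // 100 at a 20% rate should yield 20; the delta allows for rounding.
            assertEquals(20.0, calculator.calculate(100.0), 0.0001);
        }
    }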

All the strategies above shorten the distance between the introduction and detection of defects by building short feedback loops into the development process. In many ways this is analogous to the use of closed control loops in industrial process control.

Additional Quality Assurance Strategies

Here are a couple more quality assurance strategies:

Strategy 6: Apply analysis, design and implementation patterns

Apply analysis, design and implementation patterns to reuse proven solutions in analysis, design and implementation. This strategy probably needs little explanation these days. Many developers are aware of and recognize the value of analysis, design and coding patterns. Using proven building blocks reduces the likelihood of poor designs being chosen.
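
For example (an illustrative sketch with hypothetical classes, not from the original text), a team might agree on a simple factory so that the decision about which Exporter implementation to construct lives in exactly one place:

    // A simple factory: callers never name a concrete Exporter class.
    interface Exporter {
        void export(String data);
    }

    class CsvExporter implements Exporter {
        public void export(String data) { /* write comma-separated values */ }
    }

    class XmlExporter implements Exporter {
        public void export(String data) { /* write XML */ }
    }

    class ExporterFactory {
        static Exporter forFormat(String format) {
            switch (format) {
                case "csv": return new CsvExporter();
                case "xml": return new XmlExporter();
                default: throw new IllegalArgumentException("Unknown format: " + format);
            }
        }
    }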

As with compiler warnings and automated audits and metrics, it is important to agree as a team and organization on the patterns and variations of patterns you are going to use. Without this agreement patterns lose much of their value because any so-called pattern can be applied whether it is appropriate or not. Too many people have taken the idea of design patterns too far, resulting in many constructs of dubious value being called design patterns.

The application of some common design patterns can be found under the refactoring headings of programmers' editors and IDEs. The designers of the old Borland Together recognized the value of patterns very early on and included a number of configurable wizards that generated the skeleton code (and therefore the UML class diagrams in Together) for a number of useful analysis, design and coding patterns. Unfortunately, later product managers of Together dropped much of this major feature set instead of developing it further.

Strategy 7: Communicate design clearly at all levels of abstraction

Communicate design clearly at all levels of abstraction using the most appropriate means of communication: text, lists, tables and/or pictures.

Miscommunication and misunderstanding are behind many significant defects in the analysis and design of software components and systems. Reducing these problems can make a significant improvement in a system's internal and external quality.

They say a picture is worth a thousand words but sometimes a simple list or a few lines of source code communicate far better than any number of pictures. A good software development team minimizes communication disconnects and misunderstandings by using the most appropriate means available to communicate with the different roles and personalities within a development team and with the other stakeholders in a project.

Again some tools can help reduce the time and effort required to do this. These include tools that parse source code to help build UML-style class and interaction diagrams at various levels of detail, and tools that generate useful, up-to-date API reference documents and web pages.

Conclusion

Producing high quality software requires thought, discipline, and the ability to adapt and improve processes. It includes:

  • clearly understanding the characteristics of high quality software
  • identifying and applying appropriate quality assurance techniques throughout the software development process
  • building appropriate feedback loops into the process to reduce the time between introduction and detection of defects
  • selecting, appropriately configuring, and using developer tooling
  • a desire to continually raise the bar, to improve internal quality to make it easier to maintain and improve external quality

The costs of poor quality are tangible; they cost you customers and money, and ultimately affect the success of your business. Quality is not an optional extra in any part of what you do. If a customer or client experiences a lack of quality in one area, they are likely to jump to the conclusion that issues they are experiencing in another area are due to your lack of quality rather than their own mistakes.

The result of these bad assumptions is more calls to technical support, more time spent on technical support investigations and escalations, delays to projects, and a general loss of confidence in your software.

It is far more cost effective to ensure quality is high in all areas of what you are delivering.

The unavoidable price of reliability is simplicity.
C. A. R. Hoare
