Play the Quality Game

Looking for something different to do with your team in a retrospective, or as an ice-breaker or warm-up exercise at the beginning of a planning, story-writing, or other team activity? Running through this simple quality game can kick-start some good process-improvement discussions.

How to Play

  1. As a whole team or in groups of three or four, spend five minutes listing the characteristics that make a piece of software ‘high quality’ in the eyes of its users. In other words, look for words or short phrases to complete the sentence, “High quality software is ….”.
    For example: fast, robust, easy to use, …
  2. Then do the same again, but this time list the characteristics that make source code ‘high quality’ in the eyes of developers. In other words, look for words to complete the sentence, “High quality source code is ….”.
    For example: extensible, portable, easy to understand, …
  3. Next spend a couple of minutes listing the different QA methods and techniques available to the team. Group them under the headings of automated testing, manual testing, static analysis, and peer reviews.
  4. Now spend five to ten minutes identifying which of the four categories of QA methods and techniques from step 3 are useful for checking or measuring each of the quality attributes identified in steps 1 and 2.
  5. Finally, spend a few minutes reviewing the results as a whole team. Consider how you can improve the way the team works to increase the quality of your software and source code. In addition, identify any existing quality assurance activities that do not contribute effectively to checking any of the quality attributes you have identified, and ask whether you can justify continuing to do those activities.

How to Cheat

Occasionally a team might decide to state that high quality software is ‘software that meets the requirements’ or that high quality source code is ‘source code that complies with coding standards’ and stop there. Combat this by asking them to list some of the characteristics that should be covered by requirements, and some of the reasons why coding standards exist and are important.

As the facilitator of the quality game, you can use the following ‘cheat sheet’ to prompt and suggest additional ideas to help teams:

(the cheat sheet is also available separately as TheQualityGameCheatSheet)

Step 1: Characteristics of High Quality Software

  • Correct – does the software do what it says on the tin under normal circumstances?
  • Reliable – does the software work correctly every time?
  • Robust – can the software handle abnormal conditions gracefully and appropriately? (see the sketch after this list)
  • Consistent – are similar tasks done in similar ways, both from a user interface perspective and from a design and implementation perspective?
  • Fast – does the software do what it says on the tin quickly enough?
  • Efficient – does the software avoid consuming too many computing resources, e.g. processor, RAM, disk I/O, network I/O, etc.? Can other software run at the same time on an average end-user’s machine?
  • Secure – a subset of ‘correct’, but important and specialized enough to be worth a separate mention.
  • Simple – is there any unnecessary complexity, restrictions, or over-complication in the user interface? Can necessary complexity be hidden by better abstractions?
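
To make ‘robust’ a little more concrete, here is a minimal Java sketch contrasting happy-path code with code that handles abnormal input gracefully. The AgeParser class and its methods are invented purely for illustration and come from no particular codebase:

```java
// A minimal sketch of robustness; the AgeParser class is hypothetical.
import java.util.OptionalInt;

public class AgeParser {

    // Fragile version: assumes happy-path input. Throws NumberFormatException
    // on non-numeric input and NullPointerException on null input.
    public static int parseAgeFragile(String input) {
        return Integer.parseInt(input.trim());
    }

    // Robust version: validates the input and signals failure explicitly,
    // leaving the caller to decide how to recover.
    public static OptionalInt parseAge(String input) {
        if (input == null) {
            return OptionalInt.empty();
        }
        try {
            int age = Integer.parseInt(input.trim());
            // Reject values that are syntactically numbers but semantically nonsense.
            return (age >= 0 && age <= 150) ? OptionalInt.of(age) : OptionalInt.empty();
        } catch (NumberFormatException e) {
            return OptionalInt.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseAge("42"));        // OptionalInt[42]
        System.out.println(parseAge("forty-two")); // OptionalInt.empty
        System.out.println(parseAge(null));        // OptionalInt.empty
    }
}
```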

I consider that the characteristics above can be summed up as software that is genuinely useful or entertaining and a delight to use. For example, if the software is not truly beneficial to someone then it is probably not doing what it ought to be doing, not doing it fast enough, or crashing too often. Can the intended users of the software easily learn and remember how to use it? Can they do frequently needed tasks in a small number of steps? Do frequent error situations mean a lot of extra work for users? And so on.

Step 2: Characteristics of High Quality Source Code

  • Modular – is the software organised into logical chunks rather than a single monolithic heap of spaghetti code?
  • Loosely coupled – is the number of dependencies between modules kept reasonably low? (see the sketch after this list)
  • Highly Cohesive – does each module provide a small number of highly-related features?
  • Standards Compliant – does the code comply with agreed design and coding standards?
  • Simple – is there any unnecessary complexity, cleverness, or over-complication in the code? Can necessary complexity be hidden behind simpler interfaces and facades, or encapsulated within better abstractions?
  • Reusable – are the features provided by key modules used by all the other modules needing that functionality?
  • Extensible – is it easy to add new features to the software?
  • Well-documented – does the code have useful comments? Is there enough additional functional, design, and user documentation?
  • Compatible – does the software play well with other standards-compliant software?
  • Adaptable – can the software be used in different situations easily?
  • Portable – can the software be easily run in different environments?
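
As a hypothetical illustration of ‘loosely coupled’, ‘highly cohesive’, and ‘simple’, the Java sketch below shows a report generator that depends only on a small abstraction rather than on any concrete storage technology. All of the names are invented for this example:

```java
// Illustrative sketch of loose coupling: ReportGenerator depends only on the
// narrow OrderSource abstraction, not on any concrete storage technology.
// All names here are hypothetical.
import java.util.List;

interface OrderSource {
    List<Double> orderTotals();          // cohesive: one narrow responsibility
}

class InMemoryOrderSource implements OrderSource {
    private final List<Double> totals;
    InMemoryOrderSource(List<Double> totals) { this.totals = totals; }
    public List<Double> orderTotals() { return totals; }
}

class ReportGenerator {
    private final OrderSource source;    // coupled to the interface only
    ReportGenerator(OrderSource source) { this.source = source; }

    double grandTotal() {
        return source.orderTotals().stream().mapToDouble(Double::doubleValue).sum();
    }
}

public class CouplingDemo {
    public static void main(String[] args) {
        OrderSource source = new InMemoryOrderSource(List.of(9.99, 20.01));
        System.out.println(new ReportGenerator(source).grandTotal()); // 30.0
    }
}
```

Swapping in a database-backed or web-service-backed OrderSource would then be a change only at the point where the objects are wired together; ReportGenerator itself stays untouched, which is what keeping coupling low buys you.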

To me, these characteristics can be summed up as source code that is easy to understand and modify. Martin Fowler says in his book, Refactoring, that “any fool can write code that a computer can understand; good programmers write code that humans can understand”. And what cannot be communicated in the code should be clear from readily available documentation.
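
A hypothetical before-and-after sketch of what Fowler means: both methods below compute exactly the same result, but only the second communicates its intent to a human reader. The domain (compound interest) is invented for illustration:

```java
// Illustrative sketch: identical behaviour, very different readability.
public class ReadabilityDemo {

    // "Code a computer can understand": correct, but opaque to humans.
    static double f(double a, int b) {
        return a * Math.pow(1.05, b);
    }

    // "Code that humans can understand": the names carry the meaning.
    static final double ANNUAL_INTEREST_RATE = 0.05;

    static double balanceAfterYears(double principal, int years) {
        return principal * Math.pow(1 + ANNUAL_INTEREST_RATE, years);
    }

    public static void main(String[] args) {
        System.out.println(f(1000.0, 10));                 // 1628.89...
        System.out.println(balanceAfterYears(1000.0, 10)); // same result
    }
}
```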

Step 3: Kinds of QA Activity and Technique

Most software development quality assurance techniques fall into one of four categories:

  1. Manual testing (MT) – includes ad hoc testing, exploratory testing, informal bug hunts, and user interface walkthroughs, in addition to the formal execution of manual test cases.

  2. Automated testing (AT) – the execution of test scripts using tools like the xUnit family, Spock, Geb, Selenium, Jasmine, Protractor, etc. (a minimal example follows this list).

  3. Static analysis (SA) – includes source code and compiled code analysis tools such as compiler warning levels, SonarQube, PMD, Checkstyle, FindBugs, etc.

  4. Peer review and inspections (PR) – visual inspection of requirements, plans, designs, code, etc. by other members of the team or by experts from outside the team.
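
For teams unfamiliar with the tooling named in item 2, the following is a minimal sketch of what an automated test looks like using JUnit 5 (one member of the xUnit family). The class under test is invented and inlined purely to keep the example self-contained:

```java
// Minimal JUnit 5 sketch of an automated test; names are illustrative only.
// In a real project the code under test would live in its own production class.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class PriceCalculatorTest {

    // Trivial code under test, inlined to keep the sketch self-contained.
    static double applyDiscount(double price, double percent) {
        return price * (1.0 - percent / 100.0);
    }

    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90.0, applyDiscount(100.0, 10.0), 0.0001);
    }

    @Test
    void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(42.0, applyDiscount(42.0, 0.0), 0.0001);
    }
}
```

Tests like these are cheap to run on every build, which is what makes the automated category different in kind from manual testing.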

Step 4: Checking Each Quality Attribute

  • Genuinely Useful or Entertaining:  MT
    There is no automated way to tell if a piece of software is genuinely useful or entertaining. We hope that the software will prove useful if it functions as we understand it to be required, and if the specified requirements truly reflect the needs of the end-users.
  • A Delight to Use:  MT
    There is no automated way to tell if a piece of software is actually delightful or even easy to use.
  • Correct:  AT, MT, SA, PR
    All four methods can contribute in different ways to improving correctness. Various studies over the last four decades have repeatedly shown that static analysis and peer review find different defects than testing does.
  • Reliable:  AT, MT, SA, PR
    All four methods can contribute in different ways to improving reliability.
  • Robust:  AT, MT, SA, PR
    All four methods can contribute in different ways to improving robustness.
  • Consistent:  MT, PR
    Currently, there is no automated way to tell you if a piece of software does things consistently.
  • Fast:  AT, MT
    SA and PR cannot measure actual performance; only running the software can.
  • Efficient:  AT, MT, SA, PR
    SA and PR cannot measure actual efficiency, but they can identify some typically inefficient coding constructs.
  • Secure:  AT, MT, SA, PR
    There are certain SA tools designed to check for specific coding constructs that may cause security problems.
  • Simple:  MT, SA, PR
    Only MT can decide if a user interface is simple to use or not. SA can provide measures of code complexity, but only PR can decide whether that complexity is appropriate (see the sketch after this list).
  • Easy to Understand:  SA, PR
    SA tools can identify specific contributing issues but only actual peer review of the code can determine if it is easy to understand.
  • Modular:  PR
    You cannot test or automatically analyse for modularity.
  • Loosely Coupled:  SA, PR
    SA can measure coupling but only PR can decide if it is appropriate or not.
  • Highly Cohesive:  SA, PR
    SA can measure cohesion but only PR can decide if it is appropriate or not.
  • Standards Compliant:  SA, PR
    SA should be used for code layout and other basic coding standards, leaving PR for things that SA cannot fully check, such as meaningful variable and method names. It is important to agree the standards that SA will check, and to configure it appropriately, before using it. It is also wise to agree standards, such as code layout rules, that can be supported easily within whatever IDEs your team prefer to use.
  • Reusable:  PR
    You cannot really test or automatically analyse for reusability.
  • Extensible:  PR
    You cannot test or automatically analyse for extensibility.
  • Well-documented:  SA, PR
    SA can check things like Javadoc contents for the right types of entry, but the value of comments and documentation is something that only a review can determine.
  • Compatible:  AT, MT
    Compatibility is usually measured by compliance with a specific set of test cases, automated and/or manual.
  • Adaptable:  PR
    You cannot really test or automatically analyse for adaptability.
  • Portable:  AT, MT, SA, PR
    All four methods can contribute in different ways to checking portability.
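
To illustrate the ‘Simple’ and ‘Easy to Understand’ rows above, here is a hypothetical Java sketch of the kind of method a static analysis tool would flag for high cyclomatic complexity (a metric reported by tools such as SonarQube or PMD), alongside an equivalent version that a reviewer is far more likely to judge appropriate. The methods and shipping rules are invented for illustration:

```java
// Illustrative sketch: both methods behave identically, but a complexity
// metric scores the first much higher. All names are hypothetical.
public class ShippingRules {

    // Nested conditionals: harder to follow, higher measured complexity.
    static double shippingCostNested(double orderTotal, boolean premium) {
        if (premium) {
            return 0.0;
        } else {
            if (orderTotal >= 50.0) {
                return 0.0;
            } else {
                if (orderTotal >= 20.0) {
                    return 2.50;
                } else {
                    return 4.95;
                }
            }
        }
    }

    // The same rules as flat guard clauses: lower measured complexity,
    // and a reviewer can check each rule in isolation.
    static double shippingCost(double orderTotal, boolean premium) {
        if (premium) return 0.0;
        if (orderTotal >= 50.0) return 0.0;
        if (orderTotal >= 20.0) return 2.50;
        return 4.95;
    }

    public static void main(String[] args) {
        System.out.println(shippingCostNested(30.0, false)); // 2.5
        System.out.println(shippingCost(30.0, false));       // 2.5
    }
}
```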

Step 5: Improving the Ways We Work

If the team does not practise any form of peer code review, this exercise should convince them that it is something worth trying. If they are already doing code reviews, it is worth looking for ways to improve their effectiveness. Good use of static analysis tools should eliminate much of the tedious stuff from code reviews.
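
As a hypothetical sketch of that division of labour: a static analysis tool can mechanically flag every problem in the first method below, freeing reviewers to concentrate on questions like whether the second method’s name and Javadoc actually communicate its purpose. All names and rates are invented:

```java
// Illustrative sketch: what static analysis can catch versus what needs a
// human reviewer. All names and rates are hypothetical.
public class ReviewDivisionOfLabour {

    // Static analysis can flag all of these mechanically: the unused local
    // variable, the magic number, and the single-letter parameter name.
    static double calc(double x) {
        int unused = 0;               // flagged: unused local variable
        return x * 0.175;             // flagged: magic number
    }

    /**
     * Returns the VAT due on a net amount, at the standard rate.
     *
     * @param netAmount the amount before tax, in the account's currency
     * @return the tax due
     */
    static double standardRateVat(double netAmount) {
        final double STANDARD_VAT_RATE = 0.175;
        // A tool can check that the Javadoc above exists and has @param and
        // @return tags; only a reviewer can judge whether it is accurate.
        return netAmount * STANDARD_VAT_RATE;
    }

    public static void main(String[] args) {
        System.out.println(standardRateVat(100.0)); // 17.5
    }
}
```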

Ask the team how they can achieve a better balance of automated and manual testing. The answer to the question, “How much automated testing should we be doing?” is usually, “More than we are currently doing”.

End-users and project sponsors value quality attributes such as correctness, performance, and ease of use, but good development teams know that those aspects ultimately depend upon maintaining high quality source code, even though that is not visible to end-users and project sponsors. As Bertrand Meyer states in Object-Oriented Software Construction, “In the end, only external factors matter. … But the key to achieving these external factors is the internal ones”. This is especially true in agile, iterative development approaches, where a team is repeatedly building on results delivered in previous iterations and releases.

Prioritising effort to increase quality from one point of view at the cost of reduced effort from another is often a tough balancing act. Too much emphasis on externally visible quality at the cost of internal quality can lead to a build-up of technical debt that eventually undermines the ability to maintain externally visible quality. Conversely, over-emphasising internal quality without due emphasis on externally visible quality runs the risk of the software becoming less and less useful and relevant to the people paying for it.

For those working with microservices, you might want to adapt the exercise to incorporate the infamous twelve factors.

Conclusion

The various attributes of high quality software and source code need different combinations of tools, strategies, and techniques to monitor and improve them. No single technique or tool is sufficient. In particular, reliance on testing without any form of peer review leaves many quality attributes unchecked. Nevertheless, to avoid consuming too much time doing peer reviews, always select the appropriate level of formality for each piece of work (plans, requirements, design, code, and test cases), from a brief sanity check, through pair programming, to a formal inspection.
