UML modelling as part of an enterprise's software design infrastructure requires rules and guidelines. At its simplest, important model elements like classes, interfaces, and use cases should have at least a short textual description explaining what they are and why they exist. When models are used to generate elements of a service-oriented architecture, or to generate source code for a particular execution framework, the rules and guidelines become more complex: they may mandate a particular package structure, the existence of classes with certain specific stereotypes, well-formed state machine diagrams, and so on.
Manually checking that a model complies with these rules and best practices is a considerable chore and overhead. This is where Together's model audits and metrics can make life easier.
Together's model audits are Object Constraint Language (OCL) expressions that evaluate to either true or false for a particular type of model element called the audit's context. When invoked, each audit is run against all the elements in the model that match the context of the audit. Each time the execution of an audit evaluates to false, an entry is made in a list of failed audits (or audit hits, as they are sometimes called). All the audit failures are displayed in Together's Model Audit view (see figure 1).
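To give a flavour of what such an audit looks like, the following is a minimal sketch of an OCL invariant that flags any class lacking a textual description. The context type and the `ownedComment` property follow the standard UML 2 metamodel; the exact names exposed by Together's own metamodel may differ, and the invariant name `ClassIsDocumented` is illustrative.

```ocl
-- Hypothetical audit: every class should carry at least one comment
-- (its textual description). Elements for which this invariant is
-- false would appear as hits in the Model Audit view.
context Class
inv ClassIsDocumented:
    self.ownedComment->notEmpty()
```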
Figure 1: Together 2007 Model Audit View
Each entry in the list displays a single-line description of the audit, the model element that fails the audit, and the severity of the failure: an error because a rule has been broken, a warning if a guideline has not been followed, or just an information message highlighting some poor cosmetic aspect of the model. From each entry in the audit results, it is easy to navigate to the offending model element in either the Together Model Navigator view or the diagram editor.
The entries can be sorted and grouped by the various columns in the view, and the results can be exported to a comma- or tab-separated file, an HTML file, or an XML file that can be reloaded later if desired.
Together's model audits can be run against one or more projects, a subset of a project, or even an individual element.
The model audits can also be run from the command line and, therefore, can form part of a regular, frequent or continuous build process.
Whereas model audits are pass/fail checks, model metrics are counts. A model metric consists of an OCL expression that returns an integer, usually representing the number of entries in some collection of model elements. The metric reports the integer result and indicates whether it falls above or below the acceptable range of values for that measure.
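As a sketch, a simple size metric might count the operations declared directly on a class. Again, `ownedOperation` is the UML 2 metamodel property; Together's metamodel naming may vary, and the acceptable range (say, flagging classes with more than twenty operations) would be configured alongside the expression rather than within it.

```ocl
-- Hypothetical metric: number of operations a class declares.
-- A result above the configured upper bound would suggest the
-- class is taking on too many responsibilities.
context Class
self.ownedOperation->size()
```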
Audits check whether a model obeys the rules and guidelines. Metrics check that a model remains manageable, indicating areas where it may be growing too tightly coupled, too complex, or losing cohesion.
Together's Model Metrics view lists the results of running the metrics against whole projects, packages, or individual model elements. As with the audits, there are various navigation, sorting, and export options available in the Model Metrics view. Hovering over a column heading with the mouse pointer shows a brief description of the metric in a tool tip and, if of interest, the OCL expression for the metric can be revealed in the bottom part of the view.
Figure 2: Together 2007 Model Metrics View
Just like the model audits, the model metrics can also be run from the command line.
Defining Model Audits and Metrics
Together 2007 enables new model audits and metrics to be defined. This is done via the Eclipse preferences framework. Together provides for export and import of the model audit definitions so that a set of audits and metrics can be defined and distributed to a group of users (see figure 3).
Figure 3: Together 2007 Audit Definitions