Having said that, it is interesting to note that a desired state offers very little to measure until it is actually achieved! Hence most Quality Metrics aim at measuring our adherence to the defined path (the Process). But do such retrospective measures prove effective in ensuring, let alone enhancing, quality? Could we actually be looking at the whole problem the wrong way round?
I believe a direct measure of the reliability of the planning done to achieve a desired state would be more effective than a measure of adherence. I do not intend to downplay Tracking, but adherence to a faulty plan would logically lead to an undesired, if not faulty, result! In short, the quality achieved at the end of an activity can only be reliably determined if the corresponding planning artifacts are reliable. The following example should give you a clearer view of this approach.
Assume a scenario where billing is effected on the basis of estimates (ASSUME!). It would naturally follow that teams would attempt to provide reliable estimates to guarantee steady billing. Such reliable estimates would demand a well-defined and accurate means of deriving the size of a project; such objective sizing would in turn demand a clearly defined scope; and the stress on effective Scoping would subsequently translate into better Requirement Specifications.
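To make the chain from scope to size to estimate concrete, here is a minimal sketch of what an objective, size-based estimation model might look like. Everything in it (the function name, the productivity rate, the complexity multipliers) is a hypothetical illustration, not a prescribed industry model.

```python
# Hypothetical productivity baseline: hours of effort per size unit
# (e.g. per function point), agreed upon by both parties in advance.
HOURS_PER_SIZE_UNIT = 6.0

# Hypothetical complexity multipliers, also agreed in advance so the
# estimate is reproducible rather than negotiable project by project.
COMPLEXITY_FACTOR = {"low": 0.8, "medium": 1.0, "high": 1.3}

def estimate_effort(size_units: float, complexity: str) -> float:
    """Derive an effort estimate (in hours) from an objective size measure."""
    return size_units * HOURS_PER_SIZE_UNIT * COMPLEXITY_FACTOR[complexity]

# A clearly defined scope yields the size; the size yields the estimate.
print(estimate_effort(120, "medium"))  # 720.0 hours at nominal complexity
```

The point is not the particular numbers but that, once both parties fix the model, the estimate follows mechanically from the measured size, so any dispute shifts to the scope and the sizing, exactly where it is most useful.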
I know what you’re thinking: teams will end up trying to bloat estimates while the client makes it a point to shrink them to the limit. But I believe that this very conflict could result in the creation of an objective and well-defined estimation model approved by both parties. Also, such an approach would enhance not only the reliability of estimates, but also adherence, as any positive variance would go unbilled.
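The billing rule implied above can be sketched in a few lines: effort beyond the agreed estimate is absorbed by the team rather than passed on to the client, which is what makes adherence self-enforcing. The function name and rate below are hypothetical illustrations.

```python
HOURLY_RATE = 100.0  # hypothetical agreed billing rate

def billable_amount(estimated_hours: float, actual_hours: float) -> float:
    """Bill the lesser of estimated and actual effort: positive variance
    (actual exceeding the estimate) goes unbilled."""
    return min(estimated_hours, actual_hours) * HOURLY_RATE

print(billable_amount(720, 800))  # 72000.0: the 80-hour overrun goes unbilled
```

Under such a rule, bloating an estimate is checked by the client's review of the sizing, while underestimating is checked by the team's own unbilled overruns, pushing both parties toward an honest model.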