Auditor Roles Project Revises Sample Criteria for Relevance and Reliability of Performance Information

June 2010

PAUL EPSTEIN, Principal
EPSTEIN & FASS ASSOCIATES

Auditors can make valuable contributions to improving government performance management by helping government agencies improve the relevance and reliability of performance measures and data. Public officials can then have confidence that they are using the right performance measures to make decisions and that their decisions are based on reliable data, and citizens can have confidence in public performance reports. The independence and data skills of auditors make them uniquely qualified to assess the relevance and reliability of performance information. This practice is central to Role 2 of the Framework of Auditor Roles and Practices in Performance Measurement: “Assessing the quality of performance information or performance reports.” It can also come into play, in different ways, in all five auditor roles of the framework, so it is crucial that auditors use sound criteria for assessing relevance and reliability.

In 2007, the Auditor Roles Project developed sample criteria drawn from sources such as the Governmental Accounting Standards Board (GASB) and the American Institute of Certified Public Accountants (AICPA), and offered them for auditors’ consideration and discussion in numerous training sessions across North America. Auditor feedback was positive, indicating that these were practical, useful criteria auditors could apply directly or adapt for the entity they audit. However, the Auditor Roles Project Principals recently re-examined these sample criteria and decided to revise them to be clearer, more practical, and “ready to use.”

 

Relevance Applies to Sets of Measures, Reliability to Individual Measures and Data

One of the first clarifications was to make the following distinction:

  • Relevance applies to sets of measures. Multiple measures are needed to adequately represent the performance of almost any government agency. One cannot assess “relevance” by considering each measure individually.  Relevance criteria can only be applied to the entire set of measures used to judge the performance of a program, service, or agency.
  • Reliability applies to individual measures and the data for each measure. Unlike relevance, reliability criteria should be applied individually to each measure assessed and to the data for each of those measures.

These distinctions are reflected in the specific new sample criteria that follow.

 

New Sample Criteria for Assessing the RELEVANCE of Performance Measures

The measures for a program, service, or agency should be:

  • Aligned: Linked to mission, goals, and objectives.
  • Complete: Include essential aspects of performance.
  • Useful: Timely, understandable, comparable, responsive to change, and meeting the broad needs of users.
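
To give a sense of how the relevance criteria might be applied systematically, the following minimal Python sketch checks two of them for a hypothetical set of measures: that every measure links to at least one stated goal (Aligned) and that every goal is covered by at least one measure (Complete). The goals, measures, and linkages are invented for illustration only and are not drawn from any actual audit; real assessments of relevance also require auditor judgment about whether the set of measures captures the essential aspects of performance and meets users’ needs.

    # Illustrative sketch only: checking a SET of measures against stated goals.
    # All goal and measure names below are hypothetical examples.

    goals = [
        "Reduce emergency response time",
        "Improve resident satisfaction",
        "Control cost per call",
        "Ensure equitable service across neighborhoods",  # deliberately left unmeasured
    ]

    # Each measure is mapped to the goal(s) it is intended to inform.
    measures = {
        "Average 911 response time (minutes)": ["Reduce emergency response time"],
        "Resident satisfaction survey score": ["Improve resident satisfaction"],
        "Cost per call handled": ["Control cost per call"],
    }

    # Aligned: every measure should link to at least one stated goal.
    unaligned = [name for name, linked in measures.items() if not linked]

    # Complete: every goal should be covered by at least one measure.
    covered = {goal for linked in measures.values() for goal in linked}
    uncovered = [goal for goal in goals if goal not in covered]

    print("Measures linked to no goal:", unaligned or "none")
    print("Goals with no supporting measure:", uncovered or "none")

Run against this hypothetical set, the check flags the equity goal as having no supporting measure, which is exactly the kind of gap the Complete criterion is meant to surface; the same mapping can just as easily be kept in a worksheet during a manual review.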

 

New Sample Criteria for Assessing the RELIABILITY of Performance Measures and Data

Each measure and its data should be:

  • Accurate: Computed correctly, neither overstated nor understated, appropriately precise.
  • Valid: Corresponds to the phenomena reported, correctly defined, data and calculation comply with the definition of the measure, data are unbiased.
  • Consistent: Consistent with previous reporting periods, controlled by adequate systems.
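
As a rough illustration of how the reliability criteria might be tested against data, the following Python sketch recomputes a hypothetical pass-rate measure from source records and compares it with the reported figure (Accurate), counts only records that fall within the reporting period as the measure’s assumed definition requires (Valid), and flags a large unexplained swing from the prior period (Consistent). All records, reported figures, tolerances, and thresholds here are invented assumptions; actual audit tests would follow the entity’s own definition of the measure and the audit organization’s sampling and testing standards.

    # Illustrative sketch only: simple reliability checks on a hypothetical
    # "inspection pass rate" measure. Every value below is made up.
    from datetime import date

    # Hypothetical source records: (inspection date, passed?)
    records = [
        (date(2010, 4, 2), True),
        (date(2010, 4, 18), False),
        (date(2010, 5, 7), True),
        (date(2010, 5, 30), True),
        (date(2010, 6, 12), True),
    ]

    period_start, period_end = date(2010, 4, 1), date(2010, 6, 30)

    # Valid: per the assumed definition, only inspections dated within the
    # reporting period are counted.
    in_period = [passed for d, passed in records if period_start <= d <= period_end]

    # Accurate: recompute the pass rate and compare it with the reported figure.
    recomputed_rate = 100.0 * sum(in_period) / len(in_period)
    reported_rate = 80.0  # hypothetical figure taken from the agency's report
    print("Recomputed:", round(recomputed_rate, 1), "/ Reported:", reported_rate)
    print("Within 0.5-point tolerance:", abs(recomputed_rate - reported_rate) <= 0.5)

    # Consistent: flag a large unexplained swing from the prior reporting period.
    prior_period_rate = 78.0  # hypothetical prior-quarter value
    print("Swing from prior period exceeds 15 points:",
          abs(recomputed_rate - prior_period_rate) > 15)

Checks like these do not by themselves establish reliability; they complement walkthroughs of the data-collection process and tests of the controls that keep the data consistent from period to period.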

Improvements over Previous Sample Criteria

Auditors who attended project training sessions since 2007 will recognize many of the same ideas from the project’s previous sample criteria. However, the new criteria are more concise, organized differently, and reworded for clarity and to avoid misinterpretation. For example:

  • No ideas were dropped from the previous relevance criteria, but the first criterion was changed simply to “Aligned” to make it clear that alignment with goals and objectives is a necessary aspect of relevant measures.
  • The previous relevance criteria had aspects of “usefulness” spread over four different criteria. In the new criteria, these aspects are combined to form the definition of “Useful” as a major criterion, to make it clear that the usefulness of the measure is an important test of relevance, and to provide several ways to assess usefulness.
  • The previous reliability criteria emphasized “Impartial/Fair” as a major criterion. While impartiality and fairness are important, they really have to do with how measures and data are reported, not with whether they are reliable.  Two aspects of the former “Impartial/Fair” criterion were kept (“unbiased” and “appropriately precise”) but were made part of other criteria.
  • The new reliability criteria emphasize “Valid” as a major criterion, and make the previous criterion “Correctly Defined” part of the definition of “Valid.”  It is very important that a measure be correctly defined, and checking data and calculations against the definition is an essential step in any test of reliability.  But “validity” is really the bigger idea that definitions relate back to.

Numerous Examples and Tools for Assessing Relevance and Reliability Available

Auditors from across North America have shared their experiences in assessing the relevance or reliability of performance information with the Auditor Roles Project, and many stories of their exemplary practices are available at this website.  They have also been generous in sharing their guidance papers, audit steps and programs, and other auditor tools for assessing relevance or reliability.  Also, if you are interested in assessing relevance and reliability, you are likely to be interested in our earlier article on how auditing performance information adds value to a broader performance auditing practice.