Guidance: Essential Perspectives for Improving Performance Measures: Users and Their Uses of Performance Information

This web-friendly version of the guidance article was edited for faster online reading, with links added in various places to additional details in a “Supplement” on another web page. You can also download a longer PDF version of the article with all details integrated into one document, as well as a free auditor tool (DOCX) to help put this guidance to use.

PAUL EPSTEIN, Principal, AUDITOR ROLES PROJECT, June 2015

There has been growing interest and effort around the world, from local governments to the United Nations, in increasing the usefulness of public performance information. Government auditors have played an important role in this quest over the years, especially by auditing the relevance or reliability of performance measures and data. This practice is central to Role 2 of the Framework of Auditor Roles and Practices in Performance Measurement: “Assessing the quality of performance information or performance reports.”

 

The Most Essential Step in Assessing Usefulness Is to Ask the Users

Clear criteria for assessing the relevance and reliability of performance information have been established, in which usefulness is a major criterion of relevance. There are many steps auditors can take to assess and help improve the usefulness of performance information, relating to several possible audit objectives. However, the most basic step of all, and essential to any assessment of “usefulness,” is to ask the intended users. Intended users’ perspectives will help auditors learn specific user needs for information, how well those needs are met, and barriers to using data most effectively. Auditors can also use the information they capture on user perspectives to determine cost-effective ways to better meet those needs, so intended users can make more effective use of performance information.

 

Determining Which Internal and External Intended Users to Ask

To develop a manageable sample of people to consult, determine groups of intended users of performance information. For internal performance reporting systems, intended users may be various managers and field staff. Other systems may be intended for non-executive elected officials (e.g., city council members, state or provincial legislators), their staff, and the public. A comprehensive system is likely to provide a broad range of performance information, with some performance measures intended primarily for “front line” internal users (e.g., operational indicators) and some intended primarily for higher-level managers and external users (e.g., high-level outcomes, costs, and revenue). Except for internal systems of very small agencies, it is rarely possible to consult every intended user in a single audit. So an auditor can take a practical sampling approach, consulting a reasonable mix of user interests that includes a “cross section” of intended users.

Internal Users

Internally, a cross section of intended users can involve, for example, at least a few front-line employees, several levels of supervision and management, perhaps up to a department head or chief executive, and staff from cognizant oversight agencies such as a budget office.

External Users

Externally, a cross section of intended users can include a sample of legislators serving on committees that oversee the agency audited and on a committee with a potentially broad interest in using performance data, such as an appropriations or budget committee. (If the legislature is a small city council or county board, it may be practical to consult all members.) If members of the public are intended users, it is useful to identify groups with significant interests in the performance of the agency audited. For example, neighborhood groups would be useful to consult about performance measures of a neighborhood services agency. For some agencies, it would be wise to extend consultation beyond service customers to groups with other interests in how the agency performs, such as businesses regulated by environmental or consumer affairs agencies. The idea is not to be exhaustive, but to engage people or groups with a reasonable mix of significant interests related to the performance of the agency audited. If the audit covers an entire multi-agency, general-purpose government, rather than just one or two agencies, an auditor can identify a sample of “most likely” users representing a variety of interests, though not all, as described here.

 

Collecting Data on “Usefulness” from Intended Users

Each intended user can respond to performance information in different ways, have different uses for the information, and have different reasons for using the data or not. So, ultimately, there is no substitute for one-on-one interviews, or in some cases small group interviews, with intended users. But if the sample of intended users is too large to interview them all personally within the scope of an audit, other techniques, such as focus groups or surveys, may also be used to broaden the base of users consulted. For example, a cross section of management, a few key legislators, and members of two or three civic groups known to use performance data may be targeted for interviews. Members of other groups who are potential users of performance information could be invited to focus groups to learn whether there are users and uses in the community beyond those already known, and how well these intended users are served by the performance information. The focus groups and interviews may help determine common existing uses, barriers to use, and changes that could make performance information more useful to various users, as described in these examples.

If the audit scope for this phase allows it, consultation can be broadened further through a survey of additional intended users or user groups, especially to build on interview and focus group results and obtain additional qualitative information from a wider range of intended users.
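Putting these pieces together, a consultation plan for a single audit might look something like the following sketch. The groups and methods shown are illustrative only; the right mix depends on the audit scope and the intended users actually identified.

Intended user group | Consultation method
Front-line staff and supervisors in the agency audited | Individual interviews
Department head and budget office staff | Individual interviews
Legislators on oversight and budget or appropriations committees | Individual or small group interviews
Civic groups known to use performance data | Interviews or focus groups
Other community groups that are potential users | Focus groups
Remaining intended users or user groups | Survey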

 

In Interviews, Ask Users to Document or Demonstrate Their Uses

To learn how performance information is really used, and how valuable it is for each use, in addition to asking users to describe their uses, ask them to SHOW YOU how they use it.  You can build this into each user interview, and alert intended users in advance that if they do use performance information, you’d appreciate it if they would demonstrate or document their use in some way. You can suggest in advance two or three options for users to show you their uses. For example, an audit office’s invitation to a user might include:

“If you use performance information, we would appreciate it if you can show our auditor how, by either:

  • Providing backup documentation showing examples of analyses or decisions based on the information.
  • Or providing a ‘walk-through’ of computer screens or pages of reports, citing specific measures and how you use them for specific purposes.”

In this part of an interview, try for a level of detail that makes each use very clear (e.g., if this measure is 15% above target, we find out why and may adjust schedules to improve timeliness; if this measure is 10% below target, we look for practices to change to improve quality).

This part of each interview is intended both to substantiate actual uses for your audit report and, just as important, to help you, as an auditor, thoroughly understand each user’s uses so you can identify ways to improve the usefulness of performance information.

 

Analyzing Intended User Information to Find Ways to Increase Usefulness

After information has been collected from at least an initial set of users, it is useful to start summarizing highlights from interview notes, as well as any focus group notes or survey results received to that point. In particular, the highlights for each user or user group should summarize:

  • Uses: Both actual uses and other desired uses (additional ways they would like to use performance information)
  • For Each Actual Use: Summary highlights of how they actually use performance information (e.g., Use: Improve service quality; How it’s done: Review quarterly data with supervisors and make minor adjustments to improve quality)
  • Barriers to Other Desired Uses: Why have they not yet used performance information this way?
  • How well their needs for performance information are met and why: Does performance information adequately serve the intended uses? If not, why not?
  • How to improve usefulness: What changes, if any, may lead to more effective use of performance information by the user or user group?

You can enter summarized highlights for each user group into a table to help analyze results across users, such as the table in this free downloadable audit tool (DOCX) provided by the Auditor Roles project, complete with an example and suggestions for analyzing and using results. You might start building the table after a few interviews, and use what you learn from the partial analysis to inform some of your questions in later interviews, focus groups, or surveys. When you’ve obtained and entered information from all intended users on your list, you can use such a table to do a broad analysis across users and draw valuable insights for your audit.
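For illustration only, such a table might be organized around the highlight categories above. The downloadable tool has its own layout, with a worked example and suggestions for use; the row below is hypothetical.

User or user group | Actual uses (and how) | Other desired uses | Barriers to desired uses | Needs met? Why or why not | Changes to improve usefulness
Agency supervisors (hypothetical) | Improve service quality: review quarterly data with supervisors; make minor adjustments | Compare quality across districts | Data not disaggregated by district | Partly; quarterly reports arrive too late for scheduling decisions | Disaggregate by district; report key measures monthly
(one row per user or user group consulted)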

 

Considering What to Include in Findings and Recommendations

After analyzing information collected from all intended users consulted, an auditor can develop findings and recommendations concerning how to improve the usefulness of performance information. Here are some suggestions for what to consider when doing so:

Give the Organization Audited “Credit” for Substantiated Uses in Audit Findings

It’s important to report findings of real uses that have been substantiated, to indicate that the organization is getting some value from performance information.  That would suggest that the cost and effort to collect and report data may be worthwhile, even if the full value of the data has not been realized. Limitations that make these uses less effective than they could be may be cited within these findings or reported separately as other findings in an audit report.  Existing substantiated uses can be summarized into a few (e.g., three to five) major types of use in a report. Examples of major types of uses include:

  • Performance accountability (e.g., through targeting and monitoring performance vs. targets)
  • Policy development and improvement, including budget development and performance budgeting (e.g., involving trade-offs between budgeted resources and targeted performance levels)
  • Resource allocation within existing budgets, such as determining where to deploy field personnel, when to schedule personnel, or which facilities to target for specific types of repair or maintenance
  • Other performance monitoring and improvement uses, such as use in processes to improve outcomes, quality, timeliness, customer satisfaction, efficiency, cost savings, or revenue increases.

Assuming the audit office wants to encourage more use of performance measurement, not less, it is usually a good strategy to give agencies positive “credit” for all their substantiated uses in report findings, even if only minimal uses are found. Other potentially valuable but unrealized uses would then be presented in report findings as “opportunities for improvement” that “build on existing uses,” rather than as negative findings, for reasons discussed here.

Keep the Likely Cost-Effectiveness of Changes to Performance Information in Mind When Reporting Findings and Recommendations

Any potential change that can significantly improve the usefulness of performance information to some users can be worth citing in audit findings, preferably grouped in some way, such as by type of use, as noted above, or by type of issue (e.g., timeliness of reporting, data disaggregation). When suggesting how to act on findings, keep in mind that there is always a cost to change. So draw on your analysis of user responses to develop ideas on the relative cost of each possible change and its relative value in improving decisions, accountability, and performance. Then, however many changes you recommend, be sure to point out which may be most cost-effective and why. That can help the organization audited set priorities for improvement.
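For illustration only, one simple way to organize this part of the analysis is a table like the sketch below. The changes, scales, and judgments shown are hypothetical, not findings from any actual audit.

Possible change | Users who would benefit | Relative cost | Relative value | Basis for priority
Report selected measures monthly rather than quarterly (hypothetical) | Field supervisors and managers | Moderate | High | Supports scheduling and resource allocation decisions
Disaggregate results by district (hypothetical) | Legislators, neighborhood groups | Low | High | Removes a barrier to accountability and policy uses
Add new customer satisfaction measures (hypothetical) | Department head, civic groups | High | Moderate | Desired use exists, but collection cost is significant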

The supplement to this article suggests several things to look for and consider in determining cost-effective changes. And the downloadable audit tool (DOCX) noted above for analyzing user information includes a detailed example, with summary information from eight user groups, and draws on that information to illustrate several considerations for developing findings and recommendations.

 

Numerous Examples and Tools Available for Assessing Relevance and Reliability

In addition to this article and the related tool, auditors from across North America have shared with the Auditor Roles project their experiences in assessing the relevance or reliability of performance information, and many stories of their exemplary practices are available at this website. They have also been generous in sharing their guidance papers, audit steps and programs, and other auditor tools for assessing relevance or reliability. Also, if you are interested in assessing relevance and reliability, you are likely to be interested in our earlier articles on how auditing performance information adds value to performance auditing and on criteria for assessing the relevance and reliability of performance information.