Morley, Elaine, Harry P. Hatry, and Elisa Vinson. Outcome measurement in nonprofit organizations: Current practices and recommendations. Washington, DC: Independent Sector, 2001.


KEY RECOMMENDATIONS

Types of Outcome Information Collected by Nonprofit Organizations

  1. Regularly (at least annually) collect and tabulate data on at least one outcome for each program or service; it is usually preferable to collect data on more than one outcome. Aggregate the data into numerical indicators by expressing each outcome indicator as the number or percent of clients achieving a specific result (a brief sketch of this aggregation follows this list). Aggregating data across clients makes the data more useful, for example by enabling organizations to track changes over time, and aggregated data are also easier to communicate to external audiences.
  2. Attempt to collect information on the condition of clients both at the end of services and some time after services have been completed in order to track a program’s results over time. Clients, family members, staff, or trained observers can often provide information on client condition. Organizations should also consider following up on clients who have dropped out of their programs.
  3. Collect information on outcomes that reflect customer satisfaction with overall services and with specific aspects of service quality.
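The aggregation recommended in item 1 can be illustrated with a minimal sketch. The field name "achieved_outcome", the sample records, and the years below are hypothetical placeholders, not the report's method; an organization would substitute its own client records and outcome definition.

```python
# Minimal sketch of aggregating client-level records into a numerical
# outcome indicator (item 1). Field names and sample data are hypothetical.

def outcome_indicator(client_records):
    """Return the number and percent of clients who achieved the outcome."""
    total = len(client_records)
    achieved = sum(1 for record in client_records if record["achieved_outcome"])
    percent = 100.0 * achieved / total if total else 0.0
    return achieved, percent

# Tabulating the indicator annually lets the organization track changes over time.
records_by_year = {
    2000: [{"achieved_outcome": True}, {"achieved_outcome": False}],
    2001: [{"achieved_outcome": True}, {"achieved_outcome": True}],
}
for year in sorted(records_by_year):
    count, percent = outcome_indicator(records_by_year[year])
    print(f"{year}: {count} clients ({percent:.0f}%) achieved the outcome")
```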

Data Collection Procedures for Measuring Outcomes

  1. For most health and human services organizations, client surveys should be considered a primary means of obtaining information on both client condition and client satisfaction with services.
  2. When surveying clients, organizations should take steps to encourage response in order to achieve adequate response rates. Common practices to improve response rates include multiple mailings of questionnaires, multiple follow-up phone calls, and incentives for completing the questionnaire. A response rate of at least 50 percent is generally considered adequate. To obtain adequate representation, organizations should survey all of their participants or a reasonable random sample.
  3. Data collection instruments should be tested when they are new or when they are being used with a new type of respondent for whom the instrument may not have been designed. Use a pilot test to determine whether respondents similar to the target audience understand the wording of questions, as well as whether the questions measure the outcomes that the organization is attempting to measure.
  4. Organizations providing direct services to clients should, when possible, maintain records on each client, including demographic characteristics, types and amounts of program services provided, beginning status or condition levels, progress made during the program, and outcomes after the program. This will enable the agency to develop outcome information that can help the agency continually assess the outcomes achieved for different types of clients and for each of its service approaches.
  5. Organizations seeking to make long-lasting improvements should collect post-service information on clients or environmental conditions three, six, nine, or twelve months after program completion. Twelve-month (or later) follow-ups are preferable because they provide better evidence that the organization’s help was enduring. Post-service condition information should, when possible, be compared with similar information obtained at clients’ entry in order to obtain indicators such as the number and percent of clients whose condition improved substantially (a brief sketch of this comparison follows this list). To make follow-ups feasible, organizations may take such steps as keeping contact information for clients up to date (for example, by verifying the information each time the client is in contact with the agency) and placing more emphasis on client “after-care” so that client status is monitored periodically.
  6. Use volunteers or the contributed time of professionals to reduce the labor costs associated with various aspects of outcome measurement.
  7. Use mail survey questionnaires for client surveys, when feasible, rather than telephone or in-person interviews. Mail surveys, even after multiple mailings, are an inexpensive way to collect information about changes in client conditions and about satisfaction with services.
  8. Keep questionnaires and other data collection instruments simple, especially when beginning outcome measurement. Organizations are often tempted to continually add data items to be collected, but doing so may reduce client response rates and overly tax an agency’s ability to process and analyze the data. Wait until the agency has gained experience and has resources available to handle the extra information before adding items to data collection instruments.
  9. Take appropriate steps to maintain client confidentiality. For data collection procedures that require participation by clients, especially when information on sensitive topics is sought, or when data are obtained from children, it may be necessary to obtain consent from clients or their parents.
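The entry-versus-follow-up comparison described in item 5 reduces to simple arithmetic over matched client records. The sketch below is a hypothetical illustration under assumed names and thresholds, not the report's procedure; the scoring scale and the cutoff for "substantial" improvement would come from an agency's own assessment instrument.

```python
# Minimal sketch of comparing entry and follow-up data to compute the number
# and percent of clients who improved substantially (item 5). Scores, field
# names, and the improvement threshold are hypothetical assumptions.

SUBSTANTIAL_GAIN = 2  # assumed minimum score gain that counts as "substantial"

def improvement_indicator(clients):
    """Number and percent of followed-up clients whose condition improved substantially."""
    followed_up = [c for c in clients if c.get("followup_score") is not None]
    improved = [
        c for c in followed_up
        if c["followup_score"] - c["entry_score"] >= SUBSTANTIAL_GAIN
    ]
    total = len(followed_up)
    percent = 100.0 * len(improved) / total if total else 0.0
    return len(improved), percent

clients = [
    {"entry_score": 3, "followup_score": 6},     # improved substantially
    {"entry_score": 4, "followup_score": 5},     # improved, but below threshold
    {"entry_score": 5, "followup_score": None},  # lost to follow-up
]
count, percent = improvement_indicator(clients)
print(f"{count} followed-up clients ({percent:.0f}%) improved substantially")
```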

Analysis of Outcome Information

  1. Organizations should examine their outcome data for (a) time trends, (b) differences among major categories of clients (such as gender, age, race/ethnicity) as appropriate, (c) differences among similar service units or service procedures within the agency, (d) differences among similar organizations, and (e) differences from targeted values. Client groups whose outcomes are worse than others should be highlighted for possible action, as should units with outcomes poorer than those achieved by similar service units (a brief sketch of such breakouts and comparisons follows this list).
  2. Analyze program outcomes by reviewing information from more than one data source. Programs often survey multiple stakeholders or use multiple measures to assess similar outcomes. For example, youth development programs may survey the youths served, their parents, and their mentors to assess youths’ progress in a program.
  3. Someone on the agency staff should be responsible for providing an interpretation of the outcome data contained in each outcome report. Indicators whose values are substantially improved or better than expected should be highlighted. Values that are worse than expected should be examined for potential reasons and be identified as needing improvement. Provide explanations, even if only conjectural, as to the reasons for disappointing outcomes and for those that were unexpectedly good.
  4. Consider experimenting to find ways to improve outcomes, perhaps by using different service delivery approaches or by implementing small pilot programs and monitoring changes in indicator values against an unmodified program. When experimental changes are successful, make similar modifications throughout the program and monitor for positive results. If they are not successful, consider conducting additional experiments.
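The breakouts and target comparisons in item 1 of this list can be sketched in a few lines. Everything below (the grouping field "age_group", the sample data, and the target value) is a hypothetical illustration of the idea, not a prescribed method.

```python
# Minimal sketch of breaking out an outcome indicator by client category and
# comparing each group with a target value (item 1). All names and values
# are hypothetical.

from collections import defaultdict

TARGET_PERCENT = 70.0  # assumed target value for this indicator

def breakout_by(clients, category):
    """Percent of clients achieving the outcome, broken out by a client category."""
    groups = defaultdict(list)
    for client in clients:
        groups[client[category]].append(client["achieved_outcome"])
    return {
        group: 100.0 * sum(flags) / len(flags)
        for group, flags in groups.items()
    }

clients = [
    {"age_group": "under 18", "achieved_outcome": True},
    {"age_group": "under 18", "achieved_outcome": False},
    {"age_group": "18 and over", "achieved_outcome": True},
]
for group, percent in breakout_by(clients, "age_group").items():
    status = "below target" if percent < TARGET_PERCENT else "meets target"
    print(f"{group}: {percent:.0f}% achieved the outcome ({status})")
```

Groups falling below the target would be flagged for the kind of follow-up action item 1 recommends.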

Reporting and Use of Outcome Information

  1. Prepare regular written reports on outcome indicators. Reports should be clear and user-friendly. Avoid presenting data in formats that make information difficult to read. Do not crowd too much information on a page, especially in reports for external audiences. Make selective use of graphic presentations such as bar charts and line graphs. Clearly define each indicator where the data for it are presented, and identify the source and date for all data used and presented. Present explanatory information to help readers understand why some data are disappointing and to put unexpectedly good outcomes in perspective. Avoid using technical jargon.
  2. Distribute outcome data regularly to all personnel who are in a position to affect services. Provide at least quarterly reports for internal use. Hold “How are we doing?” meetings between managers and staff to discuss the data and identify reasons for indicator values, particularly those that are especially high or low. Use these meetings to brainstorm possible program modifications to help achieve better outcomes.
  3. Develop and implement action plans aimed at resolving problems indicated by the most recent outcome reports. When reviewing later outcome reports, assess whether the actions taken appear to have helped and make modifications as appropriate. Use breakouts (by key client demographic characteristics) and comparisons recommended in chapter four to help identify where programs are working well and where not so well.
  4. Promote accountability by reporting outcome information at least annually to customers, the general public, funders, and government agencies with responsibility for services the agency provides. In this way organizations can document the progress they are making, as well as assure donors that their resources are being well spent. Including outcome information in an agency’s annual report is one way to promote widespread distribution of outcome data for accountability purposes. Make sure the reports are easily accessible to the general public, perhaps through local libraries.
  5. Use Web sites and other electronic media for inexpensive dissemination of outcome information. However, not all populations have equal access to the Web, so it should not be used as the sole means of report dissemination. Web site reporting allows organizations to use colorful presentations, such as multi-colored bar charts, that are often prohibitively expensive in printed documents.
  6. Exercise caution before making major changes based on outcome information. Double-check data for accuracy and look for explanatory information. In some cases, there may be errors in the data, the data may have been collected inappropriately, or data may not accurately reflect the desired outcome. For example, one youth services organization discovered that the lack of improvement in scores on its pregnancy prevention post-tests appeared to be related not to the program’s effectiveness in providing relevant information but to the low reading skills of many participants.