Performance indicators: good, bad, and ugly (2005):
Sheila M. Bird, Sir David Cox, Vern T. Farewell, Harvey Goldstein, Tim Holt, Peter C. Smith
Journal of the Royal Statistical Society: Series A (Statistics in Society), Volume 168, Number 1, January 2005, pp. 1-27.
Type of publication:
Journal article
Link to publication:
http://www.rss.org.uk/PDF/PerformanceMonitoring.pdf
Link to review:
http://www.blackwell-synergy.com/doi/abs/10.1111/j.1467-985X.2004.00333.x
Number of pages:
27
Language of publication:
English
Country of publication:
UK
NSD-reference:
2468
This page was last updated:
30 July 2007
Country that is the subject of the study:
- United Kingdom
Instruments in the operational governance of state agencies:
- 2.1 Formal steering dialogue
- 2.3 Management systems and tools
Other instruments in the constitutive/operational governance:
- 3.1 Performance audit and internal evaluations
- 3.2 External evaluations
Study commission:
- Both
Study type:
- Effect study/implications/results
Type of effect:
- Cost-effectiveness
- Value-related effects
Sector (COFOG):
- The state in general
Summary:
A striking feature of UK public services in the 1990s was the rise of performance
monitoring (PM), which records, analyses and publishes data in order to give the public a better
idea of how Government policies change the public services and to improve their effectiveness.
PM done well is broadly productive for those concerned. Done badly, it can be very costly
and not merely ineffective but harmful and indeed destructive.
Performance indicators (PIs) for the public services have typically been designed to assess
the impact of Government policies on those services, or to identify well performing or underperforming
institutions and public servants. PIs’ third role, which is the public accountability of
Ministers for their stewardship of the public services, deserves equal recognition. Hence, Government
is both monitoring the public services and being monitored by PIs.
Especially because of the Government’s dual role, PM must be done with integrity and
shielded from undue political influence, in the way that National Statistics are shielded. It is
in everyone’s interest that Ministers, Parliament, the professions, practitioners and the wider
public can have confidence in the PM process, and find the conclusions from it convincing.
Before introducing PM in any public service, a PM protocol should be written. This is an
orderly record not only of decisions made but also of the reasoning or calculations that led to
those decisions. A PM protocol should cover objectives, design considerations and the definition
of PIs, sampling versus complete enumeration, the information to be collected about context,
the likely perverse behaviours or side-effects that might be induced as a reaction to the monitoring
process, and also the practicalities of implementation. Procedures for data collection,
analysis, presentation of uncertainty and adjustment for context, together with dissemination
rules, should be explicitly defined and reflect good statistical practice. Because of their usually
tentative nature, PIs should be seen as ‘screening devices’ and not overinterpreted. If quantitative
performance targets are to be set, they need to have a sound basis, take account of prior
(and emerging) knowledge about key sources of variation, and be integral to the PM design.
Aspirational targets have a distinctive role, but one which is largely irrelevant in the design of
a PM procedure; motivational targets which are not rationally based may demoralize and distort.
Anticipated and actual side-effects of PM, including on individuals’ behaviour and priorities, may
need to be monitored as an intrinsic part of the PM process.
Independent scrutiny of PM schemes for the public services should be set up and must report
publicly. The extent and nature of this scrutiny should be related to the assessed drawbacks
and benefits, reflect ethical concerns, and conform with good statistical practice.
Research is needed into the merits of different strategies for identifying institutions or individuals
in the public release of PM data, into how new PM schemes should be evaluated, and into
efficient designs for evaluating a series of new policies which are monitored by PIs.
The Royal Statistical Society considers that attempts to educate the wider public, as well
as policy makers, about the issues surrounding the use of PIs are very important. High priority
should be given to sponsoring well-informed public debate, and to disseminating good practices
by implementing them across Government.