Performance indicators
Ray Thomas, 14 June 2006

The Preface of the 1999 White Paper Building Trust in Statistics, written by Tony Blair, boasts that the paper seeks a ‘new relationship with citizens based on openness and trust’. The Preface says that statistics will ‘allow people to judge whether the Government is delivering on its promise’, and adds: ‘Our aim is to build a platform that will establish the UK as world leader in the provision of quality statistical information’.


The Preface well illustrates Blair’s gift of the gab. It is easy to imagine Blair seductively using these words in a TV interview, wholly convincing the interviewer and ninety-nine per cent of the TV audience. It is easy to imagine how the masterly touch (my Government will be judged in the same way) is presented with a shy, self-deprecating smile, expressing willingness to be a martyr to the justice of being judged by statistics. But Blair’s Preface also illustrates that the gift of the gab can disguise self-contradictory arguments and unrealistic aspirations.

The endorsement given by Tony Blair to the use of statistics as indicators of the performance of the Government ushered in an era in which measurement of performance has become the norm in most organizations in the public sector.  The use of statistics as performance indicators can be said to have increased openness.  But the use of performance indicators has also reduced trust in statistics and in the organizations that produce them.  There is often suspicion that organizations have been diverted from their prime functions into the pursuit of statistical targets. 


Advocating the use of performance indicators can be seen as ideologically parallel to advocating policies designed to extend competition in the private sector of the economy.  The profits made by private firms are measures of success or failure in the market place.  Profitable firms grow. Unprofitable firms stagnate, are taken over, or go bankrupt.   


In the same kind of way performance indicators can be portrayed as measures of success of individuals and organizations in the public sector.    Those who score well should be rewarded in some way.   Those who score poorly should become the focus of attention for management takeover or have their responsibilities transferred to other organizations.  


But there is an important difference. The profit criterion used to argue for extension of the private sector is supported ideologically by Adam Smith’s concept of an invisible hand. The invisible hand identifies the unintended consequence of profit seeking by individual firms as leading to the benefit of society through the production of goods and services. But the main point of performance indicators is that they are visible. Visibility supports the production of statistics, such as league tables, that give a measure of external control. But visibility can also give rise to a variety of unintended consequences that are not of benefit to society.


In the 1990s most commentators had major reservations about the use of targets and performance indicators. Peter Smith at York University, one of the few statisticians interested in performance indicators at that time, wrote of the unintended consequences stemming from the operation of performance indicator systems, identifying tunnel vision, sub-optimisation, myopia, measure fixation, misrepresentation, gaming and ossification. Smith also pointed to the possible neglect of unmeasured aspects of performance and/or changes in the nature of the performance measured.


(Smith, Peter C (1995) 'On the unintended consequences of publishing performance data in the public sector', International Journal of Public Administration, 18, 2/3: 277-310)


But Blair’s endorsement triggered new growth. In a few areas, like major heart surgery, a non-government organization has taken control of the production and presentation of the performance statistics. But generally the growth in the use of performance indicators since 1999 has vastly extended the influence of statistics and the exercise of Government and managerial power into the lives of citizens, and especially into the activities of public service workers.


The case of the use of performance indicators for major heart surgery is instructive. In this area the issues had been well explored in the United States. One problem is that crude indicators based on the percentage of successful operations militate against the measured performance of surgeons who accept high-risk patients. The solution developed in Britain was for surgeons to develop their own indicators that took account of such measurement problems and that gave them control over the way the statistics were presented. Such systems allow the statistics to be used for self-management, enabling individual surgeons to understand how their performance compares with that of other surgeons and groups of surgeons.


(See http://heartsurgery.healthcarecommission.org.uk/information-for-patients.aspx)
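The measurement problem can be illustrated with a small sketch. The figures and risk scores below are invented for illustration (they are not drawn from any real scheme): a crude success percentage punishes surgeons who accept high-risk patients, whereas a ratio of observed to expected deaths, based on each patient's pre-operative risk, does not.

```python
# Illustrative sketch only: how risk adjustment changes the ranking that a
# crude mortality percentage would produce. All numbers are invented.

def crude_mortality(outcomes):
    """Fraction of operations ending in death (1 = death, 0 = survival)."""
    return sum(outcomes) / len(outcomes)

def risk_adjusted_ratio(outcomes, predicted_risks):
    """Observed deaths divided by deaths expected from each patient's
    pre-operative risk probability (a EuroSCORE-style figure, say).
    A ratio near 1.0 means performance in line with the case mix."""
    return sum(outcomes) / sum(predicted_risks)

# Surgeon A takes low-risk patients; Surgeon B takes high-risk ones.
a_outcomes, a_risks = [0, 0, 0, 0, 1], [0.02, 0.03, 0.02, 0.04, 0.05]
b_outcomes, b_risks = [0, 1, 0, 1, 0], [0.30, 0.40, 0.25, 0.35, 0.30]

# Crude rates make B look twice as bad as A ...
print(crude_mortality(a_outcomes), crude_mortality(b_outcomes))
# ... but relative to case mix, B's one extra death among very ill patients
# is close to expectation, while A's single low-risk death is well above it.
print(risk_adjusted_ratio(a_outcomes, a_risks),
      risk_adjusted_ratio(b_outcomes, b_risks))
```

On a crude percentage the surgeon with the harder case mix looks worse; on the observed-to-expected ratio the ranking reverses. That reversal is the measurement problem the British surgeons' own indicators were designed to address.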


Measuring the performance of heart surgeons is a far from typical use of performance indicators.   The Government wants national tables able to distinguish, for example, the best and worst ten schools on some measure of performance.   To support such tables the heads of schools have to compile tables for classes and perhaps individual teachers.   Teachers have to keep records of their pupils’ performance.   


Such systems can be characterised as hierarchical.  Indicators can be used as a tool of management by Government departments.   A cascade of indicators can be used at other levels of management to manage or control subordinates or subordinate organisations.  At every level another component is added to management powers.   But it is not clear that the performance indicator system for schools, or the systems that have come into existence for most other public sector organizations, allow for the use of performance indicators for self-management on the lines of the system under development for major heart surgery.


Professional statisticians have given qualified support to the widespread use of performance indicators.  A Royal Statistical Society group has examined such questions as the objectivity of performance indicators and devised tools for management to help separate random variations in an indicator from a significant change.  But the RSS Report assumes managerial use and does not identify or discuss the use of performance indicators for self-management.


(Royal Statistical Society, Performance Indicators: the good, the bad and the ugly, at: http://www.rss.org.uk/PDF/PerformanceMonitoring.pdf) 
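The kind of tool the RSS group devised can be illustrated with a simplified sketch (an invented example, not the report's actual method): if an indicator is a proportion, binomial sampling variation alone will keep the observed rate inside approximate 95% control limits most of the time, so only movements outside those limits should be read as a significant change.

```python
# Simplified illustration of separating random variation from signal
# using a normal approximation to binomial sampling variation.
import math

def control_limits(p0, n, z=1.96):
    """Approximate 95% limits for an observed proportion over n cases,
    assuming the true underlying rate is still p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return p0 - z * se, p0 + z * se

def is_signal(p0, successes, n):
    """True only if the observed rate falls outside the limits, i.e. the
    change is unlikely to be sampling variation alone."""
    lo, hi = control_limits(p0, n)
    rate = successes / n
    return rate < lo or rate > hi

# A school's pass rate drops from 70% to 62% in a cohort of 100 pupils:
print(is_signal(0.70, 62, 100))    # inside the limits: consistent with noise
# The same drop observed across 1000 pupils lies outside the limits:
print(is_signal(0.70, 620, 1000))  # a genuine change in performance is likely
```

The point of such tools is to stop managers rewarding or punishing what is mere noise; whether the check is applied by a manager or by the practitioners themselves is exactly the self-management question the RSS Report leaves unexamined.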


The RSS Report and many other observers point to three kinds of behavioural problem that can be associated with the use of performance indicators. It is acknowledged that the subjects of performance indicators can give too much weight to the indicator and not enough to the primary task that the indicator is supposed to measure; it is suggested that the use of statistics as performance indicators may increase the risk of falsification of the statistics; and it is pointed out that a focus on performance indicators for some aspects of performance can lead to neglect of those aspects of performance that are not measured.


The Government’s Consultation Document on the proposed legislation for an independent statistical service quotes with approval the United Nations Fundamental Principles of Official Statistics:


“Official statistics provide an indispensable element in the information system of a democratic society, serving the government, the economy and the public with data about the economic, demographic, social and environmental situation. To this end, official statistics that meet the test of practical utility are to be compiled and made available on an impartial basis by official statistical agencies to honour citizens’ entitlement to public information.”


The use of statistics for performance indicators is not inconsistent with these principles. But the managerial use of performance indicators does not accord with public perceptions of an information system for a democratic society. Rather, such uses are reminiscent of the economic systems of soviet societies, where the setting of targets and a system of rewards for achieving targets and penalties for failure were prime instruments for managing the economy.


Blair’s apparently innocent invitation to judge whether the Government is delivering on its promise encourages the adoption of performance indicators. Performance indicators support managerialism, not democracy. The failure of the Consultation Document, and of public discussion, to distinguish between the use of statistics as performance indicators and other uses of official statistics is dangerous.


Strengthening the statistical system for performance indicators would add the danger that statistics would be falsified in order to meet targets, and could be expected to reduce trust in official statistics. The enshrinement of performance indicators in a legal framework would also have wider implications. The substitution of statistics for what should be human judgment would be sleepwalking towards a soviet type of command society.


Ray Thomas is Research Fellow in Official Statistics at the Open University.