Knowledge That Counts: Points Systems and the Governance of Danish Universities




Susan Wright
Introduction

The term ‘governance’ as applied to universities has more than one meaning. From the fourteenth to the sixteenth centuries in England, it was widely used to mean the way an institution like a university was run, how a landed estate or even a whole country was kept in good order, and how an individual conducted business by maintaining ‘wise self-command’ (Oxford English Dictionary 1989 VI:710). In almost all contexts – except universities – these meanings had fallen into desuetude by the eighteenth century, only to burst suddenly back into use in the 1990s. Their decline coincided with governing becoming the specialized role of a ‘government’ which, through the machinery of a centralized bureaucracy, managed the population and economy of a nation state. The resurgence of ‘governance’ in the 1990s heralded a change in the political order, when

‘government’ … becomes less identified with ‘the’ government – national government – and more wide ranging. ‘Governance’ becomes a more relevant concept to refer to some forms of administrative or regulatory capacities. (Giddens 1998: 32–33)

There were three main characteristics of this shift from government to governance in the 1990s. First, instead of bureaucratically managing society, governments increasingly accomplished the maintenance of order and the delivery of services through networks of agencies and actors operating on global, national and local scales, including trans-national agencies, international corporations, state and public institutions, arm’s-length agencies, and civil society organizations (Rhodes 1997). Governments were to encourage enterprise and competition by contracting out service delivery to such networks of partners (known in Canada as alternate service delivery [ASD]) (Osborne and Gaebler 1992). Second, what had to be governed was no longer a set of clear organisational structures but a network of often obscure linkages. Contracting organisations were free to manage their own production processes or enter subcontracts with others. Government tried to maintain control through technocratic measures such as setting performance targets and key performance indicators, conducting audits, checking contract compliance, and basing payment on the number and quality of outputs (Dean 1999). Often these technocratic measures acted, in Foucault’s terms, as ‘political technologies’ (Dreyfus and Rabinow 1982: 196), in that the political and ideological aims of government were not made explicit but were embedded in the detailed operations of apparently politically neutral and purely administrative systems. Third, this system of governing relied on individuals freely exercising their own agency; but, often learning from the pedagogies embedded in political technologies, they were to exercise their freedom in ways that achieved the government’s vision of order and contributed to the international success of the competition state (Rose 1989; Pedersen 2011).

This new meaning of governance echoed the old in that it spanned the three scales of the self-management of individuals, the running of institutions, and the ordering of a country, now part of a reconceptualised space of global competition. But between the old and the new meanings of governance there was an important shift in who had the power to define ‘good governance’. It was no longer up to people or institutions to maintain their own ‘wise self-command’ in a bottom-up fashion. Now ‘good governance’ was defined ‘top-down’ and was achieved when the government’s ideas of the proper order of the country were enacted in the management of organizations and the conduct of individuals. The apotheosis of this art of government was to find a single technical measure that would operate on all three scales at once and that would simultaneously order the competitive state, the enterprising organization, and the ‘responsibilized’ individual according to the government’s ideological and political vision.

This chapter will focus on universities, one of the few institutions that kept alive the original idea of governance when it otherwise fell into disuse.1 In that original sense, governance refers to the array of ways that a university orders its own affairs by managing its relations with the state, maintaining its own internal organization, and instilling certain values and expectations of individual conduct. Now this meaning of governance is overlain by the resurgent meaning, in which it is government that defines the contribution of universities to the competitive state, the ways that the institution should be organized and managed, and the appropriate behaviour for ‘responsible’ academics and students to adopt. As will be discussed in this chapter, the Danish government’s reforms of universities are a good example of the introduction of this top-down form of governance. In particular, the Danish government’s system for allocating a scale of points to different kinds of research publications was a political technology that aimed to bring the ordering of the sector as a whole, individual institutions, and academic staff into alignment. The government used the points system to establish competition for funding between universities, which was considered a necessary prerequisite for them to perform well on the world stage; it made clear to newly appointed strategic leaders what priorities to set for their organization; and every individual quickly learnt what was expected of them to maximize ‘what counts.’ In short, the points system was an attempt, through a single mechanism, to set up an institutional circuit that took governance from the world stage to the self-management of the individual on the front line and back.

Systems of governance do not always work as designed. The chapter will start by setting out the two strands of thinking that informed the university reforms in Denmark. One strand was the reform of the public sector to create a competition state, and the other strand refocused the work of universities on what the government deemed necessary for Denmark to succeed in a global knowledge economy and maintain its position as one of the richest countries in the world. In both strands of the reforms, performance indicators, such as the points system, became an important mechanism of university governance. The second section summarizes the long process of designing the points system for the government to use in funding algorithms for the sector, and for university leaders to use as a tool of management. The third section is based on fieldwork in a faculty which had long used such points systems. Academics had internalized the system’s priorities, but had also internalized conflicts between their own motivation and the system’s incentives, with resultant high levels of stress. The fourth section, based on fieldwork in another faculty where the points system was a new phenomenon, explores the ways that academics used different combinations of pragmatic accommodation and principled resistance to the system’s imperatives, until finally it was withdrawn.2
Governance and the Global Knowledge Economy

A major reform of university governance in Denmark started with a University Law in 2003. This law was in keeping with the wider reform of the public sector that the finance ministry had been developing since the 1980s (Wright and Ørberg 2008). Under this approach, called ‘Aim and Frame Steering’ (mål- og rammestyring), ministers were no longer to run the bureaucratic delivery of services. Instead, they were to focus on formulating the political goals for their sector and the legal and budget framework through which those goals were to be realised. The delivery of services and the achievement of the political goals were then contracted out to agencies. In a process Pollitt et al. (2001) call ‘agentification,’ parts of the bureaucracy and other state-run organizations, like universities, were turned into such agencies, with the legal status of a person and the power to enter into contracts with the ministry. The ministry steered these agencies by writing clear performance goals into the contracts, along with numerical and quality measures for their achievement. For example, the ministry’s contracts with universities contain long lists of the numbers of, and percentage rises in, outputs – graduates and PhDs, publications, externally funded projects, and so on – to be achieved within a defined period. The state auditor annually checks the universities’ reports on the fulfilment of these contracted targets. Output and performance measures have also become more important in the allocation of state funding, on which the universities are predominantly reliant. Payments for teaching had already (since 1994) been based entirely on the numbers of students who passed their exams each year. Following the 2003 law, the ministry worked on defining and weighting the criteria for increasingly basing the rest of university funding on outputs and for allocating this funding competitively between the universities. As will be shown below, a points system based on the number of publications and proxies for their ‘quality’ became a key mechanism for shifting towards output and performance payments in the government’s new way of steering the university as one of its public sector ‘service providers.’

While these changes to the steering of universities were clearly part of a reform of the whole public sector, the minister for research also tied them closely into a strategy for Denmark’s future economic success. Denmark had been an avid participant in the work of the Organisation for Economic Co-operation and Development (OECD), which through the 1990s promoted the idea that the future lay in a global economy operating on a new resource – ‘knowledge.’ This idea was taken up by other transnational organizations like the European Union (EU), the World Economic Forum (WEF), and the World Bank (WB). They argued that a future global knowledge economy was both inevitable and fast approaching. Each country’s economic survival, they maintained, lay in its ability to generate a highly skilled workforce capable of developing new knowledge and transferring it quickly into innovative products and new ways of organising production. The OECD in particular developed policy guidance for its members (the thirty richest countries in the world) to make the reforms deemed necessary to survive this global competition. It measured and ranked their performance and galvanized national ministers into an emotionally charged competition for success and avoidance of the ignominy of failure.

Universities were thrust centre stage in this vision of the future. They were to ‘drive’ their country’s efforts to succeed in the global knowledge economy. As well as aiming to attract the ‘brightest brains’ through the fast-growing and lucrative international trade in students, many governments set a target for 50 per cent of school leavers to gain higher education, and sought to reform education so that students not only acquired high-level cognitive skills, but also the ‘transferable’ skills thought necessary for employment in a global knowledge economy. Policy makers widely adopted the idea that university research should shift from Mode 1 (motivated by disciplinary agendas) to Mode 2 (motivated by social need) (Gibbons et al. 1994). In a bowdlerized version of this argument, the Danish government’s catchword for their university reform was ‘From idea to invoice,’ arguing that academics should develop closer relations with industry and focus on results that would lead to innovations. The OECD developed checklists and tool kits, guidance and best practice to help governments reform universities. These included changing the management of universities to make them capable of entering into partnerships with industry and the state and of delivering the performance these partners expected.

The Danish University Law in 2003 brought the agendas for both the competition state and the global knowledge economy to bear on university management. Whereas previously academic, administrative, and technical staff and students had elected the leaders and decision-making bodies at every level of the organization, all of these were abolished, apart from elected study boards, which continued to be responsible for the design, running, and quality of education programmes. Now a governing board, with a majority of members appointed from outside the university, appointed the rector, like the CEO of a company. He or she appointed deans, who appointed heads of department. In what was called ‘unified management’ (enstrenget ledelse), each leader was accountable to, and had an obligation of loyalty towards, the superior who had appointed him or her, and was no longer, as in the previous structure, primarily accountable to the people he or she led. Although a later amendment required the ‘unified management’ to involve employees in decisions, the faculty and departmental boards, whose rights and powers had involved members of the university in decision making, had been abolished. For the first time, the rector now spoke ‘on behalf of’ or even ‘as’ the university, as a coherent and centrally managed organization (Ørberg 2007). This was a clear break with the idea of the university as a community of academics, administrators, and students.

By changing the legal status, state steering, financing, and management of universities, the minister claimed he was ‘setting universities free’: he was both making them into agencies with the power to enter contracts with the state, industry, and other organizations, and giving the new leaders ‘freedom to manage’ – it was up to them how they ran ‘their’ organization as long as they delivered on contracts. With the rector as the head of a strongly line-managed and coherent organization, empowered to decide on the strategic use of the university’s funding and acting as an interlocutor with the ministry, politicians, and industry, the minister claimed that government could restore its trust in universities. When, shortly afterwards, the minister initiated mergers between universities and with government research institutes, he felt at least three Danish universities were now capable of appearing within the top ten in Europe as measured by one of the world ranking tables (Kofoed and Larsen 2010). In his view, universities now had the kind of organization needed to drive Denmark’s efforts to succeed in the global knowledge economy and could be trusted with increased government funding to that end. A Globalization Council was established by the prime minister and produced a strategy that argued that Denmark’s continuing status as one of the world’s wealthiest countries largely depended on the performance of its universities (Government of Denmark 2006). To achieve this, a ‘Globalization Pool’ substantially increased university budgets during the years 2010–12. In the government’s view, to incentivize Danish universities to become ‘Global Top Level Universities’, this funding had to be allocated competitively and on the basis of ‘quality indicators’ (Government of Denmark 2006: 22). Right from the start, academics worried that the indicators would be used not just to establish competition within the sector, but as tools for internal management: to allocate funding between faculties and departments, to incentivize the behaviour of individual staff, and even to hire and fire them (Emmeche 2009b). The ministry’s steering group stated explicitly that the ‘quality indicators’ were expected to have an effect on the behaviour of individual researchers, motivating them to publish their research in the most prestigious ‘publication channels’, which can be used to compare research quality internationally (FI 2007; FI 2009b). In the ministry’s task of devising the output indicators and the formula for the competitive funding system, the agendas of the public sector reforms and the preparation for the global knowledge economy came together. By choosing indicators that counted in the world rankings, restructured the sector competitively, and made clear to each individual what counts, it seemed the ministry had found a mechanism that brought these three elements of governance into alignment.


Devising a System for Competitive Allocation of Funding

The process of devising indicators that would mobilize the whole university sector, the internal organization of each institution, and each individual academic, and that would improve Denmark’s standing in the global university rankings, is presented diagrammatically in Figure 1.


[Insert Figure 1 from accompanying file]


In autumn 2006, the ministry started to look for ‘quality’ indicators for teaching, knowledge transfer (videnspredning) and research on which to allocate funding competitively between universities. In negotiation with Danish Universities, it was decided that, for teaching, the existing calculation of outputs – the number of students who passed their year’s exams – could also be used as a measure of ‘quality’. Some academics doubted this, having argued repeatedly that a system which rewarded faster throughput of students with fewer dropouts and fewer failures might improve ‘value for money’ but might also, perversely, incentivise the lowering of standards. The government rejected this argument, claiming it could rely on academics’ professionalism to maintain standards.3 Paradoxically, the government designed indicators to change academics’ behaviour yet also depended on academics resisting those incentives. The ministry set up working groups to devise new quality indicators for outputs in knowledge transfer and research. The knowledge transfer working party produced a report that was criticized for poorly defining the relevant activities, which ranged from industrial innovation to enhancing public debate and democracy. Eventually, knowledge transfer was dropped as an indicator.

The working party charged with devising an indicator for research quality began by reviewing available European models. It rejected the U.K.’s Research Assessment Exercise, based on peer review panels, as too costly in staff time. The Leuven model combined a number of indicators – PhD completions, external funding, and citation rates for publications. Research commissioned by the humanities faculties of Danish universities showed that measures based on commercially produced citation indexes were inappropriate for the humanities, as humanities faculty published very little in the international journals covered by those firms (Faurbæk 2007).4 It was agreed that there should be one measure for all disciplines. The working party therefore adapted the Norwegian model (Schneider 2009), which allocated differential points to journal articles, chapters in edited volumes, and monographs depending on whether or not they were ‘top level’ and peer reviewed. In this model, ‘quality’ is not assessed directly but inferred from the journal’s or publisher’s peer-reviewing and ‘international’ status (defined as publishing in an international language with under two-thirds of contributors from the same country). The Australian system of auditing and ranking universities, called Excellence in Research for Australia (ERA), entailed similar ranked lists of journals until the minister cancelled them at the last minute. He said this was because university managers were using the lists in an ‘ill-informed and undesirable way’ to set targets for academics to publish in top-ranked journals (Carr 2011). In contrast, the Danish government’s aim was precisely for managers and academics to treat measures as targets.
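To make this two-level logic concrete, the following is a minimal sketch of how a ‘top level’ (level 2) list might be derived under the rule, described below, that level 2 journals cover at most the top 20 per cent of the ‘world production’ of articles in a field. The ranking criterion, data, and function names are hypothetical, not the ministry’s or the disciplinary groups’ actual procedure.

```python
# Hypothetical sketch: pick 'level 2' journals for a field so that they
# cover at most 20% of the field's world production of articles.
# Journals are assumed to be pre-ranked by a disciplinary group's
# judgement of prestige (most prestigious first).

def select_level2(ranked_journals, cap=0.20):
    """ranked_journals: list of (name, yearly_article_count) tuples,
    ordered from most to least prestigious."""
    total = sum(count for _, count in ranked_journals)
    level2, covered = [], 0
    for name, count in ranked_journals:
        if (covered + count) / total > cap:
            break  # adding this journal would exceed the 20% cap
        level2.append(name)
        covered += count
    return level2

journals = [("Journal A", 120), ("Journal B", 70), ("Journal C", 300),
            ("Journal D", 150), ("Journal E", 360)]
print(select_level2(journals))  # ['Journal A', 'Journal B'] (19% of output)
```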



The Danish model required all academics to enter their publications into their university’s database each year. These would be compiled into a national database, with points allocated to each publication according to an authorized list of which journals and publishers were ‘level 1’ or ‘level 2’. Level 2 journals were defined as the leading international journals that published the top 20 per cent of the ‘world production’ of articles in a field. To create this authorized list, in late 2007 the ministry, with the agreement of Danish Universities, set up 68 disciplinary groups involving 360 academics. They delivered their lists to the ministry in March 2009. The ministry found that the same journal could appear on two lists at different levels – presumably because it was central to one discipline but more peripheral to another. When the ministry published its consolidated list on its web site, 58 of the 68 chairs immediately signed a petition saying it was not an appropriate tool for distributing funding and asking the ministry to remove the list from its web site (Forskerforum 2009a; Richter and Villesen 2009). One disciplinary group found that 89 of the journals they had placed in the ‘lower level’ had been upgraded to ‘top level’ whilst 30 of their most important journals had been downgraded (Richter and Villesen 2009). In another disciplinary group, seven coffee table magazines suddenly appeared in the ‘top level.’ No Danish journals or Danish publishers appeared as ‘top level’ at all, disadvantaging subjects such as Danish language, literature, history, and law (Larsen, Mai, Ruus, Svendsen and Togeby 2009). Overall, one per cent of all the journals academics had selected as important had disappeared (Larsen et al. 2009). The press confronted the minister, who admitted, ‘It’s not as easy as one may think to make a ranking list of 20,000 journals,’ and the list disappeared from the ministry’s web site (Richter 2009; Forskerforum 2009b). The discipline groups were asked to re-work their lists, but this time each journal was allocated to a specific discipline to avoid overlaps. They delivered their lists again in September 2009, but 32 of the disciplinary group chairs signed a statement that they could not vouch for this indicator and advised against using it for funding allocation (Emmeche 2009b: 2). The disciplinary groups had worked for two years and still had only listed journals; there were no lists of the publishing houses for monographs and edited volumes relevant to each discipline, let alone decisions about which of them were ‘level 1’ and ‘level 2.’ The ministry therefore published the ideal version of the points system alongside a ‘temporary’ one. By default, the temporary list seems to have become permanent. It notably downgraded the points for monographs and edited volumes, which are the publication outlets used predominantly by the humanities (see Table 1).
Table 1. Danish publications points system

Form of publication                          Low level    Top level    ‘Temporary’
Scientific monograph                         5 points     8 points     All: 6 points
Article in scientific journal                1 point      3 points     Unchanged
Article in edited volume with ISSN number    1 point      3 points     All: 0.75 points
Article in edited volume                     0.5 point    2 points     All: 0.75 points
In addition, a PhD thesis initially earned two points, a ‘habilitation’ or professorial thesis five points, and a patent one point. Source: FI 2009b.
(Later, PhD theses were removed from the points system to avoid them counting twice, as ‘completed PhDs’ was already a category used for the distribution of the block grant; see Table 2.)
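To illustrate, here is a minimal sketch of how the ‘temporary’ tariff in Table 1 might be applied to an individual’s publication list. The data structures and function names are illustrative assumptions, not the ministry’s actual implementation.

```python
# Points per publication under the 'temporary' scheme in Table 1:
# monographs and edited-volume articles earn a flat rate regardless of
# level, while journal articles keep the level 1 / level 2 distinction.
TEMPORARY_TARIFF = {
    "monograph": {"level1": 6.0, "level2": 6.0},                # All: 6 points
    "journal_article": {"level1": 1.0, "level2": 3.0},          # unchanged
    "edited_volume_article": {"level1": 0.75, "level2": 0.75},  # All: 0.75 points
}

def publication_points(publications):
    """Sum the points for a list of (form, level) records."""
    return sum(TEMPORARY_TARIFF[form][level] for form, level in publications)

# Example: one level 2 journal article, one monograph, one book chapter.
records = [("journal_article", "level2"),
           ("monograph", "level1"),
           ("edited_volume_article", "level2")]
print(publication_points(records))  # 3 + 6 + 0.75 = 9.75
```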
Now that the ministry had its lists and could calculate the research points for each university each year, it had to decide what weight to give these points in the funding allocation model. An allocation model had already been developed in the late 1990s, based on 50 per cent for teaching, 40 per cent for external funding and 10 per cent for PhD completions, but it was only used to distribute marginal amounts in an ad hoc fashion (Schneider and Aagaard 2012: 195). Now the ministry proposed that research points should be given a 50 per cent weighting, teaching 30 per cent, and knowledge transfer 20 per cent, but Danish Universities rejected this. In 2009 Danish Universities finally suggested (echoing the Leuven model) that the indicators should be teaching (45 per cent), PhD completions (10 per cent), and research (45 per cent). They argued, however, that the research weighting should be divided into 35 per cent for funding from external sources (e.g., contracts with industry or grants from the research council), with the research publication points given only a 10 per cent weighting, although this would increase gradually to 25 per cent. The government agreed to this proposal (FI 2009a).
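The agreed weighting can be illustrated with a small, hypothetical calculation. The assumption that each university’s share of the competitive pool follows its share of sector-wide activity on each indicator is made only for the sake of the sketch; the ministry’s actual formula is not reproduced here.

```python
# Hypothetical illustration of the 2009 indicator weights agreed with
# Danish Universities. A university's share of the competitive pool is
# assumed to be the weighted sum of its shares of each indicator.
WEIGHTS = {
    "teaching": 0.45,            # students passing exams
    "phd_completions": 0.10,
    "external_funding": 0.35,    # e.g. industry contracts, council grants
    "publication_points": 0.10,  # to rise gradually to 0.25
}

def funding_share(university, sector_totals):
    """Weighted sum of a university's shares of each indicator."""
    return sum(w * university[k] / sector_totals[k] for k, w in WEIGHTS.items())

# Example: a university with 10% of passed students, 12% of PhD
# completions, 8% of external funding and 15% of publication points.
uni = {"teaching": 10_000, "phd_completions": 120,
       "external_funding": 80.0, "publication_points": 1_500}
sector = {"teaching": 100_000, "phd_completions": 1_000,
          "external_funding": 1_000.0, "publication_points": 10_000}
print(f"{funding_share(uni, sector):.3f}")
# 0.45*0.10 + 0.10*0.12 + 0.35*0.08 + 0.10*0.15 = 0.100
```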


