Accelerated Academy: evaluation, acceleration, and metrics in academic life
30 November-2 December 2016, Leiden (Scheltema, Marktsteeg 1)
Sarah de Rijcke, Centre for Science and Technology Studies, Leiden University
Björn Hammarfelt, University of Borås, Sweden | Leiden University
Alex Rushforth, Centre for Science and Technology Studies, Leiden University
From the 1980s onward, there has been an unprecedented growth of institutions and procedures for auditing and evaluating university research. Quantitative indicators are now widely used, from the level of individual researchers to that of entire universities, serving to make academic activities more visible, accountable and amenable to university management and marketing. Further demands for accountability in academia can be related to general societal trends described under the headings of the audit society (Power 1997) and the evaluation society (Dahler-Larsen 2011). As part of broader transformations in research governance, indicators on publications and citations now permeate academia: from global university rankings to journal-level bibliometrics such as the journal impact factor and individual measures like the h-index. Yet it is only recently that considerable interest has been directed towards the effects these measures might have on work practices and knowledge production (cf. de Rijcke et al. 2015), and the role they might play in accelerating academic life more generally (cf. Vostal 2016).
The Accelerated Academy draws together a number of cross-disciplinary conversations about the effects that the acceleration towards metric forms of evaluation is having upon research, and the implications this holds for living and working in contemporary academia (Felt et al. 2009). Building on the successful maiden edition of the Accelerated Academy series in Prague in 2015, this year’s Leiden conference will focus especially on the following questions:
- What does acceleration mean in different research contexts?
- What are the implications of digitally mediated measurement and tools for quantifying scholarly performance?
- What are the knowledge gaps regarding the effects of metrics on scientific quality and societal relevance of research?
- How can we harness the positive effects of performance measurement in universities and minimize the adverse ones?
Ulrike Felt (University of Vienna) – Of timescapes and knowledge scapes: Re-timing Research and Higher Education
Peter Dahler-Larsen (University of Copenhagen) – The Evaluation Society and Academia