Evaluation research aimed at determining the overall merit, worth, or value of a program or policy derives its utility from being explicitly judgment-oriented. As part of this review, we explore the following question: what are the reasons behind trying to understand and evaluate research impact? Asking why we evaluate impact also raises the questions of whether UK business and industry should not themselves invest in the research that will deliver them impacts, and of who will fund basic research if not the government.

Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts. Differentiating between the various major and minor contributions that lead to impact is a further significant challenge. One way in which change of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys.

Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy comprising more than 100 indicators, these were found not to suffice. When the MICE taxonomy was tested by mapping impacts from case studies (Cooke and Nadim 2011), detailed categorization of impact was found to be too prescriptive. It can also be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline (REF2014 2012). To achieve compatible systems, a shared language is required, and tools that adequately capture the interactions taking place between researchers, institutions, and stakeholders would be very valuable.
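To make the idea of a shared language concrete, the short sketch below shows one way an impact-recording system might enforce a small, value-free vocabulary of indicator types. The categories, names, and validation logic are purely illustrative assumptions made for this sketch; they are not taken from the MICE taxonomy, the REF guidance, or any existing system.

```python
from enum import Enum


class IndicatorType(Enum):
    """Illustrative top-level indicator categories (not the MICE taxonomy)."""
    POLICY_INFLUENCE = "policy influence"
    HEALTH_BENEFIT = "health benefit"
    ECONOMIC_RETURN = "economic return"
    PUBLIC_ENGAGEMENT = "public engagement"
    CAPACITY_BUILDING = "capacity building"


def validate_indicator(label: str) -> IndicatorType:
    """Map a free-text label onto the shared vocabulary, or fail loudly."""
    normalised = label.strip().lower()
    for indicator in IndicatorType:
        if indicator.value == normalised:
            return indicator
    raise ValueError(f"'{label}' is not in the shared indicator vocabulary")


if __name__ == "__main__":
    print(validate_indicator("Policy influence"))  # IndicatorType.POLICY_INFLUENCE
```

Keeping the vocabulary deliberately coarse, and recording the judgement of quality in a separate descriptor, reflects the finding that detailed categorization of impact is too prescriptive.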
As a result, numerous and widely varying models and frameworks for assessing impact exist. Any such framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research; it can be temporary or long-lasting. Attributing impact is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. Although some might find the distinction somewhat marginal or even confusing, the differentiation between outputs, outcomes, and impacts is important and has been highlighted not only for the impacts derived from university research (Kelly and McNicoll 2011) but also for work done in the charitable sector (Ebrahim and Rangan 2010; Berg and Månsson 2011; Kelly and McNicoll 2011).

In many instances, controls are not feasible, as we cannot observe what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted. Collecting this type of evidence is time-consuming, and it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group has dispersed. Media coverage is a useful means of disseminating our research and ideas and may be considered alongside other evidence as contributing to, or an indicator of, impact; it is a metric that has been used within the charitable sector (Berg and Månsson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012).

HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced to 20% for the 2014 REF. This may reflect feedback and lobbying, for example from the Russell Group and the Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and guidance from the expert panels undertaking the pilot exercise, who suggested that impact assessment would be in a developmental phase during the 2014 REF and that a lower weighting would therefore be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).

By allowing impact to be placed in context, the case study answers the 'so what?' question that can result from quantitative data analyses, but is there a risk that the full picture may not be presented in order to demonstrate impact in a positive light? The case study also presents evidence from a particular perspective and may need to be adapted for use with different stakeholders. This leads to a further question explored in this review: what are the methodologies and frameworks that have been employed globally to evaluate research impact, and how do these compare?
In 2009–10, the REF team conducted a pilot study involving 29 institutions, which submitted case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels and, as with the RQF, the panels found that it was possible to assess impact and develop impact profiles using the case study approach (REF2014 2010). From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s.

One purpose of assessing research impact is to demonstrate to government, stakeholders, and the wider public the value of research; another is to inform funding. The most appropriate type of evaluation will vary according to the stakeholder whom we are wishing to inform. SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important prerequisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al. 2007).

Impact may also change in nature over time: thalidomide, for example, originally withdrawn because of the harm it caused, has since been found to have beneficial effects in the treatment of certain types of cancer. Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or the effect of complementary assets, increases. This highlights the problem that the full impact of a piece of research may take a considerable amount of time to develop, and that, because of this time and the increasing complexity of the networks involved in translating the research and its interim impacts, it becomes more difficult to attribute impact and link it back to a contributing piece of research.

To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, by asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. A further concern is that universities which can afford to employ consultants or impact administrators will generate the best case studies.
Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side by side (Scoble et al.). The case study approach, recommended by the RQF, was combined with significance and reach as the criteria for assessment. In the UK, evaluation of academic and broader socio-economic impact takes place separately; this distinction is not so clear in impact assessments outside the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of the value and change created through research. To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with the impact occurring during an assessment window from 1 January 2008 to 31 July 2013.

Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes, and this will influence the way in which evaluation is approached. Professor James Ladyman, at the University of Bristol, a vocal adversary of awarding funding based on the assessment of research impact, has been quoted as saying that inclusion of impact in the REF will create selection pressure, promoting academic research that has more direct economic impact or which is easier to explain to the public (Corbyn 2009). There are also areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.

Impact is not static: it will develop and change over time, and this development may be an increase or a decrease in the current degree of impact. In the majority of cases, a number of types of evidence will be required to provide an overview of impact, and although a variety of types of indicators can be captured within systems, it is important that these are universally understood. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts, to provide a network of data. While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed.
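As a rough illustration of what such a network of data might look like, the sketch below links outputs, interactions, and interim impacts as nodes with typed relations, so that a claimed impact can be traced back to the interactions and outputs that fed it. The structure, field names, and example entries are assumptions made for this sketch only; they do not describe SIAMPI or any existing institutional system.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Node:
    """A research output, an interaction event, or a claimed (interim) impact."""
    node_id: str
    kind: str          # e.g. "output", "interaction", "interim_impact"
    description: str


@dataclass
class ImpactGraph:
    """Impacts linked back to outputs and interactions, rather than a linear chain."""
    nodes: Dict[str, Node] = field(default_factory=dict)
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (from, to, relation)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, source_id: str, target_id: str, relation: str) -> None:
        if source_id not in self.nodes or target_id not in self.nodes:
            raise KeyError("both ends of a link must already be recorded")
        self.edges.append((source_id, target_id, relation))

    def contributors_to(self, impact_id: str) -> List[Node]:
        """Walk the links backwards to list everything feeding a given impact."""
        found, frontier = [], [impact_id]
        while frontier:
            current = frontier.pop()
            for source, target, _ in self.edges:
                if target == current:
                    found.append(self.nodes[source])
                    frontier.append(source)
        return found


if __name__ == "__main__":
    g = ImpactGraph()
    g.add_node(Node("out1", "output", "journal article on compound screening"))
    g.add_node(Node("int1", "interaction", "advisory meeting with clinical stakeholders"))
    g.add_node(Node("imp1", "interim_impact", "compound enters phase I trial"))
    g.link("out1", "int1", "discussed_at")
    g.link("int1", "imp1", "informed")
    print([n.node_id for n in g.contributors_to("imp1")])  # ['int1', 'out1']
```

A graph of this kind, rather than a linear chain, is what allows the recursive, multi-stakeholder character of impact described above to be recorded.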
Clearly, a potential new drug may fail at any one of the development phases it must pass through, but each phase can be classed as an interim impact of the original discovery work on the route to the delivery of health benefits; the time at which an impact assessment takes place will therefore influence the degree of impact observed. From an international perspective, the REF represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. It should also be noted that the understanding of the term impact differs between users and audiences, which underpins a further question explored in this review: what are the challenges associated with understanding and evaluating research impact?

In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators value free, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels.

The collation of data is already being undertaken for academic impact and outputs: for example, Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR Metrics in the US, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for the capture of information are taken into account.
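As an illustration of the kind of text mining mentioned above, the sketch below clusters a handful of hypothetical project summaries using TF-IDF features and k-means. It is only a minimal sketch of the general technique and assumes scikit-learn is available; it is not the pipeline actually used by the Research Portfolio Online Reporting Tools, STAR Metrics, or the ERC.

```python
# Cluster short project summaries by topic: vectorize with TF-IDF, group with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Randomised trial of a new anti-inflammatory compound in arthritis patients.",
    "Screening a chemical library for anti-inflammatory drug candidates.",
    "Sediment core analysis of North Atlantic climate variability.",
    "Ocean circulation modelling and past climate reconstruction.",
]

# Represent each abstract as a TF-IDF vector, then assign it to one of two clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(abstracts, labels):
    print(label, text[:60])
```

Grouping projects in this way supports collation of outputs by research theme, but linking those clusters onward to socio-economic impact still depends on the interaction and evidence records discussed above.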