Times Higher Education

Does university assessment still pass muster?

Most universities still rely on exams and assessed essays to grade their students. But as the fourth industrial revolution, employability and student satisfaction all rise up the agenda, many experts suggest that assessment needs to resemble real-world tasks much more closely. Anna McKie marks the arguments

Published on May 23, 2019
Last updated May 23, 2019
Source: Getty

What is the defining image of the academic side of undergraduate life? For many centuries, it has surely been the student bent over the exam hall desk or library table, scribbling furiously. And although the modern library image would more accurately feature a computer screen with a vast number of tabs open, the time spent sweating over exams or essays remains most graduates’ abiding memory of pursuing their degree. Assessment is at the heart of university life, and has a significant impact on what and how students learn – and, ultimately, what they go on to achieve.

And while there have been some innovations in assessment practice over the past decade, “there is still a huge reliance on closed book examinations and coursework essays in most subjects”, according to Neil Morris, director of digital learning at the University of Leeds.

There are many in higher education who have long believed that this “learn and regurgitate” assessment formula is not conducive to true learning. One recent paper that looks into alternative ways of examining students points to a wealth of research showing that memorisation is the “lowest level of learning”, as students quickly forget what they memorise. The paper, “Using principles of authentic assessment to redesign written examinations and tests”, published in January in Innovations in Education and Teaching International, notes that there is a strong knowledge-testing culture in many global regions, including South America, South-east Asia and the Middle East. However, “memorisation ill-equips students for the complex demands of life and work [because it makes it] difficult to engage in deep learning”.

According to Morris, the traditional focus of universities’ assessment practices on “knowledge recall, reasoning and structured writing” has the benefit of being scalable to large cohorts of students, allowing universities to ensure that each individual meets strict criteria for degree standards.

However, while this approach does cover some of the essential abilities needed in the workplace, it does not match employers’ increasing need for a wider set of skills as the fourth industrial revolution begins to unfold, with the internet allowing instant access to vast swathes of knowledge and artificial intelligence developing to the point where it can take on many traditional graduate roles. In Morris’ view, this means that the focus of modern universities should move away from knowledge retrieval and retention and on to knowledge assimilation, problem-solving and team-working.

Phil Race, a visiting professor of education at Edge Hill and Plymouth universities, agrees. For him, the cramming that traditional exams encourage is “far from how we access information in real life now, and will [remain anachronistic] until we have the internet in exams”. But, according to Jon Scott, pro vice-chancellor for student experience at the University of Leicester, it is important for universities to teach students to evaluate “how credible and reliable” various online knowledge sources are likely to be.

“Those skills are really important and it’s very important that those are tested,” he says. “The days where there was one textbook and you had to learn it are long gone.” And while all disciplines have a “fundamental knowledge base that students just have to know” – memory of which can legitimately be tested in an exam – the extent of that knowledge base varies by subject. Medical or law students, for instance, must be able to recall a fairly large amount of information without having to look it up if they are to be successful in the workplace. But other, less vocational subjects do not have the same imperative.

“The most important thing is making assessment much more akin to real life,” Scott says. “Some disciplines are closer to that already, while some have quite a long way to go.”


Over the years, many universities have adopted assessed essays as an alternative to exams, on the grounds that they allow for more considered reflection and remove the advantage that traditional exams offer to those who cope better with pressure and write faster. But Scott notes that, for the majority of subject areas, essays are not necessarily any more closely related to the way that knowledge is used in real life than exams are.

“For example, [in the workplace] you might need to write something very succinct. Writing something short is actually quite challenging and students should learn how to get their message across in that format,” he says. “Another area is how to handle data and how to present it, as the level of misuse of data by politicians and the press, for example, is frightening. You could do a timed activity under exam conditions, such as an interpretation of a piece of information or a piece of data, which requires students to demonstrate knowledge and use it effectively.”

With such an array of potential approaches, how is a university department to decide which to adopt? According to Emma Kennedy, education adviser (academic practice) at Queen Mary University of London, they need to ask: “What does success look like? What does your outcome at the end of the degree look like?” The answers to those questions “can be a bit lost in the way we assess current students”, she says.

“One thing that is really valuable that we don’t often do is rewarding students for taking notice of feedback and improvement,” she says. One way of doing that is known as ipsative assessment, which grades students on how much they have improved on their previous assignment rather than on how they compare with their peers. It has yet to take off widely, but a 2011 paper by Gwyneth Hughes, a reader in higher education at the UCL Institute of Education, argues that adopting it would focus university assessment on genuine learning. According to the paper, “Towards a personal best: a case for introducing ipsative assessment in higher education”, published in Studies in Higher Education, “ipsative feedback has the potential to enable learners to have a self-investment in achievable goals, to become more intrinsically motivated through focusing on longer term development and to raise self-esteem and ultimately performance. An ipsative approach also might encourage teachers to provide useable and high quality generic formative feedback.”

For Jesse Stommel, executive director of the division of teaching and learning technologies at the University of Mary Washington, a public liberal arts and sciences university in Virginia, most assessment mechanisms in higher education don’t assess learning – despite universities’ claim that this is what they value most. “Meaningful learning resists being quantified via traditional assessment approaches like grades, academic essays, predetermined learning outcomes and standardised tests,” he says. Exams, meanwhile, are at their best when they are used as a formative tool for learning, he adds.

“Our approaches treat students like they’re interchangeable…but not every student begins in the same place,” Stommel says. He believes that an approach called “authentic assessment” is more meaningful and enables students to take ownership of their learning, and to collaborate rather than compete with each other.

The idea of authentic assessment is at the crux of the debate about assessment in higher education, especially as the fourth industrial revolution transforms our ideas about what is needed from the modern graduate. It is focused on testing “higher-order skills”, such as problem-solving and critical thinking, through tasks that are more realistic or contextualised to the “real world”.

Opinion varies on what exactly constitutes authentic assessment, but almost all experts agree that appropriate exercises mimic professional practice, such as group activities or presentations.

Stommel says authentic assessment also gives students a wider audience for their academic work, such as their peers, their community and potentially even a digital audience. “For example, I’m a fan of collaborative assessment, which allows the students the opportunity to learn from and teach each other,” he says.

In some disciplines, authentic assessment is already in full effect. The “objective structured clinical examination” is a standard method used in medical schools. Students are observed undertaking particular practical procedures, such as taking a patient’s history or doing a blood test, allowing them to be evaluated on areas critical to healthcare professionals, such as communication skills and the ability to handle unpredictable patient behaviour.

Students also appear to appreciate authentic assessment. A 2016 paper by researchers at Deakin University, “Authentic assessment in business education: its effects on student satisfaction and promoting behaviour”, published in the journal Studies in Higher Education, looked at the use of authentic assessment in an undergraduate business studies course and found that it increased student satisfaction, especially among those who are highly career-oriented.


Redesigning assessment in this way has also been promoted as a method of reducing contract cheating. Although essays can test more than rote memorisation, they are particularly prone to this kind of cheating, and one study estimated that as many as one in seven students had used the services of an essay mill.

To combat that problem, Louise Kaktiņš, a lecturer in the linguistics department at Macquarie University in Sydney, proposes the use of more in-class assessments, particularly exams.

“For all subjects…it should be compulsory to have both mid-semester exams and final exams,” Kaktiņš writes in a 2018 paper, “Contract cheating advertisements: what they tell us about international students’ attitudes to academic integrity”, published in Ethics and Education. “The only way to see the level of students’ output is to ensure that the majority of gradable work is incorporated into exams because an exam is the only thing that students cannot acquire via contract cheating,” she adds.

Thomas Lancaster, senior teaching fellow in Imperial College London’s department of computing, and Robert Clarke, a lecturer at Birmingham City University, have written extensively on contract cheating and have advocated the introduction of face-to-face examinations, or an oral component to complement written assessments.

Meanwhile, an article, “Contract cheating and assessment design: exploring the relationship”, published in Assessment & Evaluation in Higher Education, found four assessment types that students perceive as the least likely to be outsourced to essay mills: in-class tasks, personalised and unique assignments, vivas, and reflections on placements. However, according to the analysis of survey responses from 14,086 students and 1,147 educators at eight Australian universities, these are also the forms least likely to be set by educators. Only between a quarter and a third of lecturers said that they set such exercises with at least moderate frequency.

In 2016, the Australian government commissioned the same researchers to investigate the use of authentic assessment to tackle the problem. The resulting paper, “Does authentic assessment assure academic integrity? Evidence from contract cheating data”, forthcoming in Higher Education Research and Development, analysed 221 assignment orders to essay writing services and 198 assessment tasks in which contract cheating was detected. The results show that all kinds of assessment – authentic or not – were “routinely outsourced by students”.

“A lot of research and advisory documents from higher education quality bodies advise academics to use authentic assessment design as a way of ensuring academic integrity,” says Cath Ellis, associate dean (education) at UNSW Sydney. However, the research shows that “it doesn’t do what most people think it does. One thing you would assume was that if authentic assessment design did make it possible to design out contract cheating you wouldn’t see orders placed on contract cheating websites for these kinds of assessments. But, lo and behold, we found that assessment tasks were being contract cheated that were both authentic and not.”

Ellis points out that many of the academic articles claiming that authentic assessment prevents cheating offer no evidence to substantiate it. “We hypothesise that people think that it will be effective because it makes assessment tasks so compelling or enjoyable that students won’t want to cheat,” she says.

The study’s results do not mean that designing assessment effectively won’t reduce the use of essay mills, or that academics shouldn’t set authentic assessment tasks, Ellis stresses. But universities should not become complacent and “stop looking for [cheating] or talking to students about it”.

Moreover, educators must not focus all their attention on designing assessment to stamp out contract cheating, because that would be to the detriment of education and of those who do not cheat, Ellis adds. Authentic assessment is also more time consuming for those who implement it, she admits, and it can be hard to convince those working within the higher education sector to change, particularly at large, ancient institutions.

For Race, a better idea would be to switch to assessment via computer-based activities (with full access to the internet). This, he says, would have advantages over both exam- and essay-based approaches in that it would neither disadvantage those with slow handwriting nor offer any opportunity for cheating: “We know whose work it actually is if the technology can verify who’s using it and when,” he says.

But Kennedy of Queen Mary points out that there are external factors that can make changing the way universities do things difficult. For example, there are rules in the UK about being clear to students about what assessment they will experience and making sure that students on comparable degrees are assessed comparably.

Universities are expected by governments to rank students on their ability upon graduation, and employers want to know what particular degree classifications actually mean. Students, too, want to know what the value of their degree is, especially now that they are paying high fees, Kennedy says. “We don’t want to see them as consumers and they don’t necessarily see themselves as customers – which is commendable – but they are paying a lot of money and they do see it as an investment.”

All of this makes it difficult for universities to unilaterally transform their assessment practices.


Part of the concern around the meaning of degree classifications relates to the grade inflation that is perceived to have become rife in higher education in recent decades. The concern has been particularly high in the UK, with some suggesting that so many students now get a 2:1 or first-class degree that it would be better to move to a more granular way of grading them, such as a US-style grade point average, which gives an average mark based on assessments throughout the undergraduate years.

Observation of rising GPAs in the US has diminished initial hopes that such a switch could help address grade inflation, but advocates point to other benefits. A major one is that a GPA system lessens the stakes at exam time; although UK universities do implement assessments throughout the undergraduate years, a much higher weighting is accorded to end-of-year exams.

According to Kennedy, education scholars in the UK have recently become interested in the idea of a whole-programme approach to assessment. “Most [university departments] think about how they assess at a modular level, but students don’t take modules [in isolation]. It’s worth thinking about how different kinds of assessments are balanced across the programme,” she says. “Practically, you want to assess everything at the end, but that can put a lot of stress on students.”

That is particularly true for disabled students. Under a recent Twitter hashtag, “why disabled students drop out”, one student wrote that she found her degree’s 10 days of three-hour final exams – which were the “be-all and end-all” of her course – too much to handle. “Assessments (almost) entirely exam based and pushed together at the end are inherently able-ist,” she wrote.

Mary Washington’s Stommel agrees. “Our approaches need to be less algorithmic and more human, more subjective, more compassionate,” he says. “Most of our so-called objective assessment mechanisms fail at being objective and prove instead to reinforce bias against marginalised populations.”

Moreover, Leicester’s Scott adds that “students need to know how they’re getting on. If you don’t have any assessment till the end, it’s very hard for them to tell.” For this reason, carrying out formative assessment throughout the learning process is very valuable for both students and staff, allowing tutors to identify those who might need additional support. The problem, Scott warns, is that it is often challenging to get students to engage with formative assessments if they don’t count towards the end degree result.

Race thinks that the solution is for universities to adopt a broad array of more learner-centred approaches to assessment, with “the mystery removed” such that students are able to “practise self-assessing and peer-assessing to deepen their learning of subject matter”. He also thinks teaching staff should be required to undertake continuing professional development to keep up with advances in assessment and feedback. This, he says, should both increase the quality of assessment and streamline its quantity.

But it is clear that such a world remains a long way off. As things stand, Race regularly meets “many lecturers for whom the burden of marking students’ work and giving them feedback has spiralled out of control”.

It seems that everyone – staff, students and employers alike – would benefit from a fresh approach.


anna.mckie@timeshighereducation.com

POSTSCRIPT:

Print headline: Time to get real



Reader's comments (15)

This article seems focused on humanities and other “essay based” areas in which exams do (maybe) focus on “regurgitating facts”. However, it seems to be totally unaware of the nature of assessment in mathematics and physical sciences, in which examinations mainly test problem solving and calculation skills. Testing this sort of thing through coursework or other open assessments causes difficulties as students can easily copy others’ work, or just get someone else to do the assignment. Of course, much assessment is already through other means than examinations, where this is appropriate (eg laboratory work), but there seems to be no good argument to change the balance.
A very timely article, particularly when we seem to be suffering from a moral panic over "authenticity". My only concern is that much of this work has been going on for over 20 years and it is only just being noticed by "education advisers" (let alone policy makers). I make no apologies for signposting colleagues to the work of David Boud - Boud, D., 2007. Reframing assessment as if learning were important. In Rethinking assessment in higher education (pp. 24-36). Routledge. Focusing on learning would be a good starting point.
No need for apologies - Boud is great - but just a small point: 'education adviser' is the title of an educational developer: you seem to think it's some sort of management or policy role, but it's an academic role. I notice the work, because I do the work. I'm in the field. It... sounds like you aren't?
Thanks Emma, sorry if this seemed to diminish your role. My point here is that advisers or developers are part of a small community with a specific teaching focus - where is the rest of the academic community?
Meanwhile the secondary education system has moved back to 100% exams because the above methods don't seem to work. Admittedly there's a lot of right-wing ideology that influenced this change, but it's also worth thinking through the benefits of exams before disparaging them. Surely that is critical thinking at its best?
For final degree classifications, a useful tool is a 'capstone project' - like the final year project common in STEM subjects, where the student spends a large amount of time during their final year working on an independent extended piece of work with supervision from an academic, then presents that work in a 'demo' as well as writing a report/dissertation about it, which is marked. It works well in computer science (my discipline, and I'm the final year project tutor), but I'm not sure how well it would work in the humanities. However, that doesn't address the issue for the entirety of a student's career in university. As the article states, there are some facts in any discipline that need to be LEARNED, however much you want to concentrate on interpretation of information rather than rote learning of it. We have to remember that those determining such things are those who rose to the top of the educational system as it is, and to step outside of "it worked for me" can be quite hard.
Universities are really the oddest place for teaching/learning workplace skills. The best place for that is – the workplace. Universities should focus on cultivating scholarship: an array of critical-analytical-creative skills of mind, adaptable to all contexts, but sadly threatened and diminished since the rise of crass utilitarianism in the 1980s.
Agreed! There should be no training in universities but employers will happily let the taxpayer fund this if we do not keep universities as places of learning. Nobody can surely contend that university is any more than the start of the process of building expertise even for an academic.
A long comment on a narrow subject. Some of the most valuable bits are in the comments. One size of assessment will not fit all. Some degrees are highly vocational. STEM subjects differ from humanities. We first need to decide what the "higher purpose" of a university education is, and only then consider the role of an undergraduate degree (which might be undertaken elsewhere than at a university). We need to define what outcomes and outputs we seek from a degree before we can design a better way of achieving those objectives. Preventing cheating should be way down the list of considerations when designing the process of assessment.
I see a lot of things worth aspiring to here. However, there are several factors that act as obstacles, some of which are outlined in the article. The first is legislation and the quality regime. Another comes down to identity politics: the idea of ipsative assessment presupposes that you know the student and have ways of gauging their personal progress (it is also best served by some level of ongoing contact with the same few members of staff). This is anathema to those who support anonymous marking regimes. The idea of programme-driven assessment is very compelling but very hard to bring about when you have high levels of programme flexibility and students from different programmes taking the same classes (unless you are prepared to test them differently according to programme; viz. the first point above). Another point comes down to resources: in a mass education system, we need to work with assessment types that are economical, trustworthy, and have been proven to work. Finally (for now) we assume that teaching staff are equipped with the will and expertise to put into place new forms of assessment. This assumption, like many, is problematic.
Essay assessment is also not without flaws. Too many students get around it by using essay mills. In fact, many wealthy middle-class students' parents go out of their way to employ dedicated postgraduate students or unemployed postdocs to help the undergraduates write their essays (one of my former students' fathers employed an unemployed postdoc on a full-time contract for £30,000, with a performance-related bonus, to ensure the student got the top grades). People are getting around the usual filters for plagiarism and out-of-pattern writing style that would be picked up by software such as Turnitin; in my experience, the gamers of the system always seem to be a step ahead of the sophisticated technological measures against cheating. The old-fashioned way of manually sifting through students' work and calling them to the tutor's room for a quick chat on grounds of suspicion, though more reliable and trusted, is just not practical or workable, given the sheer number of students and the administrative workload tutors and professors are already burdened with. (We struggle even to meet assignment marking deadlines; many just glimpse through, with the second marker at times relying on the first marker's observations because he or she can't be bothered to look through or, as is more often the case, is overburdened with other work duties, often prioritising the work of graduate students or final-year dissertations/theses.) A better solution would be to introduce classroom seminar participation and active critical engagement in classes as part of the overall assessment, all of which should be graded alongside other usual methods.
In fact, such methods of grading classroom participation and interactive engagement are already used at some American universities/colleges and in most postgraduate professional schools in the US, eg law schools providing the JD programme. Many academics are all but giving up hope on essay-style assessments being the correct indicator for students. As a former lecturer/professor, I don't have much faith in the essay assignment system. An alternative would be essay assessments that are either ungraded (ie pass or fail only, with constructive feedback) or conducted as formative assessment not counting towards the final degree classification, so as not to pressurise students into aiming for the ultimate grade at any cost, which drives them towards cheating their way out/gaming the system.
Many of us have been listening to views on assessment for a considerable number of years. In my time I've seen views and ideas come and go, as well as their protagonists. New ideas were, however, little more than old repackaged thoughts linked to those other problems coming from the 'new age' which had to be 'managed' by the emerging new kids on the block. One such problem was, and still is, massification. It's simply futile discussing any different assessment pattern without discussing how time-hungry and how expensive it is. Managers will naturally support any innovative approach as long as it requires the same, or fewer, resources. 'Ipsative' assessment, for example, requires the assessor to know the student and to remember the work ... !! The time required for such assessments would devour an academic's time and could lead to him/her skimming the work and inventing the feedback unless extra resources are allocated. Please don't be naive if you don't think such things happen. Additionally, trying to justify linking a student's work to an imaginary 'real world', into which they will eventually emerge, is just silly. We should stop using this phrase. Employers, collectively, can't agree on what a skill is, let alone articulate what they want from a graduate. They may be able to offer definitions for their own purposes but there is no common ground and no industry voice. Even if there were, why should we 'train' students for industry needs when they should be doing that themselves? Training is not our brief, it's theirs ... and they should fund it. Any assessment pattern which purports to mimic reality and to prepare a graduate for employment is quite simply a fraud. As I've previously written, our brief is to provide employers with the educated raw material, and they should do the rest.
The article rehearses many of the drivers of assessment choice - tradition, funding (almost all alternatives to exams will cost more to set up or perfect), a focus on other things that academics do that bring rewards (and it's not assessment). One, as yet, unexplored driver is digital technology. We should not simply think about traditional assessments (exams and essays) being facilitated by IT; we should think about the other key things that technology can assist and that we expect in our graduates, and then develop ways of valuing them. Just because we have used exams for years does not mean that they are the right tool to use.
The argument I hear time and time again is that "they NEED to know this stuff and prove they know it via an essay-based exam," or "they NEED a chunky piece of writing to really get into the detail". And then most UG students (science based) come out able to ramble on spectacularly, writing research papers that take 12 pages to get anywhere resembling a point, but still use Google for the actual knowledge and are unable to write concisely to get an important point across. Yet module/unit leaders are all too often unwilling to budge on the importance of "their" assessment, without proper consideration of the bigger picture and the ways in which we access knowledge now compared to even the '90s. That "perceived innovative ideas are rehashed versions of old ones" is a nonsense counterpoint to change - yes, they aren't actually "new", but they may work better now than then because the sector, the students and the technology are so different in comparison that you may as well be talking about not using a mobile because they tried that in the '80s and it just wasn't that good. I now run an online open-book unseen exam in final year to try to better mimic the kind of expectations, environment and means graduates may experience once they're finished - and it makes marking and feedback SO much easier.
"Moreover, educators must not focus all their attention on designing assessment to stamp out contract cheating because that would be to the detriment of education and those who do not cheat", Ellis adds. The biggest detriment to students who do not cheat is the fact that those who do cheat get better grades and outcompete the honest students for jobs.
