Times Higher Education

The THE-Microsoft survey on AI

What are university leaders and chief technology officers doing to meet future challenges?

Published on March 28, 2019
Last updated March 4, 2020
Woman looking at robot
Source: Getty

The robots are coming. Future-gazers have been making that prediction at least since Alan Turing speculated in 1950 about the possibility of a machine that could fool an interlocutor into believing that they were talking to another person.

But the imminent arrival on our roads of self-driving cars (see the article “How do we decide what is right? The ethicist’s view”, below) has brought home to many people that the kinds of artificially intelligent machines long imagined by science fiction writers and visionary scientists are finally being realised.

But what does the AI revolution mean for universities? To find out, Times Higher Education has teamed up with Microsoft to conduct a major survey of more than 100 AI experts and university leaders.

The findings include:

• Only a minority of universities currently have an AI strategy, but most plan to develop one


• Universities find it difficult to recruit and retain staff able to teach and research in AI

• AI will increase employers’ demand for university graduates and will not lead to university closures


• AI will be able to assess students, provide feedback and generate and test scientific hypotheses at least as well as humans can

• But universities will not cut teaching, research or administration staff and may even recruit more.

Robot woman
Source: Getty

Private corporations are in a desperate race to put affordable AI machines on the market, and politicians are doing all they can to facilitate that, anxious for the enormous tax revenue that national success in this area is expected to yield – not to mention the military superiority.

Last year, for instance, Darpa, the US government’s Defense Advanced Research Projects Agency, announced a $2 billion investment to develop next-generation AI systems capable of “contextual reasoning”. China, the US’ great geopolitical rival, is also making major investments, as is Europe. The UK is investing almost £1 billion (£300 million of it public money) in AI as part of its industrial strategy, which will include 1,000 new PhD places for those working on AI and related subjects. France and Germany are also investing in excess of £1 billion each.

And universities themselves are independently seizing the opportunities to get ahead; last year, the Massachusetts Institute of Technology announced a $1 billion commitment to establish a new college of computing, focusing on AI.

Yet there is widespread anxiety about the socio-economic consequences that this so-called fourth industrial revolution might have. The mushrooming volume of ink spilled in recent years on the topic is usually predicated on fears that many jobs – including some currently done by graduates – will be taken over by machines, potentially leading to mass unemployment. For instance, writing in the book Future Frontiers: Education for an AI world, Richard Watson, a futurist and visiting researcher at Imperial College London, questions the role of higher education in its current guise if “advanced machine learning and autonomous systems are capable of doing almost everything humans can do at a fraction of the cost”. He worries that universities are “teaching the next generation to become rapidly redundant in the face of accelerating technological change”.

Others argue that AI will create as many jobs for humans as it eliminates, but unease persists among those likely to be most affected by the changes. Of the 409 students who responded to a survey conducted by researchers at London’s Hult International Business School, only 31 per cent feel hopeful about the prospect of living and working with AI and automation, while 18 per cent feel mainly fear. Only 20 per cent feel confident and very prepared for what is to come.

“It was clear from the findings that universities need to do more to discuss this topic and also relieve [students’] feelings of uncertainty,” says Carina Paine Schofield, senior research fellow at Hult and co-author of the study. “[They] are the first generation for whom automation will definitely impact their working lives, yet their education system is only just beginning to wake up to the consequences of automation.”


It is with such warnings in mind that the THE-Microsoft survey was launched. What do those best positioned to give an informed view believe the consequences of the fourth industrial revolution will be for higher education – and how are universities readying themselves to respond to those changes? If AI significantly reduces the demand for human labour, will it also diminish the demand for a university education – or perhaps increase it, as desperate jobseekers bolster their CVs with ever more qualifications? And even if it does, will that translate into more jobs for academics – or will teaching and even research largely be taken over by intelligent machines, too?

Man wearing robot-message t-shirt
Source: Getty

The uncertainty surrounding the socio-economic effects of AI is reflected in the fact that just 31 per cent of the 111 respondents agree that national policymakers understand the social consequences that the AI technology they are funding and facilitating is likely to have over the next 10 to 15 years, compared with 52 per cent who disagree.

Yet, at the same time, respondents appear remarkably confident that universities and academics will remain relevant. Nearly all agree that AI will be a very big issue for higher education. And while only 41 per cent of the respondents – 80 per cent of whom are computer science academics – say that their institutions have specific AI strategies, most of those who don’t are acutely aware of the omission, and most of the university leaders among the respondents express an intention to develop a strategy where one does not already exist.

Meanwhile, although only 43 per cent of respondents say that their institution has allocated internal budget for AI-related institutional projects, 78 per cent believe that their university has the right skills internally to work on such projects, and nearly three-quarters of the 15 university leaders and seven chief technology officers in the survey have drawn on internal faculty expertise in AI to plan their institutional futures.

Regarding that planning, the name of the game seems to be to prepare for ongoing expansion, rather than agonising over how to manage decline. Some 94 per cent of respondents – and all the university leaders – believe that AI will increase the demand from employers for university graduates, while only 2 per cent expect it to drop.

Accordingly, 86 per cent of respondents disagree – most of them strongly – with the suggestion that AI will lead to university closures, and 94 per cent disagree that it threatens their own universities’ futures. Contrariwise, 95 per cent see it as an opportunity.

That does not mean that work does not need to be done to realise that opportunity. Only 24 per cent of respondents agree that their university is optimally configured physically for the age of AI, compared with 35 per cent who disagree. And many see AI leading to a shake-up in the administrative roles that universities will need to cover; as well as IT, student services and admissions are expected to see the biggest changes.

Regarding student admissions, Alice Gast, president and vice-chancellor of Imperial College London, told THE’s Asia Universities Summit last year that universities will use AI to select the best candidates for degree courses, noting that Unilever is already using AI and social media to screen candidates for internships and graduate jobs.

Some respondents welcome the prospect of fewer administrators. Olena Kaikova, a senior researcher in computer science from the University of Jyväskylä in Finland, put it this way: “Who would want to do a boring routine job if it can be delegated to AI robots?”

Those whose mortgages depend on such jobs may beg to differ, of course. But perhaps they ought not to worry too much. More than half of THE’s respondents (56 per cent) – and just under half of university leaders (46 per cent) – expect AI either to increase universities’ need for administrative staff or to have no effect on it over the next 10 to 15 years. Of those who expect it to lead to job cuts, the vast majority predict that those cuts will account for less than a quarter of current jobs.


One group that universities are desperate to recruit is computer science experts. One approach is to train them in-house. For instance, the Korea Advanced Institute of Science and Technology (KAIST) has just set up a new Graduate School for AI, aimed at turning 60 students a year into what KAIST president Sung-Chul Shin calls “top-tier AI engineers”.

Shin’s ambition is to make the school one of the “top five AI schools in the world”, in terms of number of publications in the field, by 2025. It currently ranks 10th in the Computer Science Rankings run by Emery Berger, a computer science professor at the University of Massachusetts Amherst, but Shin expects that, with the help of an allocated budget of 22 billion KRW (£15 million), on top of 23 billion KRW in external grants, the school will “break new ground”.

In realising this ambition, it will no doubt help that, according to Shin, KAIST’s current AI researchers are already “the cream of the crop”. But not every institution can say the same – and none can be overly confident of holding on to what they have, given the huge salaries on offer in the tech industry.

According to Karin Immergluck, executive director of Stanford University’s technology licensing office, losing existing staff to industry is “definitely becoming more of a problem in Silicon Valley – not just for Stanford but for the University of California, San Francisco and Berkeley as well”. But, regardless of their proximity to Silicon Valley, not one of THE’s respondents finds it easy to recruit and retain academic staff able to teach and research AI, and most find it “difficult” (48 per cent) or “very difficult” (41 per cent).

Fredrik Heintz, a senior lecturer in computer science at Linköping University in Sweden, plumps for the latter option, explaining that “universities cannot compete in salary and other compensations with the private sector. Too much administrative overhead is another major issue.”

An Australian university leader, who asked not to be identified, agrees that “the uncertain, less-well-paid life of an academic” often compares poorly with a career in industry.

But Immergluck feels quite relaxed about the situation, depicting the migration of academics into industry as “just another form of tech transfer. The general public and industry are benefiting from the knowledge that a professor has gained over years of doing research at a university. Of course, no one likes losing their star faculty but it just is [a fact]. It’s a part of being in that kind of very interactive environment where universities and industry are collaborating very closely.”

Several respondents also highlight the fact that the brain drain has the virtue of facilitating academic collaboration with the tech world, which can be mutually beneficial. Moreover, the direction of travel is not all one way. According to Immergluck, US academics often return from a spell of “three or four years” in industry. And while they are away, “their previous university is the first one that they are going to think of when they want to form collaborations”.

Technician working on robot
Source: Getty

Still, the pull of industry is such that although our respondents rank research as the area of university management and practice likely to be most affected by AI, they are less sure that the biggest AI research breakthroughs will occur in universities: 38 per cent believe that they will, but only 7 per cent strongly believe that, while 23 per cent disagree (the rest are unsure).

Jyväskylä’s Kaikova explains that, in her view, “universities do not have enough resources for the breakthroughs”. But it isn’t just financial resources that universities lack. Speaking at THE’s Research Excellence Summit: Asia-Pacific in Sydney earlier this year, Pascale Fung, professor of computer science and engineering at the Hong Kong University of Science and Technology, warned that one of the biggest challenges facing universities is access to the huge amounts of data needed to develop AI systems.

“Universities today cannot compete against the Googles of the world because they do not have that data. So we are actually facing the challenge of not having equal access to the raw material of our research,” she said.

The best way forward, she tells THE, would be for tech companies to share some of their data with universities in an “anonymised and randomised” way, so as to comply with data protection laws. Universities could also focus their efforts on “more specialised topics within the relevant research areas”. This would allow them to “maximise the impact of their research without being marginalised”, she explains – although it is “tricky to do and requires insight and vision”.
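Fung does not spell out a mechanism, but the minimal version of “anonymised and randomised” sharing is something like pseudonymising identifiers and shuffling record order before release. A toy sketch in Python (the records, fields and salt are invented for illustration; genuinely privacy-law-compliant anonymisation typically demands stronger measures, such as aggregation or differential privacy):

```python
import hashlib
import random

# Invented records standing in for the platform data a tech firm might share.
records = [
    {"user_id": "alice@example.com", "query": "weather tomorrow", "clicks": 3},
    {"user_id": "bob@example.com", "query": "cheap flights", "clicks": 1},
]

SALT = "research-release-2019"  # kept secret by the data owner

def pseudonymise(record):
    """Replace the direct identifier with a salted one-way hash, so that
    researchers can link a user's records without learning who the user is."""
    out = dict(record)
    digest = hashlib.sha256((SALT + record["user_id"]).encode("utf-8")).hexdigest()
    out["user_id"] = digest[:12]
    return out

release = [pseudonymise(r) for r in records]
random.shuffle(release)  # randomise the ordering so the sequence itself leaks nothing

for row in release:
    print(row)
```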

Robots on display
Source: Getty

Many survey respondents suggest that the question of which sector will produce the biggest breakthroughs is not conducive to an either/or answer. “Fundamental research will still be done in universities, where constraints are more relaxed than in industry,” predicts Eduardo Alonso, a reader in computing at City, University of London. “On the other hand, natural competition will bring significant applied breakthroughs developed in companies.”

Link枚ping鈥檚 Heintz agrees that universities will tend to focus on basic research, 鈥渨hich means that their breakthroughs will be significantly delayed compared to the applied research done by companies鈥. Hence, 鈥渢he public perception will probably be that industry is doing most of the research, when, in fact, [it is] piggy-backing on what the universities have done for centuries鈥.

For Fung, it is “imperative” for universities to be more creative in their employment practices to allow their academics to hold part-time positions in industry. She says this is already happening in the US, but she fears that universities in other countries might struggle with public perceptions: “In Hong Kong, for example, our universities are publicly funded, so it is difficult to justify [giving someone] a full-time professor role while allowing that professor to also be part-time in industry. But these are the challenges we are facing, so some kind of innovative thinking needs to happen.”

Joint positions with industry could also allow universities to tap into tech firms’ enormous research budgets; 76 per cent of respondents believe that funding agencies are not currently investing enough in AI research. There is also widespread concern that not enough funding is going into researching the philosophical and ethical aspects of AI (see “How do we decide what is right? The ethicist’s view”, below). Asked whether AI researchers are sufficiently aware of the ethical implications of their work, only 36 per cent of our respondents agree, while 41 per cent disagree.

“There is a lot of work going on in AI by way of tech – boys’ toys and home robotics, creepy gadgets and so on – but what a lot of people are trying to look into is the ethical tone of it all,” says Sandra Leaton Gray, professor in education at UCL Institute of Education. “Unfortunately that’s a minority sport: it’s really difficult to do any humanities- and social sciences-based work on it because grants are not tailored towards it.”

Leaton Gray is part of a new specialist interest group set up within the British Educational Research Association to redress the dearth of AI research in the discipline. “Amazing projects have not been funded because they are difficult to review,” she says. “How do you go about reviewing a proposal for something that nobody really understands yet? So many of the AI and education funding proposals are arbitrarily rejected by confused reviewers with little expertise in what is quite a new field. It is imperative that we get this right, first by more enlightened grants for social science-related AI research, not just more money for the tech promising to bring in more money.”

Man and robot
Source: Getty

What about AI’s impact on the way research is conducted? To what extent could AI actually take over the research process itself? Could there ever be an AI version of Alan Turing (who was voted the “greatest person of the 20th century” by BBC viewers, largely for his groundbreaking use of a proto-computer to break Nazi codes during the Second World War)?


Almost all respondents expect AI to have a significant or very significant effect on the way that research is conducted. Indeed, this is already happening to some extent. For instance, Lee Cronin, Regius chair of chemistry at the University of Glasgow, has been using AI bots since 2010, most recently to mix chemicals methodically and at random in the hope of discovering beneficial new reactions.
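Cronin’s actual platforms are described in his group’s papers; the sketch below is not that system, just an illustrative Python loop showing the general shape of such a search, mixing a methodical sweep with random combinations and flagging unusually reactive outcomes for a human chemist. The reagent stock, scoring function and threshold are all invented.

```python
import itertools
import random

# Invented reagent stock for the illustration.
REAGENTS = ["A", "B", "C", "D", "E", "F"]

def run_reaction(combo):
    """Stand-in for the robotic platform: in reality this would dispense the
    reagents and return an in-line sensor reading; here it is a random score."""
    return random.random()

def explore(n_random=50, novelty_threshold=0.95):
    """Screen every pair methodically, plus some random triples, and flag
    anything that scores as unusually reactive."""
    candidates = list(itertools.combinations(REAGENTS, 2))  # the methodical part
    candidates += [tuple(random.sample(REAGENTS, 3)) for _ in range(n_random)]  # the random part
    flagged = []
    for combo in candidates:
        score = run_reaction(combo)
        if score > novelty_threshold:
            flagged.append((combo, score))
    return flagged

for combo, score in explore():
    print(f"Flag for human follow-up: {combo} (score {score:.2f})")
```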

Respondents are confident that this is just the beginning. Most agree that AI will have the cognitive capacity to participate in scientific advancement, at least to some extent. Exactly half believe that AI will be able to direct the testing of scientific hypotheses at least as well as humans can, and 52 per cent think machines will be able to generate new scientific hypotheses as well as humans can. Respondents are less sure whether AI will be able to generate new theories, concepts or insights in non-scientific disciplines, but 26 per cent believe that it will.

Cronin himself, though, is more sceptical, remarking that his bots “have discovered nothing on their own, since they all have a human overlord”. For this reason, he strongly disputes the suggestion that the involvement of silicon brains will reduce the need for carbon-based researchers. “My robots are going to make boring stuff obsolete so we can focus on being creative,” he says.

And whatever their views on the potential of AI, most of our survey respondents agree that it will only complement, rather than replace, human scientific input; as Heintz puts it: “Humans and AI [working] together is…much more powerful than either one or the other.” The vast majority disagree with the suggestion that AI developments over the next 20 years will result in decreasing demand for humans in the lab. That view holds even for research assistants, who typically carry out the more routine tasks: just 20 per cent of respondents expect demand for them to drop, compared with 72 per cent who do not. Of the latter, 46 per cent strongly disagree with the suggestion.

THE-Microsoft survey on AI - second graph

Teaching staff also have little to fear from AI, our respondents predict. Nearly half (45 per cent) believe that AI will not result in any teaching staff being made redundant over the next 10 to 15 years. Meanwhile, 25 per cent expect their institutions to take on more teaching staff, with many predicting that the rise of AI will increase the demand for education from humans seeking to remain employable. Only 7 per cent of respondents think that AI will lead to more than a quarter of teaching jobs being lost, and just 1 per cent expect more than half to go.

Asked how great the impact of AI will be on curricula and pedagogy, most respondents say that it will be “significant” (56 per cent) or “very significant” (33 per cent). Respondents are reasonably confident that AI will be able to provide student feedback at least as well as humans can, with student assessment another area where AI could play a big role. But they are less confident that an AI teaching assistant could run a tutorial or, especially, give a lecture: just 15 per cent of respondents believe AI could rival a human at that task, compared with 64 per cent who disagree.

The key reason cited is that learning is stimulated by a human presence. According to Heintz, “all aspects of teaching and learning can be improved by AI-technology, but learning to a large degree is a social process, where doing it together with other people is important”. A computer scientist from the Republic of Ireland agrees: “A human knows what it’s like for a human to learn, and this will be hard to replicate for AI. Some students will always benefit from a human ‘overseer’ providing motivations/deadlines, and some will feel that they need human contact.”

But what students study may well change. As one of the students who participated in the Hult survey put it, “Students across the world will have to face the possibility that perhaps what they are dedicating their lives to studying right now…may soon become redundant.”

Unsurprisingly, computer science is the discipline whose graduates are most frequently predicted to see growing employer demand, followed by engineering, medicine and business. But making such predictions is a very imprecise art, as underlined by the fact that business is also among the disciplines forecast to be most likely to see a decline in demand for its graduates, behind languages but ahead of law.

Meanwhile, respondents are keen on the idea that not only science students but also humanities and social science students will need to be taught specific technical skills to help them program and interact with artificial intelligence productively: 41 per cent of respondents believe that more than three-quarters of the latter will need such training. But what is interesting in the Hult survey responses is the active desire among students for more courses in subjects such as ethics and philosophy. There was also a sense that universities should focus on skills and subject areas where AI is less likely to have an advantage: those that require aptitudes such as complex decision-making, critical thinking, gut instinct, entrepreneurship and emotional intelligence. For this reason, many observers predict that liberal arts degrees will be as much in demand as computer science courses.

In terms of teaching staff’s specific duties, Leif Azzopardi, Strathclyde chancellor’s fellow in computer and information sciences at the University of Strathclyde, says his institution will potentially take on more people “to deliver better services” to students in collaboration with AI. “Teaching staff’s duties will certainly change from mundane tasks such as marking to engaging more with students to create unique learning experiences,” he says. However, “of course, institutes that do not embrace AI will not be as competitive, and will have to make redundancies”.

Yellow robot
Source: Getty

It is, of course, important to remember that university teaching and learning is not just about preparing people for the jobs market. In a separate chapter of the Future Frontiers book, Toby Walsh, Scientia professor of artificial intelligence at UNSW Sydney, stresses that “with society under a period of significant change, we will also need an informed population to navigate this future, and to demand appropriate checks and safeguards. A citizenship educated in ethics, society and civics is therefore essential.”

And most observers agree that the frequency with which people access higher education will increase: “Just as the [first] industrial revolution made it essential that universal education was provided to the young, the AI revolution will make it essential that education is provided to people at every age of their lives,” Walsh tells THE, allowing people to keep their skills up to date.

But he denies that this amounts to a call for wholesale change. “AI won’t change the ultimate mission of universities – educating people to the frontiers of our knowledge and undertaking research to expand that frontier – but it will change how that mission is delivered,” he says. “AI can help flip the classroom, personalise education and tackle the increasingly and distressingly prohibitive cost of delivering that education. Some of the skills that universities help people learn will change. But the skills that will be most in demand will tend to be old-fashioned ones that universities used to deliver, such as analytical and creative thinking.”

This may be particularly true in the West, he predicts, where universities may see their niche in terms of “soft skills and higher ethical standards” – while the likes of South Korea and China, with their bigger research budgets, plough a more purely technological furrow.

Glasgow’s Cronin also cautions university leaders against getting carried away by what he sees as the largely unjustified hype surrounding AI. “The key problem, as ever, is that a small pool of academics have managed to push politicians to think that investing in AI research is going to change the world. I don’t think that is right,” he says.

Universities remain “the cradle of innovation and invention”, he says. “AI machine learning can never replace that until you make a totally new, self-replicating machine or life form with artificial consciousness…and that will remain firmly in the realm of science fiction for many hundreds of years.”

Help with distribution of this survey was provided by the Confederation of Laboratories for Artificial Intelligence Research in Europe.

The results of the THE-Microsoft Artificial Intelligence Survey will be discussed at a THE summit held at Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea from 2-4 April 2019.


How do we decide what is right? The ethicist’s view


For all the intellectual achievements of the past century, many concede that there has been little progress in solving philosophical problems.

There is no broad agreement, for example, about whether free will exists, whether the mind is more than the sum of its parts, or even whether a runaway tram should be diverted from hitting five people at the price of hitting one: the famous “trolley problem” first posed by Oxford philosopher Philippa Foot in the 1960s.

Perhaps that explains why only two or three philosophy papers are among the 30 citations in a recent Nature paper, “The Moral Machine experiment”, on the actions that self-driving cars should take in the event of a dilemma resembling the trolley problem.

This is hugely important because we are rapidly entering an era in which artificial intelligence algorithms will determine who lives and who dies, not only in car accidents but also in healthcare and drone warfare. We urgently need a manual of machine ethics – but no one is quite sure how to devise one, or who should be involved.

The Nature paper assumes – drawing on an interview with former US president Barack Obama – that consensus is a critical criterion for determining a “correct” set of ethical principles for self-driving cars. But what the paper reveals is that “we” seem to agree on little other than sparing people over animals, more people over fewer and the young over the old. Drawing on survey results from 2.3 million people, it shows that there are significant differences between the intuitions of different geographical groups: “Western” people have a preference for sparing the fittest; “Eastern” people prefer to spare the law-abiding (bad news for jaywalkers); while “Southern” people (Latin Americans, among others) are inclined to spare women and those of higher status.

This shows how difficult the task of programming ethical rules into machines will be. But, to a philosopher, there is nothing revelatory about the idea that people from different cultures have different views about what is right or fair. The interesting (and, not surprisingly, unanswered) question is whether ethical preferences come from objective principles or from culture – or, rather, the extent to which culture determines individuals’ perception of moral principles. Yet while these questions are critical to the authors’ claims, they are barely discussed in the paper.

This points to the increasing disconnect between the cultures of philosophy and technology – particularly among those involved in designing machine-learning algorithms. Few people in industry care what philosophers have to say. We can talk about what truth is or is not, and political disinformation will continue. We can talk endlessly about what makes human intelligence unique, and the media will continue to claim that programmers have finally developed a machine able to think in a way that actually resembles human intelligence. And we can say over and over again that there isn’t really a good answer to the trolley problem, but self-driving cars will appear on the roads regardless.

Industry no doubt sees enormous surveys probing moral consensus, such as the one in the Nature paper, as the key to programming driverless cars. But what if consensus isn’t the right way to go at all? What if machinery should be programmed to strictly adhere to, say, utilitarian principles?
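To make the contrast concrete, a strictly utilitarian controller would reduce every trolley-style dilemma to a count of lives, discarding exactly the demographic preferences (age, status, law-abidingness) over which the Moral Machine respondents disagreed. A minimal sketch in Python, with an invented dilemma; no real vehicle decides this way:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """One available action and the people it would harm."""
    action: str
    casualties: int
    ages: list = field(default_factory=list)  # ignored by the utilitarian rule

def utilitarian_choice(outcomes):
    """Strict utilitarianism as a decision rule: minimise total casualties,
    with no weighting for age, gender, status or conduct."""
    return min(outcomes, key=lambda o: o.casualties)

# The classic set-up: stay on course and hit five, or swerve and hit one.
dilemma = [
    Outcome("stay on course", casualties=5, ages=[8, 30, 32, 45, 60]),
    Outcome("swerve", casualties=1, ages=[25]),
]

# Prints "swerve" no matter who the five and the one are: the cultural
# sensitivities surveyed by the Moral Machine are simply never consulted.
print(utilitarian_choice(dilemma).action)
```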

It may be that a good start to a solution lies in education. A slightly less recent Nature article suggests that the “philosophy” part of the doctorate of philosophy should be beefed up. The article was written by the director of a promising initiative at Johns Hopkins University that aims to promote “rigour, responsibility and reproducibility” in scientific practice. Interdisciplinary understanding, with a particular focus on philosophy, may help to improve these three Rs, in so far as researchers who are trained to question the foundations of the scientific method are likely to reason more robustly.

Moreover, their sensitivity to the pressing ethical questions that emerging technologies pose will be much more acute – even if the answers remain difficult to determine.

Jonathan R. Goodman is a fellow of the Institute of Global Health Innovation at Imperial College London, and a doctoral student at the University of Cambridge’s Centre for Human Evolutionary Studies.


‘Swimming in too much information’: The naysayer’s view

Man holding sign
Source: Getty

The “tech tsunami” that is already engulfing lower-skilled jobs has not triggered a mass ascent to the higher ground of advanced education, according to social theorist Anthony Elliott. And the idea that continuous retraining could help people ride the wave of technological disruption is “wildly optimistic”, he adds.

In his new book, Elliott – dean of external engagement, professor of sociology and executive director of the Hawke EU Jean Monnet Centre of Excellence and Network at the University of South Australia – argues that while apocalyptic fears of a future dominated by cyborgs and killer robots are missing the point, the prospect of mass unemployment is very real.

According to Elliott, the AI revolution is already upon us. It is acting alongside associated trends – including accelerated automation, big data, 3D printing, cloud computing, Industry 4.0 and the “internet of everything” – to reshape everyday life in pervasive but often “mundane” ways.

Claims that education can keep pace with all this change are ill-conceived, he writes. “The automation of many lower-skilled jobs has not necessarily produced more opportunities for advancing education levels or retraining, and recent evidence indicates that the idea of continuous retraining is optimistic at best.”

In support of that view, he cites the fact that unemployment in European OECD countries rose from 2.6 per cent in 1970 to nearly 11 per cent in the mid-1990s, despite continuous efforts to retrain those affected by the introduction of first-generation robots on production lines and elsewhere. He also notes that demand for skilled workers has declined since 2000 even as the supply has increased, resulting in more highly skilled workers displacing less skilled workers further down the occupational ladder and worsening the plight of the unskilled.

Analyses in the UK, US, Japan and Australia have all concluded that 40 to 50 per cent of jobs will disappear within the next 15 to 30 years, Elliott tells Times Higher Education. “If those figures are only half right, trying to reskill people to keep up with that level of change ain’t going to cut it,” he says.

Lifelong learning has value “in and of itself”, Elliott concedes, because “it’s an individual, social and public good to have an informed citizenry”. But it should not be viewed as “an insurance policy against this tech tsunami”.

His book criticises the notion that workers dislodged from largely routine and predictable forms of employment can reinvent themselves by acquiring digital skills in “a kind of relentless self-fashioning…to update talents for the jobs of the moment. The truth, at least for millions of average workers around the globe, is that technology often results in significant deskilling.”

For Elliott, the dogma of continuous retraining “smacks of a particularly Western individualist orientation – faster, quicker, leaner, more self-actualising. It’s very much a privatisation of a public problem [where] individuals lift themselves up by their own bootstraps and get on with the work of being more economically productive.” But, in his view, people need help in navigating the brave new digital world at a more basic level.


“We’ve entered an age of big data: we’re swimming in too much information,” he says. “It’s people that have to do the work to integrate all this data into their lives, their work structures, the way they do things. That’s where we need the public discussion about AI.”

John Ross

POSTSCRIPT:

Print headline: HAL, et al
