Humanities and STEM
September 3, 2019

Long Reads

Mind Over STEM

 

With universities around the world cutting liberal-arts programs and even eliminating entire majors such as history, there is every reason to worry about the fate of the humanities. In an era of deepening technological determinism, we are going to need these disciplines now more than ever.

WELLINGTON – Among the expected casualties of the digital revolution is the study of the humanities. Although a focus on human experience in all its diversity has long been a core mission of universities, the liberal arts are increasingly being dismissed as irrelevant to the digital future, or at least to the digital economy. Who would want to invest their time and money studying subjects that are unlikely to “pay off,” either for students or university budgets?

This reasoning, and the anxieties it has provoked, have led to a widespread flight from the humanities, not least in my home country, New Zealand, where higher education suffers from chronic underinvestment by the state. Under the previous government, the ruling National Party placed a big bet on STEM (science, technology, engineering, mathematics) with the goal of boosting economic growth and preparing graduates for work in the digital economy. The Universities of Auckland and Otago followed suit, downsizing their humanities offerings. Similar trends are apparent in the United States, where some cash-strapped colleges and universities have stopped offering majors such as history altogether.

The problem for countries like New Zealand is that they lack the Princetons and Harvards with the wealth to protect the humanities against arguments born of a cold but practical market-driven logic. We are unlikely to enjoy the largesse recently bestowed on University of Oxford ethicists and social scientists by billionaire Stephen Schwarzman of Blackstone.

But this makes such countries an ideal environment for judging whether the humanities can be made to stand on their own. The test subject is my own institution, Victoria University of Wellington, which has decided to double down on the humanities. For that bet to pay off, we will have to make the case for why the study of history, literature, philosophy, and the social sciences will remain essential in an age of iPhones, Twitter, and artificial intelligence (AI). We will have to make the humanities stick for the post-millennial generation.

THE MESSAGE IS THE MEDIUM

The old way of selling the humanities no longer works. The humane disciplines speak directly to our innate curiosity about the infinite variability of human experience, and it was that promise of a deeper understanding that attracted me to them in the 1980s. But I didn’t have a smartphone and instant, round-the-clock access to Wikipedia, where one can acquire a superficial understanding of any topic in a matter of minutes.

Students today suspect that they can have it both ways. Rather than committing the bulk of their resources to a Religious Studies major with a focus on Mayan spirituality, they can pursue a marketable STEM discipline and indulge other intellectual fancies on the side. They would gain a much deeper understanding by actually studying the topic, of course, but that seems like a luxury that few can afford.

A fast-changing world demands a more robust defense of the humanities. Rather than tapping into students’ curiosity, which the Internet now does in spades, proponents of the humane disciplines must show that an understanding – not just superficial knowledge – of the lives of humans in different places and at different times is necessary for confronting the future. Those with a simplistic or incomplete view of the human condition – and particularly of humankind’s past experiences of technological change – are the last people we should be listening to when it comes to the digital disruption.

Consider that now-ubiquitous topic, “the future of work,” and the ongoing debate about whether human labor will even be needed in an age of AI and full-scale automation. STEM disciplines offer no useful answers to such questions, for the same reason that a comprehensive anatomical understanding of the brain does not tell us what we need to know about the mind. To maintain a society in which people have the opportunity and wherewithal to pursue meaning and fulfillment in their lives, we will need the humanities.

THE STARRY-EYED SCIENCE

Economics is known as the “dismal science” because of its pessimistic assumptions about human nature. Yet economic orthodoxy tends to be rather upbeat about the future of work. Most economists argue that the current disruption is nothing new. Every period of significant technological progress has caused anxiety about labor displacement, yet work has always survived.

In a 1930 essay, John Maynard Keynes provided a synopsis of how “technological unemployment” works. The Industrial Revolution brought the power loom, and handloom weavers found themselves out of a job. But the increased productivity from the new technology generated economic growth, and thus more money with which to employ the weavers’ children in even better jobs. We now look back on handloom weaving as a form of drudgery. Technological unemployment, economists tell us, is a lot like adolescence: an awkward transitional state that we are happy to have endured in order to reach adulthood.

Nowadays, some leading economics commentators have repurposed Keynes’s message for the digital revolution. The widely read journalist James Surowiecki offers sympathy for the victims of digital technological unemployment. But he also thinks that those most worried about automation should “chill.” Just as a pilot should trust her instruments when she becomes disoriented, so we should trust the tools of economics to see us through periods of churn and disorientation. We should have confidence that there will always be work for humans to do.

Surowiecki reminds us that in the 1970s, the ATM seemed to sound the death knell for many retail banking jobs. But while some bank tellers did lose their jobs, overall employment in the sector actually increased, because banks were able to open even more branches. Moreover, the new jobs in personal banking turned out to be much more stimulating than simply counting out bank notes and depositing checks. The new technology had generated growth, and with it demand for people to perform new tasks.

As MIT’s David Autor notes, we tend to focus so much on the jobs that are being destroyed that we fail to imagine the jobs that will be created. Historically, the new, previously unimaginable jobs have paid more and required more training and knowledge. The people patching computer networks today earn far more than the people who once fixed power looms, so it stands to reason that the jobs of the AI and automation age will follow the same logic.

Even in cases where automation is expected to have a far-reaching impact, economists remain consummately reassuring. In Machine, Platform, Crowd: Harnessing Our Digital Future, Andrew McAfee and Erik Brynjolfsson of MIT present medical diagnosis as a task that is ideally suited to the pattern-recognition powers of AI. “If the world’s best diagnostician in most specialties – radiology, pathology, oncology, and so on – is not already digital,” they write, “it soon will be.” But, they hasten to add, this is not bad news for human doctors, because, “Most patients … don’t want to get their diagnosis from a machine.” Presumably, then, all those radiologists, pathologists, and oncologists will survive and prosper as long as they adopt the new technology.

In other words, as Surowiecki would say, medical professionals just need to “chill,” and then consider sparing a dime for the laggards who decided to study journalism instead of anatomy or robotics. But while it is nice to be reassured by some of our smartest, most informed economists, one can question the usefulness of this optimism. As in the case of climate change, sometimes we need to hear more of the potential bad news from those who know best, rather than just feel-good platitudes.

Some economists seem to understand this. Diane Coyle of Cambridge University has issued a timely warning about the subordination of political and even moral decision-making to utility-maximizing algorithms. Similarly, Robert Skidelsky of Warwick University worries that AI in the hands of a wealthy few could usher in a new form of serfdom. And Allianz’s Mohamed A. El-Erian points out that the economics discipline has become too narrow-minded to grapple with the full implications of the current stage of development. (The same, it should be said, applies to academic philosophy.)

THE REAL LUDDITES

Because future-of-work analyses rely on inductive arguments based on evidence from the past, if one has not gotten the past right, one’s prognostications about the future are of little use. The standard historical account of technological disruption usually starts with the Luddites, a term the Oxford English Dictionary defines as “derogatory: A person opposed to new technology or ways of working.” In 1779, a weaver named Ned Ludd supposedly destroyed two knitting frames, forever becoming a symbol of revolt against new technologies. Today, the Luddites are usually depicted as if they were hooligans in a Batman movie, suddenly showing up to smash stuff, but always losing in the end. (They, or at least their kids, eventually get with the program.)

But as the twentieth-century labor historian E.P. Thompson showed in his seminal study The Making of the English Working Class, those whom we now know as the Luddites weren’t actually traumatized technophobes. They were protesting the raw deal they got following the adoption of industrial-era inventions. As an historical analogy, one could describe Amazon workers who have protested at the company’s “fulfillment centers” as modern-day Luddites. If they were to smash one of Amazon’s signature “Kiva” warehouse robots, it wouldn’t be because they want to banish it from existence. On the contrary, they are probably happy to have the robot do most of the heavy lifting.

The reason today’s Amazon “Luddites” feel unfulfilled at the fulfillment centers is that they are working for one of the wealthiest companies in the world and making a pittance. Here, inductive reasoning reveals another historical pattern: when technological breakthroughs start to yield real returns, the gains are usually hoarded by a fortunate few. As the journalist Sarah Kessler’s book Gigged: The Gig Economy, the End of the Job and the Future of Work demonstrates, the “haves” of the digital economy have mastered the art of using “gig work” to avoid sharing the rewards of innovation with the have-nots.

The economists are right about one thing, then. Automation doesn’t destroy jobs in the way that the Death Star destroys planets in the Star Wars films. Humans are clever and flexible, and have proven capable of working for less and tolerating worse conditions than one might expect (economists refer to this earnings floor as the “reservation wage”). In fact, that flexibility seems to be one advantage of human workers in the digital age: automation technologies cannot simply lower their own installation or operating costs when greater profits are demanded, whereas human workers can accept lower pay. This is not to suggest that humans can work for nothing, but history has shown that they can be made to work for very little indeed.

To most economists and technologists, modern labor-market conditions and the displacement of workers are simply the price of “progress.” When humans are replaced by machines in one area, they will seek new forms of value creation. And once a human worker demonstrates the economic value of some new activity, she creates an incentive for someone to design a machine that can perform the same function more efficiently, and at lower cost.

This process is especially evident nowadays, because the sheer range of applications for digital technologies is broader than that of, say, the steam engine. In 1821, Charles Babbage, a pioneer of what would become the computer age, lamented that calculations performed by hand – many of them inaccurate – had not been “executed by steam.” In the event, the jobs of those who calculated by hand remained safe until the digital revolution, and even the out-of-work handloom weavers found jobs as domestic servants and merchant sailors – roles that could never be taken over by water- or steam-powered machines.

ANATOMICAL TURKS

The situation today is different. Advances in AI and machine learning are designed to automate tasks performed by the human mind, making it much harder to identify any job that isn’t vulnerable to technological disruption. Like their blue-collar counterparts, many white-collar workers may have no choice but to work for less amid deteriorating working conditions. And this dynamic is especially strong in the gig economy: the more Uber drivers protest about their pay and working conditions, the greater the incentive to automate them out of existence.

What about those doctors and radiologists who, McAfee and Brynjolfsson tell us, will be fine? As it happens, they, too, might join the gig economy. One reason why radiologists, pathologists, and oncologists are so well paid is that they have undergone many years of education. Patients receiving a cancer diagnosis want – and will seek out – the most knowledgeable oncologist.

But what happens when people start believing that diagnostic AI is better informed than even the best-trained human doctor? In The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, computer scientist Pedro Domingos hypothesizes that a “CanceRx” machine-learning algorithm of this caliber will soon be possible. Some will counter that even CanceRx would need to have its diagnoses affirmed by a knowledgeable human oncologist. But would it?

Consider the lessons of automation in aviation. As the journalist Nicholas Carr points out, airplanes sometimes crash precisely because human pilots seek to correct what they mistakenly perceive to be autopilot errors. There have been heroic human interventions, to be sure. No autopilot could have safely landed a passenger airliner in the Hudson River, as Chesley “Sully” Sullenberger did in 2009. Yet, according to Carr, autopilot technology is quickly improving at the same time that pilots are being deskilled. It is easy to imagine a day when human pilots’ sole job is to soothe passengers with inflight announcements while an AI flies the plane.

If this is where medicine is heading, patients are not going to want rusty human oncologists overriding diagnoses issued by better-informed machines. The job for humans working with CanceRx, then, will be to offer empathy to the patient. Nobody will want them busying their pretty human heads with the actual oncology. CanceRx will be mindless, and it will make mistakes on occasion. But its errors will likely be far rarer than the misdiagnoses made by human doctors.

Needless to say, once an AI has taken over the cognitive functions of medicine, the leftover tasks – providing a human touch – will command less prestige and remuneration. Future oncologists’ bedside manner will be useful for rounding out some of the rough edges of super-efficient digital technologies. But it won’t be worth much in economic terms. To adapt Karl Marx’s famous saying about a future in which people “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner,” in the gig economy, one can imagine people logging hours on TaskRabbit in the morning, comforting patients about their leukemia diagnoses in the afternoon, and driving for Uber in the evening.

BACK TO THE HUMAN CONDITION

So, which is correct: the dismal forecast I’ve just offered, or the more optimistic one from economists telling us to “chill”? In truth, I don’t know. One should never be too confident about any vision of the future. What matters is how we go about confronting uncertainty. Sometimes it’s nice to hear that we should just relax. But sometimes we should be prompted to think.

I am fairly confident that my house won’t burn down. But that outcome would be so disastrous that I am willing to take out insurance against it. How much would it cost to insure against a future of universal gig work? At this stage, it costs us nothing other than the imaginative labor required to think about the issue. If today’s university students find that kind of labor fun, I would suggest that they see more of what the humanities have to offer.

The humanities are essential in times of change. They teach us that human experience is subject to radical contingency, and that we should be wary of complacent and overly confident predictions about how people will adapt to the economy of the future. They also teach us to reject technological determinism. While the STEM fields can tell us what humans will be able to do in the future, they tell us nothing about why they should do it or, more grandly but no less germanely, what it will mean to be human.

Will we be passive recipients of new technologies, simply letting the tides of change carry us into the future? Or will we be active agents, demanding that new technologies and business models make some allowance for the human condition and the values we hold dear? If there is one thing of which we can be certain, it is that we will need people who know how to think outside the STEM box.
