A sign points the way to the Harvard College Admissions Visitors Center at Harvard University in Cambridge, Massachusetts, U.S., July 6, 2023. (REUTERS)

Will Oxford and Cambridge Help Harvard Combat ChatGPT?

Artificial intelligence (AI) has the potential not merely to disrupt higher education but to revolutionize it. Intelligent machines have already made significant inroads into the field. AI has shown it can breeze through standardized tests such as the GMAT and GRE, the gatekeepers of graduate school admission. It has earned an impressive 3.34 GPA in a Harvard freshman course and a respectable B on the final exam of a typical core MBA course at the Wharton Business School.

What can be done to prevent a future in which AI institutionalizes cheating and strips education of all real content? That question is provoking anxious debate in academia, not least in the United States, a country that has long set the pace in both higher education and technology but is losing confidence in its ability to combine equity with excellence. As the return to campus approaches, the Washington Post warns of “chaos” and “turmoil” this fall. A second, equally weighty question should be added to the discussion: What does the ease with which machines can perform so many higher-education tasks as well as humans tell us about the shortcomings of the current model of education?

One solution is to ban students from using artificial intelligence altogether. Sciences Po in Paris and RV University in Bangalore have taken this hard line. But is it realistic to ban a technology that is fast becoming ubiquitous? And is it good preparation for life after college to bar students from a tool they will later rely on at work? Would-be prohibitionists risk repeating the mistake of Socrates, who in Plato’s Phaedrus opposed writing things down on the grounds that it would weaken memory and foster the appearance of wisdom rather than true wisdom.

A more realistic solution is to allow students to use AI, but only responsibly: use it to gather information, organize notes or check spelling and facts; don’t use it to write essays or take tests. But this raises practical questions about where to draw the line. How do you know whether students have used it only to organize notes (or check facts) rather than to write their essays? And are you really doing research if a bot does all the legwork and you merely shape the material into an essay?

The “use it responsibly” approach points toward an academic future that is a cross between an arms race and a cat-and-mouse game. The arms race consists of tech companies developing ever more sophisticated cheating apps while other tech companies develop ever more sophisticated apps to uncover cheating. The cat-and-mouse game consists of professors trying to detect illicit uses of AI and students trying to outwit them.

Neither approach seems to be working, particularly when it comes to detecting cheating, let alone eliminating it. OpenAI, the maker of ChatGPT, unveiled an app this January that was supposed to identify AI-generated content, then quietly scrapped it because of its “low level of accuracy.” Another company, Turnitin.com, has found that detection bots often flag human writing as AI-generated. Texas A&M professor Jared Mumm used ChatGPT to check whether his students had used the system to write their assignments. The bot claimed authorship, and the professor held up his students’ diplomas until they produced Google Docs timestamps showing they had actually done the writing themselves. ChatGPT, it turns out, is overzealous in claiming authorship.

So what can be done to stave off educational Armageddon? The best answer lies not in fine-tuning the machines – the solution to the problems of technology rarely lies in more technology – but in embracing a method of teaching that goes back to Plato and Socrates and has been refined at Oxford and Cambridge over the past 150 years: the tutorial. Call it the Oxbridge solution.

At Oxbridge, students meet their tutors once a week, individually or in pairs (rarely in threes). The tutor sets them an essay question and gives them a reading list. Students do the required reading on their own, write their essays and either send them to their tutors in advance (the preferred method in the email era) or read them aloud (the old-fashioned way). Tutors then probe the essays for weaknesses. What did you mean when you said that? What about X, Y or Z? Why didn’t you consider Professor Snodgrass’s views? (Or, alternatively, if the student leaned too heavily on Snodgrass: Why didn’t you recognize that Snodgrass, though a dear colleague, is a blithering idiot?) The tutorial partner is expected to join the discussion in the same spirit – testing hypotheses, exploring alternative explanations or simply playing with ideas.

The spirit of the tutorial is both gladiatorial and egalitarian. Knowledge is contested. Debate is essential. Authorities are there to be knocked off their pedestals. Tutors are happy to concede arguments to their students if the students get the better of them. “A good tutorial should be a sparring match,” not “a substitute for a lecture,” said Dacre Balsdon, a fellow of Exeter College, Oxford, from 1927 to 1969.

Students’ degrees are determined by high-stakes exams that involve writing essays at speed and under test conditions; these are then marked by a panel of external examiners appointed by the university (Snodgrass, perhaps, among them). Tutors compete to get the best results for their students, and colleges compete for the best collective performance. The burden of exams has recently been lightened – students have been allowed to type rather than write by hand, and dissertations and coursework have been introduced. But AI may have a paradoxical effect, reviving the role of old-fashioned handwritten exams. Sometimes the best way forward is backwards.

It would be hard to imagine a system better designed to expose over-reliance on artificial intelligence. A student who got a chatbot to write the essay verbatim – or who got a bot to do the reading and simply stitched the material into an essay – would be exposed immediately by the cross-examination. The point of an essay is not just to answer a question and earn a grade. It is to start a conversation that probes your understanding of the reading. Skip the reading and you will spend an uncomfortable hour being pummeled by a skilled sparring partner.

Tutorials don’t just expose cheating. They expose the illusion that AI can deliver a real education. Real education is not a matter of assembling facts into plausible-sounding patterns. Nor is it about accumulating marks and collecting credentials. It is an open-ended exploration of ideas, and its reward is entry into a world of learning and debate.

The great Oxford historian-philosopher-archaeologist R.G. Collingwood captured the distinction between real learning and the pseudo-learning produced by artificial intelligence in his 1939 autobiography, much of which concerns historical writing. He dismissed “scissors and paste” history – the mere rearrangement of statements by various authorities – as pointless. A real historian engages in no such nonsense. Instead, he fastens on “something where the answer is hidden” and sets about getting “the answer out, fair or foul.” The aim of the tutorial is to get beyond “scissors and paste” – the world of artificial intelligence – and arrive at answers by wrestling with the literature and talking with other scholars.

The (admittedly complacent) history of Oxford University, published in eight volumes by Oxford University Press, describes tutorials as “the connecting line that attached the older to the younger members.” By binding senior and junior members together, tutorials also add a moral element to education. That moral element is a safeguard against cheating: There is all the difference in the world between cheating an impersonal educational bureaucracy and cheating a tutor you meet in person, in both academic and social settings. But the tutorial is much more than that – “a fitness center for the personality,” as the theater critic Kenneth Tynan put it, or perhaps even “medicine for souls,” as the don Kenneth Leys ventured.

The best tutors act as both role models and moral guardians. They can also become lifelong mentors: opening doors to jobs, serving as sounding boards, offering advice and getting their protégés out of assorted pickles.

The door-opening and pickle-removal point to the tutorial system’s ability to prepare students for later life, not merely to adorn universities. It teaches people the three most important skills required in most high-powered professions: how to make arguments under pressure and illustrate big points with vivid facts; how to absorb mountains of information at short notice; and how to make fine judgments about the plausibility of competing explanations. It also teaches something just as useful outside a career as within it: the ability to learn and think independently – to act as your own teacher.

So the AI revolution could have a salutary effect on US education, where a “scissors and paste” approach has taken hold even at the most elite institutions. American universities emphasize lecturing from on high by the sage on the stage (you must wait until graduate school to develop a closer relationship with these demigods). The transfer of knowledge is tested by routine exams, usually graded by graduate students, or by multiple-choice questions that machines can score.

Every step of this process is open to AI disruption. Lectures can be replaced by better ones available on the internet. Artificial intelligence can write the essays. Tests can be taken by machines and marked by them. The gradual mechanization of the system by elite professors eager to devote as much of their time as possible to research may finally have met its Waterloo in the form of artificial intelligence. The only way forward is to increase the human element in education.

An obvious objection to introducing tutorials in US education is that they are expensive: Tutors must spend at least a dozen hours a week teaching, and student-to-tutor ratios fall as low as two to one. But Ivy League universities make Oxford and Cambridge look poor. They can also afford to waste money on sports facilities and sprawling administration, neither of which has anything to do with education – and one of which arguably harms it.

State universities are harder-pressed for money – especially the regional universities below the state flagships that specialize in providing meat-and-potatoes education to less-gifted students. But even here, artificial intelligence demands the addition of a human touch. Flagship universities could introduce tutorial programs to reward their most talented students. Regional universities should require their professors to adapt their teaching to the age of artificial intelligence – shifting from lectures to seminars and assigning more demanding essays.

American universities became world-beating institutions in the late 1800s and early 1900s by combining the best of the two university systems then on offer: Oxford and Cambridge, with their residential colleges and tutorial systems, and the German universities, with their obsession with research. Harvard and Yale introduced houses that functioned like Oxbridge colleges and experimented with the tutorial system. Johns Hopkins and the University of Chicago put their weight behind research.

The Germanic model eventually won out over the Oxbridge model. Professors were subjected to a publish-or-perish regime and consequently spent most of their time learning more and more about less and less. Universities became more hierarchical and bureaucratic: The ambitious academic’s goal was to become a big-name professor too busy jetting to conferences and cultivating disciples to meet undergraduates. Many Oxbridge academics eyed these pampered creatures with envy – Max Beloff lamented that “we keep our best historians tied down to the routine task of giving individual instruction to those who are unworthy.” But the price of such indulgence was that the pastoral side of universities – mentoring students and shaping their moral lives – was either ignored or left to bureaucrats.

This system not only shortchanged undergraduates, who ended up paying more and more for less and less contact with tenured faculty; it also produced a great deal of useless research. Research may be the gold standard in the hard sciences, where it not only pushes back the frontiers of knowledge but also yields practical applications. But what about literary scholarship, where surely the primary aim is to educate people’s sensibilities rather than to produce yet another article for an obscure academic journal? And what about the proliferation of “research” designed to advance an ideological agenda rather than to extend knowledge or solve practical problems?

The supposed threat of artificial intelligence should be treated as an opportunity to recalibrate American higher education away from the research-centered Teutonic model and back toward the people-centered Oxbridge model – away from the production of research and back to the education of minds. British Prime Minister Harold Macmillan liked to recall the educational philosophy of J.A. Smith, his ancient-philosophy tutor at Balliol before the First World War. A few of his students, Smith said – “hopefully very few” – would become teachers and dons. Otherwise, what they learned at Balliol would be useless except for one thing: “you should be able to tell when a man is talking rot, and that, I think, is the most important, if not the only, purpose of an education.” There is no better technique for teaching people to detect rot than the tutorial system, and there has been no time in history – given the plethora of gray politicians, shady intellectuals and dubious management gurus, all now empowered by AI bots – when the ability to spot rot has mattered more.

More From Bloomberg Opinion:

  • CEOs Must Act, Even When AI Is Oppressive: Adrian Wooldridge
  • Your Future AI Has Multiple Personalities: Parmy Olson
  • Artificial Intelligence Increases Productivity. But Do Employees Benefit?: Nir Kaissar

This column does not necessarily reflect the opinion of the editorial board or of Bloomberg LP and its owners.

Adrian Wooldridge is a global business columnist for Bloomberg Opinion. A former writer at the Economist, he is most recently the author of “The Aristocracy of Talent: How Meritocracy Made the Modern World.”
