From the Supreme Court’s decision striking down race-based affirmative action to declining enrollment rates nationwide, universities like Boston College confronted several major upheavals this year.
Perhaps the last thing universities needed was a generational, seismic technological disruption that many said could change the future of education for decades to come.
Yet that is exactly what they got: generative artificial intelligence (AI) emerged as a societal talking point. The technology’s most widely known form is ChatGPT, a generative AI chatbot released in November 2022 and known for its ability to produce information and analysis in response to nearly any prompt. ChatGPT brought a host of concerns to college campuses, with many administrators and professors expressing uncertainty, strained optimism, or outright anxiety about the new technology.
For every headline discussing the pitfalls of AI tools, there is a half-wondrous, half-foreboding story detailing how ChatGPT can ace a Harvard-level essay.
So with many people left in the dark about the significance of this ambiguous technology, the question becomes: How has academia handled its collective anxiety about ChatGPT? And how should it react as AI’s capabilities change?
BC’s official literature regarding AI encourages professors to address the subject from the get-go.
The Center for Teaching Excellence’s website offers two sample statements for professors to insert into their syllabuses. There’s the strict, “Any work submitted using AI tools will be treated as though it was plagiarized,” juxtaposed with the more lenient, “If any AI-generated content is used for your assignments, you must clearly indicate what work is yours and what part is generated by the AI. In such cases, no more than 10% of the student work should be generated by AI.”
Instead of simply banning all AI use, the center recommends that professors engage with the technology and its limits.
“In the long run, instructional responses that engage the technology and its limits — rather than seek to simply ban them — promise to be more effective ways to meet learning goals across disciplines,” the University’s website reads.
But student use of AI has led some professors to question whether ChatGPT can ever truly be at home in the classroom. These professors have opted to move away from take-home papers and toward more in-class exams. Gerardo Blanco, the academic director of the Center for International Higher Education, said AI disrupts how professors approach assessing their students.
“How we evaluate our kids is really, really subjective at times, and we really don’t know the right way forward—and [AI] is quite a disruption,” Blanco said. “I think by definition, exams are not authentic assessments. In the sense that when you graduate and go to your jobs, you’re not going to be taking an exam, right? You’re going to have to demonstrate what you know, through your ability to produce things.”
But across BC, and universities as a whole, opinions on how exams should be adapted differ depending on where you look, according to Chris Glass, a professor of the practice in the department of educational leadership and higher education at BC.
“[Academia] gets a lot of criticism for not being innovative,” Glass said. “That said, there are parts of the university that are slow and bureaucratic. Universities are large, complex organizations, and they’re not single entities.”
Glass said that approaches to AI are varied within different facets of BC.
“We’re not really a university—we’re many different types of organizations,” Glass said. “What parts of our institution are going to be able to respond quickly to this phenomenon, and what parts of our institution will respond slowly but in ways that are really about kind of grounding in values that are really important to a liberal arts education?”
And, crucially, most people don’t want to repeat a historical error by resisting the changes new technologies bring. Nobody wants to emulate the teachers in articles from the 1970s who declared with misplaced confidence that “the calculator will never have a place in my classroom!”
Blanco said he is not too concerned about such attitudes taking hold in academia.
“Higher education institutions have been resilient because they tend to resist change,” Blanco said. “But when confronted with the reality that change is the only option, we usually figure out a way forward.”
Ronnie Sadka, the senior associate dean for faculty at the Carroll School of Management, said he does not see BC’s faculty as resistant to innovations like ChatGPT.
“I don’t necessarily think of faculty as this kind of old school—that there’s no innovation,” Sadka said. “On the contrary, many of our faculty have made significant impact on their fields of interest, which manifests their ability to innovate and think outside of the box.”
But opinions about AI’s role in education vary among BC faculty members, with some expressing doubt that it could ever have a place in the classroom. Betty Lai, an associate professor in counseling psychology at BC, said there are psychological reasons why academics tend to feel anxious about new technologies.
“One of the things that we know in psychology is that we each develop what we call schemas or ways to understand and organize the world,” Lai said. “And something like AI really disrupts our existing schemas, our ways of understanding how the world functions, and that’s why it can be really challenging for somebody who’s a little bit older.”
Lai pointed to the generational divide that dominates the technological debates of the AI age. These schemas explain the stereotypes about young people harnessing new technology better than older people, she said.
These schemas also influence young people’s understanding of the internet. Younger generations have accepted technological innovations as natural, Lai said, while older people have struggled to adapt, a friction that, in this case, breeds anxiety about AI.
One of Lai’s areas of expertise is how young minds adapt and change in the wake of massive ecological disasters, often caused by climate change. She said there may be a legitimate analogy between that process and how adults’ minds adapt to a change like AI.
“When you’re younger, you’re used to the world constantly being disrupted and how you understand the world,” Lai said. “So, for example, when you’re very little, you learn to walk on flat ground, but all of a sudden, you encounter a big challenge like stairs, and you have to rethink how movement works.”
As for professors uncertain about AI, Lai said these technological changes seriously shake up their existing schemas.
“I think that that’s where the anxiety comes from—when you have a schema and a sense of how the world works, but that’s been disrupted because now you have to rebuild what the connections are and figure out and understand where those connections should be,” Lai said.
As Lai said, society’s thinking needs re-evaluation because AI can work in ways nobody anticipated. AI’s capabilities go beyond what some people can even imagine, Glass said, because it draws on a vast wealth of information.
“I can point it to any specific essay, improve this business logo, and it can draw on the right knowledge,” Glass said. “[AI] has watched every video, read every book.”
Now that software holding such an unthinkable amount of knowledge is available on every student’s computer, can any class be taught the same way? Those who do not embrace AI often fear they will be left behind, Glass said.
“AI will not replace you, but somebody who knows how to use it will,” Glass said, quoting what he described as a popular mantra.
Amid all of AI’s ambiguity, Sadka said there is hope that if any community can adapt to the technology well, it is academia. He said he sees a bright future ahead.
“Change is a very hard thing,” Sadka said. “It’s hard in any organization. But the type of people we have here to begin with are the best.”