While grading final essays at the end of last semester, César Baldelomar, a third-year doctoral student in the Boston College School of Theology and Ministry, noticed that something seemed off about one of his students’ submissions.
“One of my final essays last semester just looked very, very much like a Wikipedia entry,” Baldelomar said. “But it wasn’t, and it had no sources.”
The student who submitted the essay, according to Baldelomar, had generated the assignment using an artificial intelligence (AI) program.
The software in question was ChatGPT, a generative AI model that has sparked widespread public debate and media coverage since its release in November. ChatGPT can converse with users and craft unique text in response to specific prompts.
“Many of my teachers mentioned AI in their syllabi and talked about [it] on the first day of class,” said Jack Thompson, MCAS ’25. “Some said that they chose to change their syllabus because of it.”
According to educational technology expert Brian Smith, the associate dean for research at the Lynch School of Education and Human Development (LSEHD), ChatGPT uses a very large language model to generate text that is “incredibly convincing” at first glance.
“If I put on a strict technical hat, this is one of the most amazing things I’ve seen in a long time,” said Smith, who was also named the LSEHD’s Honorable David S. Nelson Professional Chair in January.
Critics argue that ChatGPT could negatively affect learning by enabling students to submit AI-generated assignments—particularly essays—as their own work.
Smith, on the other hand, said that apprehension about technology’s role in the classroom is not a new phenomenon.
“There’s always been these technologies that when they get to education, people say, ‘Oh, you know, okay, kids can’t use a calculator in class because then they’re not going to learn,’” he said. “‘They won’t know how to calculate the square root by themselves.’ So turns out, nobody wants to know how to do that anyway.”
According to its charter, the mission of OpenAI, the AI research and development company behind ChatGPT, is to ensure that artificial general intelligence is “safe and beneficial” for all of humanity.
“[AI has] always kind of had this fascination where it was, ‘Could you get a computer to emulate human intelligence, human abilities?’” Smith said. “Part of being intelligent is being able to sense the world and making sense of that.”
Smith said he will participate in a Feb. 2 panel, titled “ChatGPT: Implications for Teaching and Learning,” that aims to provide a better understanding of ChatGPT as well as “ideas for teaching with and around it.”
“It’s not a fear thing,” Smith said. “We need to think about ‘Okay, if this thing exists, what do we do with it? How do we incorporate it into, you know, the educational mission?’”
Some BC professors may adjust their course assignments and teaching methods in response to the growing prevalence of ChatGPT, according to Min Hyoung Song, chair of the English department and director of the Asian American studies program.
These changes might include using a mix of assignments that are more difficult for students to plagiarize, Song said, such as podcasts, blue book exams, and more frequent, shorter writing assignments.
Song said that while the English department has decided to treat the use of generative AI as plagiarism, no amount of institutional regulation will likely stop determined students from using it.
“And as the technology improves, as it surely will, it’s going to be harder and harder to detect,” Song said. “I also don’t know if it’s such great pedagogy for professors to be spending all their time trying to police or trying to catch students cheating. I don’t think that’s really what our job is.”
Baldelomar, on the other hand, said the software could motivate students and faculty to think more deeply about the writing process.
“I think a part of it is also incentivizing students to really care about writing,” Baldelomar said. “There has to be an inculcation of that desire to write and to write well and to take it seriously.”
Baldelomar said he also sees how AI could be incorporated in a positive way, helping to improve students’ writing and enhance their overall education.
“I’m not a Luddite—I don’t believe technology is bad,” he said. “Something that I thought about for future assignments is having a student actually do [an essay] through the AI and then write it themselves and compare what they miss out, what they could have learned from it, and vice versa.”
Some of his professors were dismissive of AI software, Thompson said, while others were more concerned.
“I had some teachers say that ChatGPT wouldn’t work because it wouldn’t recreate course material at a college level,” Thompson said.
Song agreed that most instructors would easily be able to detect an AI-generated essay.
“It’s probably not a pervasive problem,” he said. “And right now, the technology is not that great, to be honest. I mean, it can write flawless prose, but the content is usually really mediocre.”
Smith said that much of the recent fear regarding AI—and technological advancement in general—is overblown.
“Have you seen machines?” Smith said. “Has there been a threat of computers or any machine trying to take over humankind in the past that would give us reason to believe that this thing [will]? The toasters didn’t rise up. You know, the Roombas aren’t running around trying to attack us.”
ChatGPT is only growing in popularity, so much so that the system crashed multiple times in December after more than one million people signed up to use the chatbot in just five days. As a result, OpenAI is now restricting the number of people who can use the program at once.
But put into perspective, one million users is not as substantial as it may seem, according to Smith.
“If I worked for Ticketmaster, I would say, ‘Actually, a million people is not enough to crash the system,’” he said. “ChatGPT got nothing on Taylor Swift.”
OpenAI first caught the public’s attention in early 2021 with the release of a generative AI program called DALL-E.
Like ChatGPT, DALL-E produces content—specifically images—based on machine learning models that are trained by an algorithm to recognize certain patterns in a set of data. Once the model has been trained on an initial set, it can then evaluate new data and make predictions based on these learned patterns.
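In rough terms, that train-then-predict cycle can be illustrated in a few lines of code. The toy example below is a hypothetical sketch, not a depiction of OpenAI’s actual systems: it “trains” on a sample sentence by counting which words follow which, then generates new text from those learned patterns.

```python
import random
from collections import defaultdict

# Toy illustration of the train-then-generate pattern described above.
# Real systems like ChatGPT learn from billions of examples; this sketch
# "trains" a tiny bigram model on one sentence, then generates new text.

training_text = "the model learns patterns in data and the model makes predictions"

# Training: record which word tends to follow each word in the data.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# Generation: start from a word and repeatedly sample a successor,
# reproducing patterns seen during training.
word = "the"
output = [word]
for _ in range(8):
    if word not in followers:
        break  # no learned successor for this word
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```

Scaled up by many orders of magnitude, the same basic idea of learning statistical patterns from data and then predicting what comes next underlies models like ChatGPT and DALL-E.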
The real threat of generative AI lies outside the realm of education, as the technology could generate misinformation on a large scale, according to Smith.
Others, like Song, fear that generative AI technology may render many white-collar jobs obsolete.
“I don’t think this technology is being developed so students can plagiarize their papers better,” Song said. “I think this technology is being developed because it’s an opportunity to automate professions which require high salaries.”
At some point, Smith said, humanity will be forced to determine AI’s role in society and learn how to construct a new reality around it.
“There’s a tremendous amount of what people would call technological determinism: ‘Here comes the machine, and it will change the ways that we behave simply by its presence being a machine,’” Smith said. “There are no technological problems. There are only social problems and cultural problems and economic problems. There’s only human problems.”