In 1993, Microsoft co-founder and tech revolutionary Bill Gates was asked about the World Wide Web. He reportedly responded, “The internet? We are not interested in it.”
Human beings, even uber-successful entrepreneurs like Gates, have a tendency to first reject and denigrate new technologies that eventually yield tremendous benefits. Artificial intelligence (AI) is one such powerful technology. Just like the internet, computers, television, and telephones before it, AI has untapped potential to change our lives for the better, no matter how much we resist it at first.
So if Boston College wants students to be prepared for an ever-shifting job market and world, it needs to implement AI in its curriculum and not reject the technology outright.
While my intention is to encourage BC to implement AI education, I do not wish to diminish or downplay the threats that AI may pose to society. The disruption of the job market could pale in comparison to other potential harms of AI. From AI weaponization to political manipulation, we are bound to face serious ethical dilemmas. But by creating new classes focused on the ethical and societal implications of AI and integrating AI programs into current classes, BC has the opportunity to prepare students for the issues looming in our future.
To learn more about the implications of AI in both an educational and larger societal setting, I talked with Brian Smith of the Lynch School of Education and Human Development, who is an expert in computational learning environments and human-computer interaction. I asked him about BC faculty’s current attitude toward ChatGPT and other AI technology.
In simple terms, ChatGPT is a new AI chatbot that OpenAI made freely accessible online. It has recently made waves in the news because it can produce original responses to essentially any question a user asks. The weirdest part? Oftentimes, it writes in a tone that is very difficult to distinguish from a human’s. It learned to communicate this way through “training”: analyzing patterns in vast amounts of human-written text. This has sparked concerns about plagiarism and academic integrity, as ChatGPT can find homework answers and complete writing assignments with ease.
Smith said that many faculty members feel threatened by the serious risks ChatGPT and equivalent programs pose to academic integrity. Still, he said that many faculty members understand AI is here to stay, and they have more of a “what do we do with it?” mentality. Due to the easy accessibility of AI tools like ChatGPT, I argue that this “what do we do with it?” question is a crucial one.
In reality, we can already do an incredible amount with AI.
Smith pointed out that he had just been using Grammarly to proofread his writing. Just as ChatGPT uses AI to generate responses, Grammarly (which has been popular for years) uses AI to analyze a user’s writing and make grammatical suggestions based on vast amounts of language data. Smith also pointed out that AI algorithms are built into our phones and apps more than we realize. Tech tools ranging from iMessage suggestions to Spotify recommendations have been powered by AI for years, and they continue to serve practical uses in our daily lives. By the end of my interview, I realized the app I used to transcribe our meeting—Otter.ai—is powered by AI.
The point is that AI already surrounds us. ChatGPT is just another step forward for AI that shows us how useful it can be.
So, how can AI help improve our education? Researchers have already outlined some of the most valuable applications of AI in education that can be implemented in the near future.
The first thing AI can do to improve the education of millions is apply natural language processing (NLP) to foreign language practice. The potential for AI in this kind of language learning is tremendous. Text-to-speech chatbots built on NLP can engage learners in conversation, offering speaking practice in languages where it may otherwise be difficult to find a partner. Conversing with a bot also removes the risk of messing up and embarrassing oneself, which deters many language-learning beginners.
AI can also objectively detect a learner’s strengths and weaknesses and develop personalized learning strategies to help them improve. For example, when a student is studying calculus, AI can determine that the student struggles with integrals but is good with derivatives. From there, it can assign targeted problems and explanations to help them learn integrals while spending less time on derivatives.
While a bit more controversial, AI can also help detect emotions when given access to brain signals (a little dystopian, I know). From here, AI can provide personalized strategies and instruction based on how a student reacts to certain learning stimuli, potentially offering students more efficient time management and study strategies.
Those are just a few of the ways AI could benefit students, but teachers can benefit too: AI can grade papers and tests equitably and efficiently.
Because BC heavily emphasizes a holistic education that incorporates theology and philosophy, educating students on both the benefits and ethical drawbacks of AI can foster great discussions and debate.
At this point, it could be easy to adopt the 1993 Bill Gates mentality of “we are not interested in it.” But if BC wants to graduate students ready for the future, it would do them a disservice not to incorporate more AI into our education. BC can use AI to expand language learning, for example. It can also introduce more AI-related coursework in the philosophy and law departments to explore difficult ethical dilemmas.
In 20 years, the “AI question” will not be whether or not to use the technology. Rather, it will center on how we can use it to benefit our society while mitigating its damaging potential. Starting that conversation sooner rather than later is how we can best serve students and society.
Correction (3/20/23, 8:47 p.m.): This article has been updated to accurately reflect that AI language learning assistance is termed “natural language processing,” not “neuro-linguistic programming.”