On Nov. 30, 2022, ChatGPT launched for the public, forever altering the academic landscape.
The sudden, widespread accessibility of the platform presented University administrators with a series of unprecedented challenges: keeping up with this unpredictable technology, educating professors about the benefits and drawbacks of these platforms, and determining guidelines for the ethical use of artificial intelligence (AI) in higher education.
It’s been nearly two years since the release of ChatGPT, and Boston College still does not have a University-wide policy that defines acceptable use of AI in the classroom. Professors have complete control over the AI policies they enforce in their courses, whether they ban the technology entirely or allow students to use it freely on all assignments.
Giving professors the agency to embrace or exclude AI in their classrooms is undoubtedly important. But making sure professors are equipped to make and execute that decision is of equal importance.
Professors, and more importantly students, deserve the learning environment they agreed to when they chose to come to BC. That is only possible if the professors teaching their classes are fully equipped to create a fair and thorough classroom experience.
Even without a standardized policy, the University’s academic integrity policy states that any work generated with undocumented or unsanctioned usage of Generative AI (GenAI) platforms is classified as cheating. The constantly evolving nature of AI platforms, however, makes it virtually impossible for professors to determine whether students are using AI on their assignments.
The Center for Teaching Excellence (CTE) provides AI detection resources for professors to use, but the organization has acknowledged that these tools are unreliable and often biased against students who do not speak English as their first language.
With these risks in mind, professors must outline and fully justify their classroom AI policies.
Nirmal Trivedi, assistant director for teaching, learning, and technology at the CTE, believes that transparency is the key to developing sustainable AI policies.
“Explain when it’s really critical not to use it and when it’s really critical to use it,” Trivedi said. “Show [students] some of the ways it can be advantageous to success in the class.”
To develop thoughtful and specific AI policies, professors must have a comprehensive understanding of the platforms their students are engaging with. If they choose to allow AI in their courses, they must fully expose students to its risks, framing those risks as a learning opportunity.
BC Information Technology Services currently offers optional, instructor-led technology training courses for faculty. These opt-in courses cover a variety of topics ranging from “Boosting Your Efficiency with Generative AI” to “Advanced Prompt Engineering for Generative AI.” The CTE also offers extensive AI navigation resources to guide professors in integrating technology into teaching.
Though BC does not require professors to enroll in these courses, faculty have been overwhelmingly receptive to the resources.
“Every time we offer something GenAI-related, it’s packed,” Trivedi said. “The faculty can’t get enough of it.”
BC’s Center for Digital Innovation in Learning has established faculty working groups that introduce small cohorts of faculty members to developments in AI and help them integrate AI-focused projects into their classrooms.
“Staying abreast of good and bad developments takes continuous updates and ideally being part of a group like this one,” said Catherine Conahan, an assistant professor at the Connell School of Nursing and a participant in the 2023-24 AI working group.
BC must work to meet and exceed this demand, prioritizing more opportunities for professors to learn about AI. This will not only boost professors’ confidence in using AI platforms, but it will also help them develop more transparent classroom policies.
As GenAI further integrates itself into the academic landscape, the University has a vital responsibility to uphold: equipping professors with the resources and information they need to develop AI policies that offer learning opportunities for students.
Like the internet, calculators, and the myriad other technologies that have seismically shifted our approaches to education, AI may be a powerful tool for good. It may very well positively affect students’ education. Yet BC, as a leader in higher education, must ensure that its implementation is just that: positive. Without a careful and considerate plan, the University may undermine the very values and competence that education should impart.