Artificial intelligence (AI) cannot threaten its human creators because it does not have mental capacity, according to Dona Sarkar, the director of technology for Microsoft Accessibility.
“People think that AI has a mind of its own and that it’s going to take over the planet, it’s going to take over our jobs, and it’s going to kill us,” Sarkar said. “None of these are true. It’s a bunch of zeros and ones and funky math. But mostly, it’s a big, fancy autocomplete.”
On Nov. 7, the Women’s Network at Boston College hosted a panel discussing the role of women in AI. Sarkar was joined on Zoom by Jacqueline Schafer, CEO of Clearbrief; Gloria Felicia, CEO of InsightCircle.ai; Lisa Huang, head of AI Investment Management and Planning at Fidelity; and Mutale Nkonde, CEO of AI for the People.
Schafer said that people with diverse interests should get involved in AI, not just those interested in computer science. She explained that companies often have access to the models but do not understand how AI can help other industries, such as health care, law, or banking.
“One other misconception that I could speak about is that it is too late to get involved with [AI], and it’s moving too fast, and if you are not a computer science genius, you are not important to how it is evolving,” Schafer said.
Sarkar explained that the public tends to blame AI when it is used for harmful purposes, rather than holding the people who created it responsible.
“We can’t remove that accountability from people and say that AI is doing things on its own without human interest, without human interaction, or human interference in the loop because that’s just not true,” Sarkar said. “It is incapable of doing that.”
Sarkar also explained that users must be very specific when prompting AI to do something.
“Treat prompting as you would your intern from last summer,” Sarkar said. “That is it. It’s how you would talk to a real person who you’re asking to get information for you or do something for you. How would you instruct them step by step?”
Big businesses often spread misinformation about the capabilities of AI, especially by suggesting that it needs users’ data to train itself, Felicia said.
“A lot of times, the big corporations, especially in corporate America, said that they thought that AI will always use your data to train data sets,” Felicia said. “That’s not always the case, and you can always train your own model.”
Nkonde added that although AI is helpful, its foundations are rooted in white supremacy. She explained that its initial function was for members of Aryan Nations to find each other and weed out people of other races.
“I noticed in about 2015, when Google Image was in beta, which meant people weren’t using it, it labeled two Black people as gorillas, and that set me down a research rabbit hole that I’m really still in,” Nkonde said.
The panel wrapped up with each panelist sharing what they thought was the most crucial takeaway from their careers in AI. Nkonde said she believes the future of AI development has yet to be determined.
“The future belongs to those who create it,” Nkonde said. “So don’t look to get into something, look to create something and then it will come to you.”