Kathy Baxter: Here's what could go wrong if we don't take AI safety seriously
The high price of ignoring AI safety: six reasons you should care about fixing AI now.
The development of artificial intelligence is no longer the domain of a single company or organization. Instead, it's a shared endeavor that involves government entities, private industries, nonprofits, and academic institutions. Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce, says that this collective approach is vital because artificial intelligence will have a global impact. It's incumbent upon all stakeholders to collaborate and ensure AI is developed and used as safely as possible.
"We are already seeing today many harmful outcomes [of AI]. They will only continue if we don't put safeguards in place."
Some of the negative consequences of getting AI safety wrong are:
Racial profiling and wrongful arrests: Misidentifications due to inaccurate facial recognition systems can lead to innocent individuals, particularly people of color, being wrongfully arrested. AI systems used in criminal justice can make faulty recommendations, impacting crucial decisions such as parole outcomes.
Exploitation by malicious actors: AI systems that lack robust safety measures can be exploited by malicious actors for harmful purposes. This could range from cybercriminals using AI to commit fraud or launch cyberattacks to more extreme scenarios where AI weapons are used in warfare or terrorism.
Disinformation and manipulation of information: AI systems, especially those that leverage natural language processing and deep learning, can be used to generate deepfakes, fake news, and misleading content at scale. This can exacerbate the spread of disinformation, manipulate public opinion, and undermine trust in institutions. If unchecked, it could pose significant threats to democracy, social cohesion, and international relations. For example, malicious actors might use AI during an election to create and distribute false information about candidates, influencing voters and altering the election outcome. Similarly, deepfakes could be used to create fraudulent videos of public figures, causing confusion and distrust.
Deepfakes: Deepfakes refer to AI-generated images, videos, or audio files that are so realistic that they are hard to distinguish from genuine ones. The term "deepfake" is a portmanteau of "deep learning" (a type of machine learning) and "fake." AI systems can be trained to replicate the likeness and voice of individuals, leading to the creation of realistic but false representations.
Erosion of trust: As deepfakes become more prevalent and convincing, they may lead to a general erosion of trust in digital media. This could make it increasingly difficult for individuals to discern truth from falsehood, undermining trust in journalism, government communications, and other important sources of information.
Existential risk: In the longer term, there is the risk of creating superintelligent AI that surpasses human intelligence in all relevant aspects. If such an AI is not properly aligned with human values, it could pose an existential risk to humanity. The AI might pursue goals that are detrimental to human well-being, and with its superhuman capabilities, it could prevent us from intervening or stopping it. The challenge of ensuring an AI's goals match human values is often called the "alignment problem" in AI safety research.
The creation of the NIST AI Risk Management Framework involved collaboration among more than 240 experts from government, private industry, academia, and nonprofits. Baxter believes this cooperative model should be replicated for future advancements.
Baxter also warns about the consequences of failing to manage AI risks effectively. The misuse of facial recognition and erroneous AI recommendations are just a few examples of the potential pitfalls. The future of AI, she asserts, must be grounded in the here and now, with a strong focus on current safety measures while also preparing for future challenges. This dual focus ensures we continue to advance responsibly and safely.
The repercussions of getting AI safety wrong are not limited to the present. The specter of Artificial General Intelligence looms large on the horizon. While the arrival of AGI, systems smarter than humans, may still be a matter of debate, the risks associated with it underline the importance of not sacrificing present safety for potential future gains.