Artificial intelligence systems trained on internet data could cause societal harm
Developers urgently need to build and deploy systems that address AI ethics and bias. Salesforce's Kathy Baxter explains what's wrong and how developers can fix it.
AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, principal architect of Ethical AI Practice at Salesforce. In an interview with ZDNet, Baxter emphasized the need for diverse representation in data sets and user research to ensure fair, unbiased AI systems. She also highlighted the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy.