How Salesforce plans to bring trust to generative AI systems for business
The cloud-based artificial intelligence platform could set an example for implementing safe and private generative AI systems for enterprises.
The generative artificial intelligence revolution has created a paradigm shift in business, helping companies move past conventional barriers to a world of new possibilities. While experts believe new large language models have immense potential, many executives and IT managers remain concerned about the technology’s viability, implementation challenges, security risks, and privacy dangers.
Adam Caplan, a Senior Vice President of Salesforce AI, believes these concerns are addressed in the new Salesforce “AI trust layer,” a cloud-based artificial intelligence platform for business. The AI team follows a strict protocol to ensure the complete confidentiality of all data that comes in, said Caplan in an interview with ZDNet. Unlike models trained on a corpus of internet data, “we adopted robust measures to mitigate errors, inaccuracies, and toxicity in the data,” he said.
The proprietary model uses ‘Prompt Templates’ to provide specific instructions for the AI, creating a more contextualized response. This enhances the model’s effectiveness and frees employees from fiddling with the AI to achieve quality results. “Because the AI is tuned for each customer and behind a security layer, employees can focus on work,” Caplan said.
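The idea behind a prompt template can be sketched in a few lines of code. This is an illustrative example only, not Salesforce’s actual implementation: the template text, field names, and `build_prompt` function are all hypothetical, showing how fixed instructions plus customer-specific fields can yield a contextualized prompt without employees writing instructions by hand.

```python
# Hypothetical prompt template: fixed instructions with per-case fields.
# This is a generic sketch of the pattern, not Salesforce's actual API.
CASE_SUMMARY_TEMPLATE = (
    "You are a customer-service assistant.\n"
    "Summarize the support case below in two sentences, "
    "using a professional tone and no speculation.\n\n"
    "Customer: {customer_name}\n"
    "Product: {product}\n"
    "Case notes: {case_notes}\n"
)

def build_prompt(customer_name: str, product: str, case_notes: str) -> str:
    """Fill the template so each prompt carries the same vetted
    instructions plus the context for this specific case."""
    return CASE_SUMMARY_TEMPLATE.format(
        customer_name=customer_name,
        product=product,
        case_notes=case_notes,
    )

prompt = build_prompt("Dana", "Widget Pro",
                      "Device fails to power on after update.")
print(prompt)
```

Because the instructions live in the template rather than in each employee’s head, the quality and tone of the model’s responses stay consistent across users.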
Salesforce aims to integrate the significant benefits of AI in enterprises and simultaneously tackle the inherent challenges. By creating an open ecosystem approach, the company plans to partner with various large language model providers to cater to diverse enterprise use cases.
As businesses adopt AI, they must also build and maintain digital trust with their customers. A McKinsey survey found that consumer confidence in a company’s cybersecurity, data privacy, and responsible AI practices can drive business growth. Consumers value clarity about how their data is used: many would consider switching brands if a company’s data practices are unclear, many will only buy from companies known for protecting consumer data, and a substantial proportion would stop doing business with a company they learned was failing to protect its customers’ data.
Interestingly, the study also found that consumers express considerable confidence in AI-powered products and services: more than two-thirds trust products or services that rely mostly on AI the same as, or more than, those that rely mostly on people.
Caplan suggests that adopting generative AI in business will depend on trusted models that “eliminate falsehoods and hallucinations.” Increasing trust in AI could produce a “significant transformation in how businesses operate,” because trust is the foundation on which adoption rests.
“All the data that comes into [the model] and then goes to the LLMs, none of that data is used to train the model,” Caplan said. “It's completely confidential, it's completely private. It's our job to protect our customers, our brands, and their customers. And we treat that with utter importance. It's our number one value, trust."
Despite the constant debates about AI’s impact on the economy, Caplan believes the focus must remain on AI’s practical applications in the business world, not on generalized fears of super-intelligent robots. Excitement about the future of AI should not obscure the real, tangible benefits it can bring to businesses today. And with growing interest from the C-suite, the future of AI in business is looking brighter than ever.