Thierry Nicault of Salesforce explains how organizations are using AI to enhance customer experiences by integrating structured and unstructured data. Pairing Retrieval Augmented Generation (RAG) with large language models grounds generative AI in that data, delivering more personalized and accurate insights.
Every organization is racing to unlock the power of AI to improve its sales and service experiences. But great AI requires data.
Traditionally, companies have worked with data in structured formats, with rows and columns, such as the customer engagement data gathered through CRM applications. But every business also has a huge amount of information trapped in “unstructured” data: documents, images, audio and video recordings. This unstructured data can be highly valuable, giving businesses AI insights that are more accurate and comprehensive because they are grounded in richer customer information.
Many organizations want a holistic customer view, but have lacked the technical ability to see, access, integrate, and make use of their unstructured data in any trusted way. With the power of large language models (LLMs) and generative AI, they can now do just that.
To win in the AI era, successful organizations will need to build integrated, federated, intelligent, and actionable solutions across every customer touchpoint while also reducing complexity.
This starts with the capability to tap into unstructured content, gather knowledge, index the data efficiently, and pull insights from every department.
Helping AI to better know and serve customers
When a customer needs help with a recent purchase, they typically start the conversation with the company’s chatbot. For the experience to be both relevant and positive, the entire exchange needs to be grounded in that customer’s data, such as their recent product purchase, warranty information, and any past conversations they’ve had. The chatbot should also tap into company data, such as the latest learnings from other customers who have bought similar products and internal knowledge base articles.
Some of this information might reside in transactional databases as structured information, while the rest might be in unstructured files, such as warranty contracts or knowledge base articles. Both types of data need to be accessed, and the right data needs to be used. If not, the exchange with the chatbot will be at best frustrating and at worst inaccurate.
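To make the scenario concrete, here is a minimal sketch in Python of how a chatbot prompt could be grounded in both kinds of data. The customer record, knowledge snippets, and helper function are illustrative placeholders, not any particular vendor’s API.

# Grounding a support chatbot's prompt in structured and unstructured data.
# All names below (customer_record, knowledge_snippets, build_grounded_prompt)
# are illustrative placeholders, not a real product API.

# Structured data: the kind of record a CRM or transactional database returns.
customer_record = {
    "name": "Jane Doe",
    "product": "X200 Wireless Headphones",
    "purchase_date": "2024-03-02",
    "warranty_expires": "2026-03-02",
}

# Unstructured data: excerpts from a warranty contract and a knowledge article.
knowledge_snippets = [
    "Warranty claims require proof of purchase and cover manufacturing defects.",
    "If the X200 will not pair, reset it by holding the power button for 10 seconds.",
]

def build_grounded_prompt(question: str) -> str:
    """Combine structured CRM fields and unstructured excerpts into one LLM prompt."""
    facts = "\n".join(f"- {key}: {value}" for key, value in customer_record.items())
    context = "\n".join(f"- {snippet}" for snippet in knowledge_snippets)
    return (
        "Customer record:\n" + facts + "\n\n"
        "Relevant company knowledge:\n" + context + "\n\n"
        "Customer question: " + question + "\n"
        "Answer using only the information above."
    )

print(build_grounded_prompt("My headphones will not pair. Is this covered?"))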
Obtaining the best and most accurate AI responses requires augmenting LLMs with proprietary, real-time, structured and unstructured data from within a company’s own applications, warehouses, and data lakes.
An effective way of making those models more accurate is to use an AI framework called Retrieval Augmented Generation (RAG). At query time, RAG retrieves relevant structured and unstructured proprietary data and supplies it to the model as context, making generative AI more contextual, timely, trusted, and relevant.
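As a rough sketch of the retrieve-then-augment pattern behind RAG, the Python below uses TF-IDF similarity (via scikit-learn) to stand in for the embedding model and vector database a production system would typically use; the documents and the commented send_to_llm call are hypothetical.

# A minimal sketch of the RAG pattern: retrieve the proprietary documents most
# relevant to a query, then prepend them to the prompt sent to an LLM.
# TF-IDF retrieval stands in for the embedding model and vector database a
# production system would use; the commented send_to_llm call is hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Proprietary content: knowledge articles, warranty terms, past case notes.
documents = [
    "The X200 warranty covers manufacturing defects for 24 months.",
    "To reset the X200, hold the power button for 10 seconds.",
    "Refunds are processed within 5 business days of receiving the return.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_indices]

def augment_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved company data."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

prompt = augment_prompt("How long is the X200 warranty?")
# send_to_llm(prompt)  # hypothetical call to a large language model
print(prompt)

In production, the retrieval step would typically query a vector database over embeddings of the company’s documents, but the pattern is the same: retrieve first, then have the model answer from what was retrieved.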
Ensuring relevancy, whatever the scenario
Combining all your customer data, structured and unstructured, into a single 360-degree view ensures the most relevant information is on hand for any enterprise scenario.
Financial institutions, for example, can use it to give employees real-time market and financial data, which they can blend with a customer’s own unique banking needs to offer actionable advice based on that customer’s situation.
Many companies are exploring the use of RAG technology to improve internal processes and provide accurate, up-to-date information to advisors and other employees. Offering contextual assistance, ensuring personalized support, and continuously learning will improve efficiency and decision-making across their organizations.
For organizations preparing their data for AI, the first step is to know where all of their data is and to understand its quality, and whether it is good enough for their generative AI models. Second, they have to ensure their data is fresh, relevant, and retrievable, so they can combine structured and unstructured data for the best outputs. Third, they have to activate that data across their applications and build the right pipelines so RAG can pull that data when prompted and provide the answers they need.
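As a loose illustration of those three steps, here is a Python sketch that checks quality, drops stale content, and indexes what remains for retrieval. The record fields, freshness threshold, and index_for_retrieval function are assumptions made for the example, not features of any specific product.

# A minimal sketch of the three preparation steps applied to a batch of
# unstructured documents: check quality, drop stale content, and index what
# remains so a RAG pipeline can retrieve it. Record fields, the freshness
# threshold, and index_for_retrieval are illustrative placeholders.

from datetime import datetime, timedelta

documents = [
    {"id": "kb-101", "text": "Reset instructions for the X200 ...", "updated": datetime(2024, 11, 4)},
    {"id": "kb-087", "text": "", "updated": datetime(2024, 10, 1)},  # fails the quality check
    {"id": "kb-012", "text": "Legacy returns policy ...", "updated": datetime(2019, 1, 15)},  # stale
]

MAX_AGE = timedelta(days=365)

def is_good_quality(doc: dict) -> bool:
    # Step 1: know the data and whether it is good enough for the model.
    return len(doc["text"].strip()) > 0

def is_fresh(doc: dict, now: datetime) -> bool:
    # Step 2: keep only data that is fresh and relevant.
    return now - doc["updated"] <= MAX_AGE

def index_for_retrieval(doc: dict) -> None:
    # Step 3: activate the data, e.g. by embedding it and writing it to a
    # store that the RAG pipeline queries at prompt time.
    print(f"indexed {doc['id']}")

now = datetime(2025, 1, 1)
for doc in documents:
    if is_good_quality(doc) and is_fresh(doc, now):
        index_for_retrieval(doc)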