Salesforce Announces Two New AI Models to Power Agentforce

Salesforce announced new AI models, including xGen-Sales, a proprietary model trained and designed to power autonomous sales tasks with Agentforce, and xLAM, a new family of Large Action Models designed to handle complex tasks and generate actionable outputs. Together, these models developed by Salesforce AI Research will allow Salesforce customers to quickly set up and deploy autonomous AI agents that take action, driving unprecedented scale.

Because xGen-Sales is fine-tuned to increase accuracy on relevant industry tasks, it delivers more precise and rapid responses, automating sales tasks such as generating customer insights, enriching contact lists, summarizing calls, and tracking the sales pipeline. The model enhances the capabilities of Agentforce sales agents, allowing them to autonomously nurture pipeline and coach reps with greater accuracy and speed. In Salesforce’s own evaluations, xGen-Sales has already eclipsed much larger models.

xGen-Sales is a step toward the next generation of language models, called Large Action Models (LAMs). In contrast to LLMs (Large Language Models), which require frequent human involvement and are mostly used to generate content, LAMs specialize in function calling: the ability to execute capabilities within other systems and applications. In other words, they can trigger the actions AI agents need to independently perform tasks for people.
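
To make the distinction concrete, here is a minimal, hypothetical sketch of the function-calling pattern: the model returns a structured call rather than free-form text, and the host application executes it. The tool name, schema, and call_model stub below are illustrative assumptions, not Salesforce APIs.

```python
import json

# Hypothetical tool the application exposes to the model.
def enrich_contact(email: str) -> dict:
    """Look up firmographic data for a contact (stubbed here)."""
    return {"email": email, "company": "Acme Corp", "title": "VP of Sales"}

TOOLS = {"enrich_contact": enrich_contact}

def call_model(prompt: str) -> str:
    """Stand-in for a Large Action Model. A real LAM would choose a
    structured function call from the tools it was shown."""
    return json.dumps({
        "function": "enrich_contact",
        "arguments": {"email": "jane@example.com"},
    })

# The agent loop: the model decides which action to take;
# the application performs it and can feed the result back.
raw = call_model("Enrich the new lead jane@example.com")
call = json.loads(raw)
result = TOOLS[call["function"]](**call["arguments"])
print(result)  # {'email': 'jane@example.com', 'company': 'Acme Corp', ...}
```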

In addition to xGen-Sales, Salesforce AI Research has delivered a new LAM family called xLAM. xLAM models offer lower costs, faster performance, and greater accuracy than many of the larger and more complex models available today. For example, the xLAM-1B model has outperformed larger and more expensive models despite having just 1 billion parameters, the variables a model learns during training to generate results and insights. xLAM-1B, specifically, is a non-commercial, open-source model released to the research community to help advance the science, while Salesforce uses a much more performant model for Agentforce.

Why it matters: Organizations need AI agents that can take action for employees, augmenting their work so they can focus on more strategic priorities. These models not only understand the jobs they’re intended to handle but also know their own limitations, so agents using them will recognize when it’s time to hand a task over to a human being for quality assurance and completion. Salesforce recently launched its LLM Benchmark for CRM, which helps organizations navigate the many models on the market and compare LLMs for CRM use cases.

“Building and training your own AI models can be time-consuming, costly, and incredibly frustrating,” said Salesforce Chief Scientist Silvio Savarese. “With Agentforce, we’re able to deliver appropriately sized models, built specifically for your business with your data to drive outcomes.”

Behind the news: To train the xLAM models, Salesforce AI Research created APIGen, a robust, proprietary pipeline for generating high-quality synthetic function-calling data; a hypothetical example of such a training sample appears after the list below. Positive results were almost immediate, with xLAM-8x22B capturing a No. 1 ranking on the Berkeley Function-Calling Leaderboard, surpassing GPT-4, according to Salesforce’s own evaluation. The xLAM-8x7B model ranked sixth. Both beat models many times their size. The four language models in the xLAM family include:

  • Tiny (xLAM-1B): The “Tiny Giant” features 1B parameters. Given its compact size, it is best suited to on-device applications where larger models are impractical. xLAM-1B can be used to create powerful, responsive AI assistants that run locally on smartphones or other devices with limited computing resources.
  • Small (xLAM-7B): The 7B model is designed for swift academic exploration with limited GPU resources. It can be used for planning and reasoning tasks in lightweight agentic applications.
  • Medium (xLAM-8x7B): A mixture-of-experts model ideal for industrial applications that need a balance of latency, resource consumption, and performance.
  • Large (xLAM-8x22B): A large mixture-of-experts model that lets organizations with sufficient computational resources achieve optimal performance.
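
To illustrate what synthetic function-calling data means in practice, here is a hypothetical training sample of the kind a pipeline like APIGen might produce: a natural-language query, the tools available, and the verified target calls. The field names and tool definitions below are illustrative assumptions, not the actual APIGen format.

```python
# Hypothetical function-calling training sample (illustrative only).
sample = {
    "query": "Summarize my 3pm call with Acme and log it to the opportunity.",
    "tools": [
        {"name": "summarize_call",
         "parameters": {"call_id": "string"}},
        {"name": "log_activity",
         "parameters": {"opportunity_id": "string", "note": "string"}},
    ],
    # The target the model is trained to produce: executable calls,
    # not free-form prose.
    "answers": [
        {"name": "summarize_call", "arguments": {"call_id": "call_1503"}},
        {"name": "log_activity",
         "arguments": {"opportunity_id": "opp_42", "note": "<summary>"}},
    ],
}

print(sample["answers"][0]["name"])  # summarize_call
```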

The analyst perspective: “Open-sourcing of LAM models is a game-changer,” said Rena Bhattacharyya, Chief Analyst and Practice Lead, Enterprise Technology & Services at GlobalData. “Salesforce’s ‘Tiny Giant’ xLAM-1B exemplifies how advanced, small, action-oriented AI can revolutionize business efficiency and innovation, making high-performance AI accessible to a broader range of companies. Salesforce continues to be a leader in accelerating AI adoption across sectors.”

The Salesforce perspective: “We envision a future in which sellers are augmented by AI to help them drive selling efficiency, freeing up precious time to focus on their customers,” said MaryAnn Patel, SVP, Product Management at Salesforce. “The xGen-Sales model is purpose-built to help companies build generative AI solutions that will augment the work of their sales teams with Agentforce.”

Salesforce AI Research is Salesforce’s artificial intelligence research lab, which develops new technological breakthroughs in the field. The team comprises researchers, engineers, and product managers working to shape the future of AI for businesses via foundational research that directly informs product development.

Availability:

  • The non-commercial, open-source version of the xLAM suite of LAMs is available on Hugging Face for community review and benchmark testing (a brief loading sketch follows this list). A significantly more advanced, proprietary version is powering Agentforce.
  • xGen-Sales recently completed a pilot and will be generally available soon.
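
As a starting point for the community review mentioned above, a minimal sketch of pulling one of the open-source xLAM checkpoints from Hugging Face with the transformers library might look like the following. The repository ID and prompt format are assumptions; check the model card on Hugging Face for the actual identifiers and usage instructions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository name for the 1B model; verify on Hugging Face.
model_id = "Salesforce/xLAM-1b-fc-r"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt formatting is model-specific; the model card documents the real template.
prompt = "Which tool would you call to enrich a new sales lead, and with what arguments?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```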