
AI Agents: Salesforce’s five-level Agentic Maturity Model

PODCAST: This episode explores various facets of AI agents and their applications, with a particular focus on their current capabilities and future potential. One article introduces Salesforce’s five-level Agentic Maturity Model, which categorizes AI agents from basic rule-following to complex, multi-agent orchestration, aiming to provide a clearer understanding amid industry hype. Another source details Google’s Gemini Code Assist, highlighting its improved coding capabilities with Gemini 2.5 and the diverse tasks its agents can perform in software development, from code generation to documentation. Finally, an examination of Claude AI’s usage among college students reveals its common applications for academic tasks like problem-solving and content creation, acknowledging both its benefits and potential for misuse, such as facilitating cheating. Collectively, these sources offer a comprehensive look at the evolving landscape of AI agents, their practical uses, and the ongoing discussions around their adoption and impact.

AI agents and AI chatbots are distinct types of artificial intelligence, primarily differing in their autonomy, reasoning capabilities, and ability to execute tasks.

Here’s a breakdown of each:

AI Agents

  • Definition: An AI agent is a software tool that can perform tasks autonomously for you. They are designed to carry out tasks without needing instructions for every individual step. They are “goal-seeking” and possess the “smarts to think about what is needed to perform the task” and “figure out what tools are needed”.
  • How they work: AI agents combine various AI capabilities, including large language models (LLMs), reasoning, natural language processing, and long context windows, to perform tasks intelligently and adaptively. They rely on human-delivered instructions, environmental triggers (like receiving an email), and their own reasoning to decide what needs to be done and when.
  • Capabilities:
    • They can execute tasks on your behalf.
    • They can identify when they are needed.
    • They can make their own inferences by using the context from information around them, which requires a higher level of reasoning.
    • They can ask for more information if required to perform a task.
    • An example includes tasks ranging from simple email sending to complex sales deal closures.
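The capabilities listed above can be sketched as a minimal agent loop: given a goal, the agent decides which tool it needs, asks for missing context rather than guessing, and then acts. This is an illustrative sketch only; the tool names and the keyword-based "planner" are stand-ins for an LLM-driven reasoning step, not any vendor's API.

```python
# Minimal agent-loop sketch: plan which tool a goal needs, ask for
# missing context, then execute. The keyword matching in plan() is a
# placeholder for real LLM-based reasoning.

TOOLS = {
    "send_email": lambda ctx: f"email sent to {ctx['recipient']}",
    "search_web": lambda ctx: f"results for {ctx['query']}",
}

def plan(goal):
    """Decide which tool a goal needs and what context that tool requires."""
    if "email" in goal:
        return "send_email", ["recipient"]
    return "search_web", ["query"]

def run_agent(goal, context):
    tool_name, required = plan(goal)
    missing = [key for key in required if key not in context]
    if missing:
        # The agent asks for more information instead of acting blindly.
        return f"need more info: {', '.join(missing)}"
    return TOOLS[tool_name](context)
```

Calling `run_agent("email the weekly report", {})` returns a request for the missing recipient, mirroring the "ask for more information" capability described above.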

AI Chatbots

  • Definition: An AI chatbot is primarily a dialogue system designed to have conversations with users.
  • How they work: They provide answers to questions by referring to their existing training data.
  • Capabilities:
    • They can only perform tasks they were predetermined to tackle.
    • They focus on feeding you an answer from a predetermined subset of information rather than executing tasks independently.

Key Differences and Contrast

The fundamental distinction lies in their ability to act and reason independently beyond pre-programmed responses:

  • Autonomy and Task Execution: AI agents can execute tasks on your behalf and autonomously deduce what actions are needed. AI chatbots, on the other hand, are limited to providing information or solutions based on their training data and predetermined functions; they do not perform tasks on your behalf.
  • Reasoning Capabilities: AI agents possess reasoning capabilities to make inferences from context and determine actions, enabling higher-level assistance. AI chatbots primarily retrieve and present information, lacking this advanced reasoning to adapt to novel situations or gather additional context beyond their direct query.
  • Interaction Model: An AI chatbot is mainly for conversation and providing answers. An AI agent not only converses but also actively “does things on your behalf”. For example, a shopping chatbot might list dinner items, but an AI agent could analyze your fridge inventory, dietary preferences, and calendar to coordinate grocery orders.
  • Scope of Assistance: AI agents offer significantly higher levels of assistance for individuals and businesses due to their ability to execute tasks and adapt.

While AI agents represent a step towards more advanced AI capabilities, they are still AI models and can exhibit traditional pitfalls like hallucinations or outputting false information, requiring proper safeguards and monitoring.
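One common safeguard pattern implied here is a human-in-the-loop gate: the agent executes low-risk actions freely, but anything above a risk threshold requires explicit approval, and every decision is logged for monitoring. The sketch below is a hypothetical illustration; the risk scores and action names are invented for the example.

```python
# Sketch of a safeguard layer for agent actions: high-risk actions need
# human approval before execution, and all decisions are logged so the
# agent can be monitored. Risk scores here are illustrative only.

RISK = {"draft_reply": 0.1, "send_payment": 0.9}

def guarded_execute(action, approve=lambda a: False, log=None):
    """Run an action through a risk gate; unknown actions default to max risk."""
    log = log if log is not None else []
    if RISK.get(action, 1.0) > 0.5 and not approve(action):
        log.append(f"blocked: {action}")
        return None, log
    log.append(f"executed: {action}")
    return f"{action} done", log
```

Defaulting unknown actions to maximum risk means the gate fails closed, which is the safer choice when an agent proposes something its operators have never seen.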

Salesforce is introducing its Agentic Maturity Model, a framework designed to define key stages of AI agent adoption and capabilities. The purpose of this model is to provide a common vocabulary for evaluating AI agent offerings from various vendors in the market, helping to cut through the hype surrounding AI agents. It emphasizes a thoughtful, phased approach to scaling AI agents effectively across a business, which is crucial for long-term success.

The Agentic Maturity Model outlines five distinct levels of AI agent capabilities, progressing from basic scripts to complex teams of agents working in concert:

  • Level 0: Fixed rules and repetitive tasks
    • Salesforce describes this level as “automation of repetitive tasks using predefined rules, with no reasoning or learning capabilities”. An example of this is customized email filters.
    • Initially, the author questioned why Salesforce would include basic repetitive tasks in a model for AI agents, but later found it made sense as a starting point. These tasks, like email rules or social media updates, are automated and save time, even without “intelligence”.
    • Adding intelligence to these tasks, such as email rules that can determine and flag press pitches directly in one’s area of focus, would elevate them to a higher level, likely Level 2.
  • Level 1: Information retrieval agents
    • These agents are defined by Salesforce as those that “go out and pull in information and, as a result of that information, recommend actions”.
    • Examples include a troubleshooting agent that searches for a problem’s fix or a shopping agent that compares offerings and prices to make recommendations.
    • A significant limitation at this level is the need for agents to access the necessary information, often within their host ecosystem. For instance, Microsoft Copilot wouldn’t search your Google Docs library or Notion database. Thus, Level 1 agents primarily pull information from sources within their host ecosystem.
  • Level 2: Simple orchestration, single domain
    • This level addresses the ecosystem issue by specifying that agentic activity takes place in a siloed data environment, meaning all data used is stored and available from one environment.
    • Notion’s AI serves as a perfect example, deriving knowledge from notes and documents within a user’s Notion archive. It could act on articles within that archive but not on external data.
    • “Simple orchestration” implies low-complexity tasks. An AI at this level could, for example, create a list of articles within that siloed data environment and organize them, but it couldn’t perform complex API connectivity to relate those articles to external deep research.
  • Level 3: Complex orchestration, multiple domain
    • This level begins to fulfill the promise of agentic AI, described by Salesforce as the ability to “autonomously orchestrate multiple workflows with harmonized data across multiple domains”. This means the application can integrate data from different ecosystems or sources using other systems without breaking.
    • Achieving this is “very hard”. Architecturally, there are two main choices:
      1. Using a series of APIs to communicate between systems via microservices. This requires cooperation from all information ecosystem vendors, which often results in “holdouts” and “holes” in implementation. An example given is social posting services that often can’t post to personal Facebook profiles due to permission limitations.
      2. Screen reading and screen clicking, where the AI interacts with a browser like a human. While this bypasses API limits, it is “highly unreliable” because web pages change constantly, necessitating frequent code re-learning.
    • Ultimately, Level 3 can work if all domains and workflows are part of a cooperative ecosystem, but outliers and unimplemented parts will always exist.
  • Level 4: Multi-agent orchestration
    • Salesforce defines this as “Any-to-any-agent operability across disparate stacks with agent supervision”.
    • This involves teams of AI agents working together to accomplish complex tasks across various infrastructures.
    • The author provides a personal example of an “AI Editor system” where multiple agents had specific roles (scanning news, writing articles, identifying images, managing editing) and communicated with each other. In Salesforce’s Level 4, data sources do not need to be normalized or interoperable, and agents can be from different orchestration classes.
    • The author believes Level 4 agentic AI will primarily be seen in enterprise-level implementations where IT teams can control the project’s scope. When successfully implemented, it can have an “enormous force-multiplying effect”.
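The Level 4 "AI Editor" example above can be sketched as a supervised pipeline: each agent has one role, hands its output to the next, and a supervisor checks that every stage actually produced something. The agent bodies below are trivial placeholders standing in for LLM-backed workers; none of this reflects Salesforce's actual implementation.

```python
# Sketch of Level 4-style multi-agent orchestration, modeled loosely on
# the "AI Editor" example: role-specific agents chained under a
# supervisor. Each agent body is a placeholder for a real LLM worker.

def scan_news(_):
    return ["story: agents ship"]

def write_article(stories):
    return [f"draft about {s}" for s in stories]

def pick_images(drafts):
    return [(draft, "image.png") for draft in drafts]

def supervise(pipeline, payload):
    """Run each agent in turn, failing loudly if a stage produces nothing."""
    for agent in pipeline:
        payload = agent(payload)
        if not payload:
            raise RuntimeError(f"{agent.__name__} produced nothing")
    return payload

result = supervise([scan_news, write_article, pick_images], None)
```

The supervisor is the "agent supervision" piece of Salesforce's Level 4 definition: agents stay independent, but a coordinating layer watches the handoffs between them.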

The author generally likes Salesforce’s five levels and their definitions, finding them to fairly represent the stages of AI agent capabilities and tasks. However, they suggest alternative, “stickier” names for the “Agentic Maturity Model,” such as the Agent Intelligence Scale or Agent Power Index.

Based on the provided text and our conversation, there is no information to suggest that Salesforce is creating “Digital Clones”.

Salesforce is focused on developing and implementing AI agents through its Agentic Maturity Model. This model is a framework to categorize the stages of AI agent adoption and capabilities, aiming to provide clarity in the market and guide a phased approach to scaling AI agents across a business.

The concept of an AI agent, as defined in the sources, is a software tool that performs tasks autonomously based on human instructions, environmental triggers, and its own reasoning to deduce what needs to be done. While they can execute tasks on your behalf and possess reasoning capabilities to make inferences from context, they are distinctly different from comprehensive “digital clones.”

Key aspects from the sources that differentiate AI agents from a “digital clone” concept include:

  • Task-Oriented vs. Comprehensive Replication: AI agents are described as “goal-seeking” tools that have the “smarts to think about what is needed to perform the task” and “figure out what tools are needed”. Their purpose is to automate and streamline workflows, from simple email sending to complex sales deal closures, or even programming tasks like generating code, migrating code, or performing code reviews. This is different from creating a full replication of a person’s intelligence, personality, or consciousness.
  • Confined Actions: AI agents can only perform actions within the confines of what they are allowed to do. They are “not quite taking actions into their own hands but rather doing what they are told to do”. This indicates a controlled and limited scope of operation, unlike a truly autonomous, self-replicating “clone.”
  • Distinction from AGI: The sources explicitly differentiate AI agents from Artificial General Intelligence (AGI), which is an AI system capable of performing at human levels without assistance. While AI agents offer a glimpse of independent task execution, AGI is much more advanced and still “quite a bit away”. The article emphasizes that AI agents are still AI models and can exhibit pitfalls like hallucinations or outputting false information.
  • “Digital Employees”: The closest analogy used in the text is thinking of AI agents as “digital employees” that can manage “low-touch or low-complexity interactions” or even assist with higher-level tasks, thereby having an “enormous force-multiplying effect” in enterprise-level implementations. This metaphor highlights their role as automated assistants within defined business processes, not as complete digital replicas of human beings.

In summary, Salesforce’s Agentic Maturity Model describes a progression of AI agent capabilities from basic rule-based automation (Level 0) to multi-agent orchestration (Level 4), where teams of AI agents work together across various systems. This framework focuses on enhancing automation, data integration, and task execution within business contexts, rather than creating “digital clones” of individuals.

Texas House Bill 2060 doesn’t use the term “privacy masking” explicitly, but it introduces several mechanisms that protect privacy through oversight of AI systems used by state agencies. Here’s how it addresses privacy concerns:


🛡️ Privacy Safeguards Embedded in HB 2060

1. Automated Decision Systems Inventory Reporting

State agencies must submit detailed reports by July 1, 2024, describing every AI system they use. These reports must include:

  • Types of data inputs used by the system
  • How data is generated, collected, and processed
  • Whether the system generates sensitive data
  • How data is securely stored and whether it’s shared

This ensures transparency around data handling and flags systems that might compromise privacy.
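The reporting fields above could be captured in a simple record type, which is how an agency might structure its inventory submissions internally. The class and field names below are my own shorthand, not the bill's statutory language.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionSystemReport:
    """One agency's inventory entry under HB 2060 (field names illustrative)."""
    system_name: str
    data_inputs: list            # types of data the system consumes
    data_lifecycle: str          # how data is generated, collected, processed
    generates_sensitive_data: bool
    storage_and_sharing: str     # how data is stored and whether it's shared
```

A structured record like this makes the transparency goal concrete: a reviewer can check at a glance whether a system touches sensitive data and how that data is stored.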


2. Impact Assessment on Residents

Agencies must evaluate:

  • Whether the AI system affects constitutional or legal rights
  • The privacy risks posed by automated decisions
  • Whether the system has been tested for bias or unintended discrimination

This aligns with Ben Buchanan–style governance by requiring agencies to consider how AI might erode privacy or civil liberties.


3. Council Oversight and Recommendations

The AI Advisory Council is tasked with:

  • Reviewing all agency reports
  • Assessing the impact of AI on liberty, finances, and privacy
  • Recommending policies to protect residents from privacy diminishment

This includes suggesting rules for data minimization, consent protocols, and bias mitigation.


4. Ethical AI Development

The bill encourages:

  • Development of ethical AI systems
  • Workforce training in privacy-conscious AI design
  • Inclusion of experts in constitutional law and ethics on the council

This builds a foundation for privacy-aware innovation across Texas government.


If you’re exploring how this intersects with digital clones or biometric AI, HB 2060 could be a springboard for future legislation requiring explicit masking or anonymization of personal identifiers.
