Audience: Anyone who has basic knowledge on Gen AI and wants to know more about AI Agents and how they are implemented (primarily software architects and developers).
Acronyms and Terms: GenAI (generative AI), GPT (Generative Pretrained Transformer), LLM (Large Language Model), RAG (Retrieval-Augmented Generation). For more details on these terms, see our AI Glossary.
Understanding AI Agents: What They Are (and What They Are Not)
The year 2025 is set to be a breakout year for AI agents, with a surge of startups and innovative ideas flooding the market. However, to avoid confusion and disappointment later, it's crucial to have a clear understanding of what qualifies as an AI agent—and more importantly, what does not.
When we develop agents, the important question to ask is: what value does the agent provide that cannot be achieved otherwise? The key to building solutions to real problems is first understanding the problems, then finding an innovative way to solve them, whether with ML, deep learning, GenAI, or simple Python automation. The focus should be on solving the business problem.
A Simple Rule: When Is It Not an Agent?
If you accomplish a task using ChatGPT (or any LLM or AI tool) with a sequence of manual prompts, it does not qualify as an AI agent. Let’s break this down with a few examples:
Code Review & Documentation
Suppose you ask ChatGPT to review your code, and then you separately ask it to generate documentation. If a so-called "code review agent" merely automates these steps without taking further action, it is not truly an agent; it is simply a structured prompt workflow.
Knowledge Base Q&A Systems
If you've built a system that allows employees to query company documents using an LLM, that alone does not make it an agent. However, if the system takes an action based on the interaction—such as automatically updating a database, flagging missing information, or initiating a workflow without human intervention—then it starts to resemble a real agent.
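To make the distinction concrete, here is a minimal sketch (illustrative only) of a Q&A system that crosses into agent territory by acting on the interaction rather than only answering. `search_docs` and `ask_llm` are hypothetical stand-ins for a real retrieval layer and LLM client.

```python
def search_docs(question: str) -> list[str]:
    # Stand-in for a real knowledge-base lookup.
    docs = {"vacation policy": ["Employees get 20 days of paid leave."]}
    return docs.get(question.lower(), [])

def ask_llm(question: str, context: list[str]) -> str:
    # Stand-in for a real LLM call: answer from context if we have any.
    return context[0] if context else "I don't know."

def answer_and_act(question: str, gap_log: list[str]) -> str:
    context = search_docs(question)
    answer = ask_llm(question, context)
    if not context:
        # The action that makes this more than passive Q&A: flag the missing
        # information for the docs team without being asked to.
        gap_log.append(question)
    return answer
```

Without the `gap_log` step, this is an ordinary RAG question-answering system; with it, the system initiates a follow-up action on its own, which is the behavior the paragraph above describes.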
What Makes an AI System a True Agent?
An AI agent goes beyond passive interactions. It perceives, decides, and acts autonomously. A true agent:
Interprets a goal and breaks it down into steps
Interacts dynamically with external systems (APIs, databases, applications)
Learns and adapts based on outcomes
Executes actions without needing human follow-ups
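The four properties above can be sketched as a loop. This is a toy illustration, not a real implementation: `plan_next_step` stands in for the LLM, which, given the goal and what has been observed so far, chooses the next tool or decides it is done.

```python
def plan_next_step(goal: str, observations: list[str]) -> tuple[str, str]:
    # Hypothetical "LLM" policy: fetch data first, then summarize, then stop.
    if not observations:
        return ("fetch", goal)
    if len(observations) == 1:
        return ("summarize", observations[0])
    return ("done", observations[-1])

TOOLS = {
    "fetch": lambda q: f"raw data for '{q}'",
    "summarize": lambda text: f"summary of [{text}]",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):  # bound the loop: agents need guardrails
        action, arg = plan_next_step(goal, observations)
        if action == "done":
            return arg
        # Act, then perceive the result of the action.
        observations.append(TOOLS[action](arg))
    return observations[-1]
```

The crucial point is that the code does not prescribe the sequence of tool calls; the planner (in practice, the LLM) decides at every step, and no human follow-up is needed between steps.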
The Bottom Line
As we move forward, it’s important to distinguish between advanced LLM-powered tools and real AI agents. Not every AI-driven application qualifies as an agent. The key differentiator is whether the system can independently take meaningful actions based on its reasoning, rather than just responding to a series of user inputs.
With this clarity, we can build true AI agents that drive real business value instead of just repackaging chatbots as "agents."
Agentic Workflows vs. Agents
Please read this blog post from the Claude team (Anthropic Research), which provides a clear explanation of AI agents: “Building Effective Agents”.
Key Takeaways from the Blog:
When LLMs are used to accomplish tasks, they are referred to as agentic systems. These agentic systems are categorized into two types:
Agentic Workflows – Orchestrated through predefined code paths.
Agents – Dynamically self-orchestrate their processes and tool usage without explicit programming.
Deciding when to use agents and when not to is a crucial tradeoff in Gen AI product strategy. The key consideration is balancing increased cost and latency against automation benefits.
Building Blocks of Agents (Augmented LLM)
The fundamental components of LLM-based agentic systems include:
Action Groups – Define task execution steps.
Knowledge Bases – Store structured and unstructured data for retrieval.
Prompt Engineering – Guides retrieval and decision-making processes.
The blog post describes this structure as an "Augmented LLM", which integrates retrieval, memory, and tools.
The blog categorizes various agentic workflows:
Prompt Chaining
Routing
Parallelization
Orchestrator-Worker Models
Evaluator-Optimizer Models
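The simplest of these patterns, prompt chaining, can be sketched in a few lines. The code path is fixed in advance and each LLM call feeds the next; `call_llm` is a hypothetical stand-in for a real model client.

```python
def call_llm(prompt: str) -> str:
    # Stand-in: echo a deterministic "completion" so the data flow is visible.
    return f"<{prompt}>"

def prompt_chain(document: str) -> str:
    # Step 1: extract the key points (this sequencing is decided by us, not
    # by the model).
    outline = call_llm(f"Extract key points: {document}")
    # Step 2: turn the outline into a summary.
    summary = call_llm(f"Summarize these points: {outline}")
    # The steps are hard-coded, which is exactly what makes this an agentic
    # workflow rather than an agent.
    return summary
```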
Diagrammatic Representation of Agents vs Agent Workflows
I tried to capture the minimum elements needed to describe an agent in this diagram. The brain of the agent is the LLM, so the more advanced the LLM, the more powerful the agent. I call the agent, and the LLM decides which actions to take using the tools available to it, including calling other LLMs if needed.
In the case of agentic workflows, I call the LLM and specify which actions to take, in series, in parallel, or in a conditional loop, to achieve the task. I may call multiple LLMs if needed.
You can design workflows, create prompts, and use a knowledge base. However, if the system cannot complete a task or achieve a goal without human intervention, it does not qualify as an agent. These are still useful agentic workflows, though, and they help us achieve a great deal of intelligent automation.
Agentic Workflows vs. Agents in the AWS Bedrock Context
Agents are AI systems that are triggered by a human request (a prompt) calling LLMs; the LLMs then dynamically direct their own processes and tool usage, maintaining control over how they accomplish the task the human requested.
When implementing an agent in AWS, you need:
Functions – To perform specific tasks which can be implemented using Lambdas.
Knowledge Bases – Refer to "Create knowledge base" in "Retrieval Augmented Generation" for detailed steps on setting up a knowledge base.
Task Flows in Prompts – To define how tasks should be executed.
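As a sketch of the first item, here is what a Lambda function backing a Bedrock agent action group can look like when using the function-details format. The event and response field names follow the shape documented by AWS at the time of writing, but verify them against the current documentation; the function name `get_order_status` and its logic are hypothetical.

```python
def lambda_handler(event, context):
    # Bedrock passes the action group, the function name, and the parameters
    # the agent elicited from the user.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if event.get("function") == "get_order_status":  # hypothetical function
        body = f"Order {params.get('order_id', '?')} is shipped."
    else:
        body = "Unknown function."

    # The response echoes the action group and function so the agent can
    # continue its orchestration with the result.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": event.get("function"),
            "functionResponse": {"responseBody": {"TEXT": {"body": body}}},
        },
    }
```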
In the AWS documentation, this diagram is presented as an example of agents.
It illustrates the basic building blocks of an agent. To determine whether something qualifies as an agent or an agentic workflow, ask yourself the following key question:
"Are you achieving any tasks or goals without explicitly directing them in your code?"
If the answer is "Yes," then it qualifies as an Agent.
If the answer is "No," then ask:
"Are you achieving any tasks or goals using one of the following methods?"
Orchestrating actions through prompt chaining?
Routing Lambdas or functions in an orderly fashion to accomplish a goal?
Making multiple LLM calls to complete the same task and aggregating the results?
Breaking down a task into multiple subtasks, executing them with different LLM calls, and synthesizing the output?
Using an LLM to perform a task and then making another call to evaluate its output?
If the answer is "Yes," then it qualifies as Agentic Workflows.
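The third pattern in the checklist above (multiple LLM calls for the same task, with the results aggregated) can be sketched as follows. `call_llm` is a hypothetical stand-in for a real model client; in practice each call runs concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a deterministic "vote".
    return "positive" if "great" in prompt else "negative"

def classify_by_vote(text: str, n_calls: int = 3) -> str:
    prompt = f"Classify sentiment: {text}"
    # Parallelization: issue the same prompt n_calls times concurrently.
    with ThreadPoolExecutor(max_workers=n_calls) as pool:
        votes = list(pool.map(call_llm, [prompt] * n_calls))
    # Aggregation step: majority vote over the parallel results.
    return max(set(votes), key=votes.count)
```

Because the fan-out and the aggregation are both written into the code path, this qualifies as an agentic workflow under the criteria above.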
The diagram below gives the architecture for agents in the AWS documentation. By this definition, it very much qualifies as an agent.
In the above diagram, if you do not have the Lambda (the function that achieves the goal)** performing a task for you independently, then the whole diagram merely supports knowledge-base and RAG question answering. Even with all of this, if the system does not achieve the task without human intervention, it does not qualify as an agent.
There are multiple frameworks for implementing agents. If you are using AWS Bedrock APIs, it’s essential to understand their underlying functionality. For instance:
Both the Retrieve API and RetrieveAndGenerate API can be used to fetch content from a knowledge base.
The Retrieve API offers more flexibility, allowing you to control retrieval mechanisms and separately call different LLMs using the retrieved chunks.
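The contrast between the two APIs can be seen in their request shapes. The structures below follow the boto3 `bedrock-agent-runtime` documentation at the time of writing; the knowledge base ID and model ARN are placeholders, and the helpers only build the request dictionaries so that nothing here requires AWS credentials.

```python
def build_retrieve_request(kb_id: str, question: str, top_k: int = 5) -> dict:
    # Retrieve: you get back the raw chunks and stay in control of which LLM
    # (if any) sees them afterwards.
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": question},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {"numberOfResults": top_k}
        },
    }

def build_rag_request(kb_id: str, model_arn: str, question: str) -> dict:
    # RetrieveAndGenerate: retrieval and answer generation happen in one
    # call, with less room to customize the steps in between.
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

# Usage (requires AWS credentials; not executed here):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# chunks = client.retrieve(**build_retrieve_request("KB123", "What is our leave policy?"))
```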
“In my opinion, the way you write the code determines whether this diagram represents an Agent or an Agentic Workflow.”
This is a short reminder for me and everyone to be mindful about creating useful agents. Please post your feedback: what misconceptions about agents have you come across when developing them?
Conclusion
Building true agents is not just about chaining prompts or orchestrating workflows—it requires a fundamental shift in engineering thinking. Unlike traditional software, where logic is explicitly programmed, agents must autonomously reason, plan, and act. This demands a new discipline: Agent Engineering—a fusion of AI, software architecture, and systems thinking.
Successful agent product development is not just about integrating LLMs; it requires vision, robust engineering practices, and deep domain expertise. As we enter an era flooded with AI startups, only those who master Agent Engineering —the art and science of building self-orchestrating, adaptive systems—will lead the future.
==================================End======================================
**There is a mistake in the AWS documentation
************************************************************************************************************
P.S.: While working with AWS Bedrock APIs, I found that you cannot create an agent without an action group attached to it. That is logically correct; only the documentation needs to be updated to reflect it.
From the documentation “https://docs.aws.amazon.com/bedrock/latest/userguide/agents-how.html”, if you look at the paragraph:
At least one of the following:
Action groups – You define the actions that the agent should perform for the user by providing the following resources:
One of the following schemas to define the parameters that the agent needs to elicit from the user (each action group can use a different schema):
An OpenAPI schema to define the API operations that the agent can invoke to perform its tasks. The OpenAPI schema includes the parameters that need to be elicited from the user.
A function detail schema to define the parameters that the agent can elicit from the user. These parameters can then be used for further orchestration by the agent, or you can set up how to use them in your own application.
(Optional) A Lambda function with the following input and output:
Input – The API operation and/or parameters identified during orchestration.
Output – The response from the API invocation or the response from the function invocation.
Knowledge bases – Associate knowledge bases with an agent. The agent queries the knowledge base for extra context to augment response generation and input into steps of the orchestration process.
The phrase "At least one of the following" is the issue here: you need both to create an agent using the Bedrock agent APIs. This is a known discrepancy; AWS can confirm whether it has been resolved.

Thank you for calling this out. It seems more and more like people are designing extremely complex workflows and then slapping the "agent" label on them. But there's so much more to it than that. Another objection I've heard from a number of people is that if a person is involved in the interaction, it can't be an agent, because agents are supposed to be completely autonomous. My perspective is that when humans are involved in truly agentic interactions, our inputs are data inputs, not prompts. If we're not being directive, but simply adding information for the agent to consider, then our data point is no different from any other system data input. Especially now, I think it's so important for people to understand the distinctions, so thank you for writing this.
So what's the verdict on Agentforce? They are marketing it as an AI Agent but to me it looks to be more of an agentic workflow. I think they are just leveraging the hype around agents.