## Introduction
With the number of AI services available from AWS, it can be difficult to know which one fits a given use case. In this post, I'll demystify AWS's agent-building options, summarize their capabilities, and help you choose between them.
## AWS Services Discussed
Specifically, in this post I'll be covering the following services:

- Strands Agents SDK
- Amazon Bedrock Agents
- Amazon Bedrock AgentCore
## Summary of Services
The table below summarizes the services and their capabilities. I'll update this post as I learn more about them.
Dimension | Strands Agents SDK | Bedrock Agents | Bedrock AgentCore |
---|---|---|---|
What it is | Open-source Python SDK for building GenAI agents. Minimal code to define model, tools, prompt. | Fully managed AWS service to create/configure AI agents via console or API. Orchestration handled by AWS. | Managed runtime platform (PaaS) for deploying and scaling custom/open-source AI agents with isolation & built-in tools. |
Who it’s for | Developers who want lightweight, flexible agent dev with minimal boilerplate, but still coding in Python. | Teams who want to quickly deploy enterprise assistants without building orchestration loops. | Teams who already have or need a custom agent (Strands, LangChain, etc.) and want AWS to handle ops, scaling, security. |
Approach | Low-code, code-first: write a few lines (prompt + tools). Model does planning/reasoning. | Config-driven: define instructions, action schemas (APIs/Lambdas), attach KBs. Service orchestrates multi-step flow. | Ops/deployment-centric: you own agent code; AWS provides runtime, scaling, isolation, extra services (browser, code exec, memory). |
Flexibility | High: choose any model (Bedrock, OpenAI, local), add custom tools, customize behavior in code. | Moderate: limited to Bedrock models; supports actions (Lambda/OpenAPI), KBs, guardrails. Customization via prompts/templates. | Highest: any framework, model, or logic. Optional AWS-provided tools. Can mix Bedrock + external APIs. |
Integration with AWS | Strong: native integration with AWS SDKs, Lambda, Step Functions, S3, DynamoDB, etc. | Built-in: console setup for Lambdas, Knowledge Bases, Guardrails, CloudWatch logging. | Very broad: direct SDK/API calls, VPC networking, Identity for auth, Gateway for tools, Memory for persistence. |
Integration with external APIs | Easy: wrap any API call as a Python tool. | Supported: define OpenAPI schemas or wrap in Lambda. | Direct: call any API in code, or register APIs as Gateway tools for agent discovery. |
Identity / Auth | DIY (you handle auth flows in code). | Basic (pass user context manually if needed). | Built-in: integrates with Cognito, Okta, OIDC; securely manage API keys/tokens per user/session. |
Memory | Manual (developer builds memory store and recall logic). | Built-in short-term conversation history per session. | Managed Memory service (short- and long-term, configurable strategies, retrieval APIs). |
Scaling | Depends on deployment (Lambda, Fargate, EC2, etc.). You design scaling. | Fully managed, auto-scales with requests. | Fully managed, per-session microVM isolation, scales to thousands of sessions in seconds. |
Pricing | Free SDK. Pay for compute + Bedrock (or other model) usage + any AWS services called. | No separate agent fee. Pay for Bedrock model usage, Lambda execution, KB storage/search, guardrails. | Granular pay-as-you-go: vCPU-sec, GB-sec, tool usage (browser/code), memory events, Gateway calls, plus any model/API usage. |
Dev effort | Low: minimal code for agent loop; just define prompt + tools. Debugging in Python. | Low: configure in console; some coding for actions (Lambdas). No orchestration code. | Medium: must build/bring your agent code, but no infra management. Some DevOps for deployment/monitoring. |
Best for | Quick, flexible code-light agent development by engineers. | Fast setup of enterprise chatbots/assistants with APIs + knowledge bases. | Productionizing advanced/custom agents needing scalability, isolation, auth, external APIs. |
Example use case | DevOps helper that provisions AWS resources from natural-language commands. | HR assistant that answers FAQs and files PTO via Lambda API. | Financial research agent using LangChain + Bedrock + external APIs, scaled to 1000s of analysts securely. |
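The Memory row is worth making concrete: with Strands, "manual" means you build the store and recall logic yourself, whereas Bedrock Agents and AgentCore manage it for you. Here's a hedged, stdlib-only sketch of the kind of short-term conversation window you might hand-roll (the `ConversationMemory` class is hypothetical, not part of any AWS SDK):

```python
from collections import deque

class ConversationMemory:
    """Hypothetical short-term memory: keep only the last N turns.

    This is the store/recall logic a developer builds themselves when
    using the Strands SDK; Bedrock Agents keeps per-session history for
    you, and AgentCore offers a managed Memory service.
    """

    def __init__(self, max_turns: int = 5):
        # Old turns drop off automatically once the window is full.
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def recall(self) -> list[dict]:
        # Returned as a plain list, ready to prepend to the next model call.
        return list(self.turns)

memory = ConversationMemory(max_turns=2)
memory.add("user", "Provision an S3 bucket")
memory.add("assistant", "Done. Anything else?")
memory.add("user", "Now add a DynamoDB table")
print(len(memory.recall()))  # → 2 (only the most recent turns survive)
```

A real implementation would also need long-term persistence (e.g. DynamoDB) and a retrieval strategy, which is exactly the undifferentiated work AgentCore's Memory service takes off your plate.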
## Bottom line for your team
- Strands → Build agents in Python with minimal code. You define the prompt + tools, and the model handles orchestration.
- Bedrock Agents → Configure and run managed agents quickly. AWS handles orchestration, scaling, and multi-step reasoning.
- AgentCore → Deploy and scale any custom agent. Provides secure runtime, built-in tools (browser, code exec, memory, identity), and session isolation.
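To make the comparison concrete: "wrap any API call as a Python tool" just means exposing a plain function the model can choose to call. Below is a hedged, SDK-free sketch of that pattern with a hand-rolled orchestration loop and a stubbed model; every name here (`get_weather`, `stub_model`, `run_agent`) is hypothetical, not the Strands API. The loop is the part Strands generates for you in code, Bedrock Agents runs as a managed service, and AgentCore hosts at scale:

```python
import json

# Hypothetical tool: wrapping an external API is just wrapping a function.
# A real version would call the weather service over HTTP.
def get_weather(city: str) -> str:
    canned = {"Seattle": "rain", "Phoenix": "sun"}
    return canned.get(city, "unknown")

TOOLS = {"get_weather": get_weather}

def stub_model(prompt: str) -> str:
    # Stand-in for a Bedrock model call: decides whether to invoke a tool.
    # A real agent framework parses structured tool-use output instead.
    if "weather" in prompt.lower():
        return json.dumps({"tool": "get_weather", "args": {"city": "Seattle"}})
    return json.dumps({"answer": prompt})

def run_agent(prompt: str) -> str:
    # Minimal orchestration loop: ask the model, dispatch to a tool if
    # requested, return the result. This is what the three services
    # automate to different degrees.
    decision = json.loads(stub_model(prompt))
    if "tool" in decision:
        result = TOOLS[decision["tool"]](**decision["args"])
        return f"Tool {decision['tool']} says: {result}"
    return decision["answer"]

print(run_agent("What's the weather?"))  # → Tool get_weather says: rain
```

Once you see how quickly this loop accumulates concerns like auth, memory, and scaling, the value of each service's trade-off in the table above becomes clearer.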