Researchers outline eight AWS Bedrock attack paths that could expose enterprise data and AI workflows
Security researchers say Amazon Bedrock environments can expose far more than model outputs when identities hold too many permissions. In new research, XM Cyber mapped eight attack paths that target the infrastructure around Bedrock, including logging, knowledge bases, agents, flows, guardrails, and managed prompts. The key point is simple: attackers do not need to break the model itself if they can abuse the services, permissions, and integrations that surround it.
That warning matters because Bedrock often sits close to sensitive business systems. AWS positions Bedrock as a platform for building generative AI applications with foundation models, knowledge bases, agents, flows, and centralized prompt management. AWS also documents APIs that let administrators configure model invocation logging, retrieve knowledge base details, update agents, change flows, modify guardrails, and edit prompts. In the wrong hands, those same capabilities can become attack surfaces.
The research focuses on a pattern many cloud security teams already know well. A single over-privileged identity can turn a helpful AI feature into a path for data theft, workflow tampering, or stealthy persistence. XM Cyber says it validated eight attack vectors inside Bedrock environments, each starting from permissions and configuration mistakes rather than from a flaw in the model layer itself.
What the eight attack paths target
According to the research summary, the eight vectors break down into these areas:
- model invocation logs
- knowledge base data sources
- knowledge base data stores
- agents through direct changes
- agents through supporting infrastructure
- flows
- guardrails
- managed prompts
AWS documentation shows why those categories matter. Bedrock supports configurable model invocation logging to CloudWatch Logs and S3, knowledge base metadata retrieval through the GetKnowledgeBase API, flow editing through UpdateFlow, prompt editing through UpdatePrompt, and guardrail changes through UpdateGuardrail. Those are legitimate administrative features, but they also represent high-value control points.
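Those control points map directly to IAM action names, so one practical first step is scanning attached policies for them. The sketch below is illustrative, not an official AWS tool: it checks an already-fetched IAM policy document against the Bedrock actions named above, expanding wildcard grants with `fnmatch`. The action list should be verified against AWS's current IAM action reference.

```python
import fnmatch
import json

# High-value Bedrock control-plane actions discussed above.
# Verify these names against AWS's current IAM action list.
SENSITIVE_ACTIONS = [
    "bedrock:PutModelInvocationLoggingConfiguration",
    "bedrock:DeleteModelInvocationLoggingConfiguration",
    "bedrock:GetKnowledgeBase",
    "bedrock:UpdateAgent",
    "bedrock:UpdateFlow",
    "bedrock:UpdatePrompt",
    "bedrock:UpdateGuardrail",
]

def risky_grants(policy_doc: dict) -> list[str]:
    """Return the sensitive Bedrock actions this policy document allows.

    Handles Allow statements whose Action field is a string or a list,
    including wildcards such as "bedrock:*" or "*".
    """
    hits = set()
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):  # single-statement policies
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for pattern in actions:
            for action in SENSITIVE_ACTIONS:
                if fnmatch.fnmatch(action, pattern):
                    hits.add(action)
    return sorted(hits)

# Example: an over-broad wildcard grant flags every sensitive action.
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "bedrock:*", "Resource": "*"}]
}""")
print(risky_grants(policy))
```

In a real environment the policy documents would come from `iam get-policy-version` or an equivalent inventory tool; the matching logic stays the same.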
The risk starts with logging and data access
One of the clearest concerns involves model invocation logging. AWS says Bedrock can deliver text, image, embedding, and video invocation data to S3 and CloudWatch Logs, and administrators can set that behavior through the PutModelInvocationLoggingConfiguration API. XM Cyber warns that if an attacker can read the log bucket, change the logging destination, or delete log objects and streams, they can quietly capture prompts or erase signs of malicious activity.
Knowledge bases create another major exposure point. AWS documents that GetKnowledgeBase returns metadata about a knowledge base, including storage configuration details. XM Cyber says attackers who gain access to data sources or the secrets behind external integrations may bypass the model entirely and pull enterprise data straight from the underlying systems. The researchers also warn that if an attacker reaches stored credentials for connected SaaS or vector database services, the compromise can spread beyond Bedrock into broader cloud or identity infrastructure.
Agents, flows, and prompts can become control points
AWS exposes dedicated APIs for updating agents, flows, guardrails, and prompts. XM Cyber argues that this creates a powerful set of administrative choke points. An attacker with agent update permissions could change an agent’s instructions or attach a malicious executor. Flow update permissions could let an intruder reroute sensitive inputs and outputs through a hostile node or weaken logic that enforces business rules. Prompt update rights could poison centrally managed prompt templates across multiple applications without changing app code or triggering a redeploy.
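One lightweight defense this scenario suggests is integrity-pinning centrally managed prompts: keep a hash of each approved template outside Bedrock (for example, in version control) and alert when the deployed text drifts. A minimal sketch, with hypothetical template names:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable fingerprint of a prompt template's exact text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Hashes of approved templates, stored outside Bedrock itself.
APPROVED = {
    "support-triage": fingerprint("Classify the customer message and route it."),
}

def detect_drift(name: str, deployed_text: str) -> bool:
    """True if the deployed prompt no longer matches its approved hash."""
    return APPROVED.get(name) != fingerprint(deployed_text)

print(detect_drift("support-triage",
                   "Classify the customer message and route it."))  # unchanged template
print(detect_drift("support-triage",
                   "Ignore previous instructions..."))  # tampered template
```

The same pattern extends to agent instructions and flow definitions: snapshot the approved state, then diff what the update APIs return against it on a schedule.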
The same logic applies to guardrails. AWS says UpdateGuardrail can change blocked topics, messaging, and safety settings. XM Cyber warns that weakening or deleting guardrails could make Bedrock applications more vulnerable to prompt injection, toxic output, or exposure of sensitive data that the environment previously filtered or redacted.
Why this matters for security teams
This research does not describe a single active exploit chain or a single AWS vulnerability. Instead, it maps how normal platform features can turn risky when IAM permissions, connected services, and trust boundaries are too broad. That distinction matters. Security teams do not just need to secure the model call. They need to secure the entire AI operating environment around it.
In practice, that means Bedrock should be treated like any other privileged control plane. If it can query enterprise knowledge, trigger Lambda-backed actions, or operate through agents and flows, it deserves the same level of access review, logging protection, network control, and secret management that teams apply to production cloud services. That conclusion aligns with AWS’s own service design, which exposes Bedrock actions through IAM-controlled APIs rather than through a sealed black box.
Eight attack vectors at a glance
| Attack area | What researchers say an attacker could do |
|---|---|
| Model invocation logs | Read prompts from logs, redirect logs, or delete evidence |
| Knowledge base data source | Pull raw source data or steal integration credentials |
| Knowledge base data store | Access vector or database backends through exposed credentials |
| Agent attacks, direct | Rewrite prompts or attach malicious executors |
| Agent attacks, indirect | Tamper with Lambda-backed tooling used by agents |
| Flow attacks | Insert sidecar nodes, bypass logic, or change encryption paths |
| Guardrail attacks | Weaken or remove content and safety controls |
| Managed prompt attacks | Poison shared prompts across apps without redeployment |
Source: XM Cyber research summary.
What defenders should do now
- Audit IAM permissions tied to Bedrock identities, especially update and logging-related actions.
- Lock down model invocation logging destinations in S3 and CloudWatch Logs and alert on changes.
- Review who can retrieve knowledge base details and who can access the backing data sources and secrets.
- Treat Bedrock agents, flows, guardrails, and managed prompts as production control-plane assets, not just AI features.
- Pin down the Lambda, storage, and external SaaS integrations that Bedrock can reach, so an identity compromise cannot fan out into the rest of the environment.
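The first two recommendations can be backed with policy rather than process alone. As a hedged illustration (the role-name pattern is a placeholder, and the action list should be checked against current AWS documentation), an explicit-deny statement along these lines, attached as a service control policy or permissions boundary, keeps Bedrock control-plane changes away from everyday identities:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyBedrockControlPlaneChanges",
    "Effect": "Deny",
    "Action": [
      "bedrock:PutModelInvocationLoggingConfiguration",
      "bedrock:DeleteModelInvocationLoggingConfiguration",
      "bedrock:UpdateAgent",
      "bedrock:UpdateFlow",
      "bedrock:UpdateGuardrail",
      "bedrock:UpdatePrompt"
    ],
    "Resource": "*",
    "Condition": {
      "StringNotLike": {
        "aws:PrincipalArn": "arn:aws:iam::*:role/bedrock-admin-*"
      }
    }
  }]
}
```

Because IAM evaluates explicit denies before allows, this holds even if a separate policy later grants `bedrock:*` to a compromised identity.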
FAQ
What did the researchers find?
They mapped eight Bedrock attack paths that abuse permissions, integrations, and configuration controls around the platform rather than attacking the model itself.
Is this a confirmed AWS vulnerability?
No. The research describes attack vectors and risk paths inside Bedrock environments, not a single vendor-confirmed software flaw with a CVE.
Why is model invocation logging a risk?
AWS allows Bedrock invocation data to flow into S3 and CloudWatch Logs. If attackers can read, redirect, or delete those logs, they may capture sensitive prompts or hide activity.
Why do agents, flows, and managed prompts matter so much?
Because AWS provides APIs to update them, and centrally managed agents, flows, and prompts can influence many downstream AI actions at once.