TL;DR
- A misconfigured S3 bucket exposed Terraform state files containing live credentials
- Those credentials enabled access to a private GitHub repository
- Secrets stored in that repository granted privileged access to Azure resources
- Azure Key Vault access led to recovery of AWS administrator credentials for a separate account
- No platforms were directly integrated; the linkage existed through reused and over-privileged secrets
- Static, long-lived credentials allowed the compromise to expand across cloud boundaries
- Supporting artefacts such as state files and repositories proved as sensitive as production systems
- Once secrets leaked, architectural separation between cloud environments no longer held
Introduction
In practice, it is still hard to keep secrets safe in the cloud. All major cloud service providers have managed secrets solutions, but they only work if secrets are added, stored, and used correctly. In the real world, credentials, API keys, and tokens still tend to leak through everyday operational shortcuts instead of complicated failures.
During cloud security testing, it is common to encounter secrets in places they were never intended to live. Object storage, environment variables, virtual machine metadata, Infrastructure as Code state files, and source code repositories frequently contain credentials that were added for convenience and never revisited. Once exposed, these secrets often provide far more access than originally expected.
This case shows how a single exposed secret created a chain of compromise across multiple cloud platforms.
A misconfigured S3 bucket and an unexpected trust chain
During an AWS assessment, we found an Amazon S3 bucket containing Terraform state files. The bucket’s access controls allowed unauthorised parties to read the full contents of those state files. Terraform state frequently contains sensitive values, and in this case it included the credentials for an automated GitHub application.
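Terraform state is plain JSON, which makes it easy to see how credentials surface in it. A minimal sketch of scanning a parsed state document for attribute names that look sensitive; the state structure below is a simplified illustration, not the state from this engagement:

```python
import json

# Simplified illustration of a Terraform state document; real state
# files follow the same resources/instances/attributes shape.
STATE = json.loads("""
{
  "resources": [
    {
      "type": "github_app_installation",
      "name": "ci",
      "instances": [
        {"attributes": {"app_id": "12345",
                        "private_key": "-----BEGIN RSA PRIVATE KEY-----..."}}
      ]
    }
  ]
}
""")

SENSITIVE_HINTS = ("key", "secret", "password", "token")

def find_sensitive_attributes(state):
    """Walk a parsed Terraform state and flag attribute names that look sensitive."""
    findings = []
    for resource in state.get("resources", []):
        for instance in resource.get("instances", []):
            for attr in instance.get("attributes", {}):
                if any(hint in attr.lower() for hint in SENSITIVE_HINTS):
                    findings.append((resource["type"], resource["name"], attr))
    return findings

print(find_sensitive_attributes(STATE))
```

Anyone who can read the state file can run exactly this kind of pass, which is why state deserves the same protection as the secrets it describes.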
Using those credentials to authenticate to the private GitHub repository, we discovered further secrets, including the credentials for a privileged Azure service principal. That service principal could access Azure resources and retrieve additional secrets from Azure Key Vault.
Among those secrets was an AWS access key associated with an administrator role in a separate AWS account. At that point, a misconfigured S3 bucket had effectively become a bridge between GitHub, Azure, and multiple AWS environments.
None of these platforms were directly connected by design. The linkage existed purely because secrets were reused, over-privileged, and stored in places where they could be recovered once a single boundary was crossed.
How these failures usually happen
This kind of cascade is rarely the result of an exotic attack technique. It is more often the outcome of several ordinary decisions compounding over time:
- Secrets frequently end up stored in locations that are easy to access programmatically but difficult to audit properly.
- Terraform state files, for example, are often treated as an internal implementation detail rather than sensitive artefacts.
- Source code repositories accumulate credentials added during development and never removed.
- Environment variables and VM user data persist long after their original purpose has passed.
Static credentials amplify the impact. Long-lived API keys and service principals give attackers time to explore environments at their own pace. Even when accounts are notionally separate, shared credentials reintroduce trust relationships that were never formally designed.
Access controls are another weak point. Storage services, logs, and state files are often excluded from the same level of scrutiny applied to production systems. When those controls are misapplied, secrets become visible to anyone who can enumerate the resource.
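The kind of misconfiguration described above is often visible directly in a bucket policy. A minimal check for statements that grant object reads to an unrestricted principal; the policy document here is illustrative, not taken from any real configuration:

```python
import json

# Illustrative bucket policy granting object reads to any principal.
POLICY = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-tf-state/*"}
  ]
}
""")

def allows_public_read(policy):
    """Return True if any statement allows s3:GetObject to an unrestricted principal."""
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and is_public and "s3:GetObject" in actions:
            return True
    return False

print(allows_public_read(POLICY))
```

Real policies have more conditions and principal forms than this sketch handles, but the core question is the same: can an anonymous caller enumerate and read the objects?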
Operational impact
What makes this class of issue particularly dangerous is how quickly the scope expands. An initial exposure that looks limited to a single cloud account can turn into a multi-platform incident without triggering obvious alarms. Each step relies on legitimate credentials, so activity blends into normal operations.
From an incident response perspective, containment becomes harder as the number of affected systems grows. Revoking one credential is rarely enough. Teams must understand where that secret was reused, what it could access, and whether it enabled further credential discovery. In multi-cloud environments, that investigation often crosses organisational and tooling boundaries.
Lessons from testing
Across cloud assessments, the same patterns appear repeatedly. Secrets are treated as configuration rather than high value assets. Tooling assumptions leak into production environments. Trust relationships are created implicitly through credential reuse rather than explicitly through architecture.
Where small changes reduce risk
The technical controls to prevent this are well understood, but the failures tend to be procedural and operational. Secrets live longer than intended. Their blast radius is underestimated. Supporting artefacts such as state files and repositories receive less protection than the systems they describe.
The controls already exist; the challenge is applying them consistently and treating them as part of normal engineering work rather than specialist security tasks.
Putting secrets back behind a boundary
Centralised, cloud-native secrets management services such as AWS Secrets Manager, Azure Key Vault, or Google Cloud Secret Manager make it easier to understand where credentials live and who can access them, reducing the likelihood of accidental exposure. More importantly, they create a clear boundary between infrastructure definition and sensitive material, rather than allowing secrets to drift into state files, repositories, or user data.
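In code, that boundary looks like asking a secrets backend at runtime rather than reading credentials out of configuration. A sketch with the provider SDK abstracted behind a callable; the `fetch` argument and the `prod/db` identifier are illustrative stand-ins for, say, a Secrets Manager or Key Vault client call:

```python
import json

def load_database_credentials(fetch, secret_id):
    """Resolve credentials through a secrets backend at runtime.

    `fetch` wraps the cloud SDK call; the application never learns
    where or how the secret is stored, only that it can request it.
    """
    raw = fetch(secret_id)
    creds = json.loads(raw)
    if not {"username", "password"} <= creds.keys():
        raise ValueError(f"secret {secret_id!r} is missing expected fields")
    return creds

# A fake backend standing in for the real SDK client during testing.
def fake_backend(secret_id):
    return '{"username": "app", "password": "s3cr3t"}'

creds = load_database_credentials(fake_backend, "prod/db")
```

Because the secret only exists in memory at runtime, there is nothing to commit to a repository or serialise into state.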
Why short-lived credentials matter
Short-lived credentials also change the economics of an attack. When secrets are generated dynamically and revoked automatically, attackers lose the luxury of time. Even if a secret is recovered, its usefulness may already be expiring. In contrast, static keys and long-lived service principals extend trust far beyond the moment they were created.
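The effect can be illustrated with a simple expiry check, mirroring how STS-style temporary credentials carry an expiration timestamp. The structure below is a simplified illustration, not a real SDK type:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TemporaryCredential:
    access_key_id: str
    expiration: datetime

    def is_valid(self, now=None):
        """A leaked credential is only useful until its expiration passes."""
        now = now or datetime.now(timezone.utc)
        return now < self.expiration

issued = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
cred = TemporaryCredential("ASIAEXAMPLE", issued + timedelta(hours=1))

# Valid immediately after issue, useless an hour after expiry.
print(cred.is_valid(now=issued))                       # True
print(cred.is_valid(now=issued + timedelta(hours=2)))  # False
```

Compare this with a static access key in a state file: the attacker in this case study could have used it months or years after it was written.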
How secrets reach applications
How secrets are delivered to applications matters just as much as where they are stored. Environments that inject secrets at runtime through APIs or service level integrations tend to leak less over time than those that embed credentials directly in configuration files or code. Static placement makes secrets easy to copy, forget, and reuse in places they were never intended to exist.
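A common pattern for runtime delivery is reading the secret from the process environment at startup and failing fast if it is absent, rather than committing it to a config file. A minimal sketch; `API_TOKEN` is a hypothetical variable name:

```python
import os

def require_secret(name):
    """Read a secret injected into the environment at deploy time.

    Failing fast keeps a missing secret visible at startup, instead of
    letting a placeholder or hardcoded fallback slip into production.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} was not injected")
    return value

# Simulate the orchestrator injecting the secret at launch.
os.environ["API_TOKEN"] = "injected-at-runtime"
token = require_secret("API_TOKEN")
```

The value never appears in the repository or the image; it exists only in the running process and whatever injected it.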
Finding leaks before attackers do
Detection plays a role as well. Regular scanning of repositories, logs, and storage services for exposed credentials often uncovers issues long before an attacker would. These findings are rarely surprising. They are usually remnants of earlier work that no longer has an owner.
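Much of this scanning reduces to pattern matching: AWS access key IDs, for example, follow a recognisable format (an `AKIA` or `ASIA` prefix followed by 16 uppercase alphanumeric characters). A minimal scanner over text content, using AWS's documented example key as the sample:

```python
import re

# AWS access key IDs begin with a known prefix followed by 16
# characters from [A-Z0-9]; this pattern catches the common forms.
AWS_KEY_PATTERN = re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b")

def scan_for_aws_keys(text):
    """Return any strings that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)

sample = """
# leftover from an old deploy script
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
"""
print(scan_for_aws_keys(sample))
```

Purpose-built tools cover far more credential formats and cut down false positives with entropy checks, but even this level of scanning over repositories and storage surfaces the forgotten remnants described above.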
Constraining the blast radius
Finally, access to secrets themselves is often broader than necessary. Where secrets management systems enforce tight permissions and automated rotation, the impact of a single leak is naturally constrained. Where access is permissive and rotation is manual or ad hoc, secrets tend to accumulate power over time.
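Rotation hygiene can be audited the same way: flagging secrets whose last rotation exceeds a policy threshold. A sketch with the inventory represented as plain data; the 90-day window and secret names are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # illustrative rotation policy

def stale_secrets(inventory, now):
    """Return names of secrets not rotated within the allowed window."""
    return [name for name, rotated in inventory.items()
            if now - rotated > MAX_AGE]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
inventory = {
    "prod/db": datetime(2024, 5, 20, tzinfo=timezone.utc),        # recently rotated
    "legacy/api-key": datetime(2022, 1, 1, tzinfo=timezone.utc),  # forgotten
}
print(stale_secrets(inventory, now))
```

It is usually the `legacy` entries, rotated years ago and owned by no one, that turn a single leak into the kind of cross-platform chain described in this case.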