Authors: Sheldon Lo A Njoe, Spectro Cloud; Nicolas Ferrao, Airbus Defence and Space
Software supply chain attacks are here to stay
For years, cybersecurity strategies were built around a simple idea: if you can protect the perimeter of your infrastructure and harden the systems, you’ll be able to keep attackers out.
Firewalls, network segmentation, endpoint hardening, and vulnerability patching defined the defensive posture of most organizations. If the perimeter held, the system was assumed to be safe.
But that model relied on one critical assumption: that whatever runs inside the perimeter can be trusted. That assumption no longer holds.
In December 2020, security teams across 18,000 organizations made a chilling discovery: the software update they had carefully downloaded, verified, and installed from their trusted vendor, SolarWinds, contained a backdoor planted by a nation-state adversary. No firewall was bypassed. No vulnerability was exploited. The malicious code arrived through the front door, signed, packaged, and delivered as a routine update.
In March 2024, the xz Utils incident revealed something even more unsettling. A patient attacker spent years building trust with the maintainer of an obscure (but widely used) compression library, contributing legitimate code improvements before quietly inserting a sophisticated backdoor targeting SSH across nearly every major Linux distribution. It was caught by a single engineer who noticed a half-second delay in SSH logins, days before it would have shipped to millions of systems.
These were not isolated events (and in fact we see more every month). They were signals of a fundamental shift in how adversaries operate. Attackers no longer need to break in… they are being invited in.
The invisible foundation beneath your applications
For anyone building on modern cloud-native infrastructure, from containerized workloads to distributed systems and CI/CD-driven delivery, this shift must fundamentally change how we approach the work we do.
That’s because there is an uncomfortable truth most teams don’t talk about: the software running in your production environments is overwhelmingly not made from your code.
A typical containerized application consists of maybe 10–20% custom business logic. The remaining 80–90% is a towering stack of base images, OS packages, language runtimes, open-source libraries, and transitive dependencies, each with its own maintainers, build systems, and distribution channels.
Every layer in that stack is a potential entry point for an attacker.
Every time you deploy, whether through a Kubernetes manifest, a CI/CD pipeline, or a cloud-native orchestrator, you’re tacitly expressing trust in dozens of independent supply chains simultaneously.
But what evidence do you have that any of those artifacts are what they claim to be?
For most organizations, the honest answer is: none. The question we must ask now is no longer “is my system secure?” but rather “can I prove that what I am running is what I think it is?”
From hardening to verifiable trust: a shift in mindset
For a long time, security strategies focused on hardening, in all its forms:
- Patching known vulnerabilities
- Reducing attack surface
- Enforcing network boundaries
- Scanning images and dependencies
To be absolutely clear, these activities still matter. Leave any weak spot, and attackers will exploit it. But hardening always assumes that you are defending against known weaknesses in trusted software. That assumption is increasingly fragile.
To move away from a simplistic trust model, we recommend evolving beyond isolated controls toward a broader security mindset built on three complementary pillars:
1. Hardening: protect what you know
This is where most organizations are already mature. Regular tasks like vulnerability scanning, patch management, runtime protection, and network controls are standard practice and meaningfully reduce exposure to known risks.
But hardening does not answer the deeper question: can I trust what I am running in the first place? That’s where the second pillar comes in.
2. Supply chain integrity: trust only what you can prove
Instead of assuming software is legitimate by default, we require cryptographic evidence of its origin and integrity. Controls include:
- Reproducible builds
- Signed artifacts
- Provenance and attestations
- Policy enforcement at deployment
Frameworks like SLSA, developed under the auspices of the OpenSSF, are gaining popularity and evolving rapidly, building a body of best practices and standards that organizations can follow.
3. Evidence retention: understand what happened
With hardening and supply chain controls, you have taken care of preventative measures. But we’re not done yet. There’s a third, often overlooked, pillar that is just as critical: we need insight and evidence to understand our security posture over time, and to answer questions that may only be asked in the future.
Consider this scenario:
A vulnerability in an open source project is disclosed today. It affects a component you deployed months ago.
At that time, the vulnerability was unknown, so none of your scans or checks triggered an alert, and no anomaly was detected. So now we need to ask: was the vulnerability present back then? And if so, did an attacker exploit it? In many environments, there is no way to answer that with confidence, yet that is exactly the ability you need.
Remember, we’re dealing with attackers that are extremely patient. As recent incidents have shown, they can insert malicious code into the supply chain, let it propagate silently, and activate it only when needed… potentially months or even years later.
Without retained evidence (such as build metadata, provenance, and deployment records) you are left with uncertainty instead of answers.
A new security baseline
These three pillars are not independent controls. They form a continuum of trust:
- Hardening reduces risk
- Supply chain integrity ensures authenticity
- Evidence retention enables accountability
Together, they move us toward a more realistic goal of not just securing systems, but being able to prove, at any point in time, what was running.
Focusing on five ways attackers exploit your supply chain
As we’ve seen, supply chain attacks are not theoretical. They target real, specific weaknesses in how software is built, stored, and deployed. Understanding these attack vectors is the first step toward defending against them. There are five in particular that we believe are critical to understand and address.
1. Compromised build infrastructure
The attack: An adversary gains access to your CI/CD pipeline, whether that’s a Jenkins server, a GitHub Actions runner, or a GitLab CI executor, and injects malicious code during the build process. The resulting artifact looks legitimate because it was built by your infrastructure. It just wasn’t built from the code you think it was. And a vulnerability scanner won’t help, because all they do is compare artifacts against databases of known bad code. A custom payload injected during your build won’t match any signature.
Real-world precedent: This is exactly how SolarWinds was compromised. The attackers modified the build process itself, so the malicious code was compiled into official, signed releases.
2. Dependency confusion and substitution
The attack: An attacker publishes a malicious package to a public registry using the same name as an internal package your organization uses. Due to how most package managers resolve dependencies, the public (malicious) version gets pulled instead of your private one. Your build process follows its normal workflow, and the substitution happens silently. The artifact passes all your existing checks because the malicious dependency is a real package… it’s just not the one you intended.
Real-world precedent: The ua-parser-js incident in 2021 saw a legitimate npm package with 8 million weekly downloads hijacked to distribute cryptominers. The event-stream attack in 2018 targeted a specific cryptocurrency wallet through a seemingly innocuous dependency update.
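The defense against substitution is to trust content, not names. As a minimal sketch (with an invented package name and a simulated pin, not a real lock file), the idea behind hash-pinned dependencies looks like this:

```python
import hashlib

# Pinned digests for approved dependencies. The values here are simulated;
# in practice they would come from a lock file such as pip's hash-checking
# mode or npm's package-lock.json.
PINNED = {
    "internal-utils-1.4.2.tar.gz": hashlib.sha256(b"trusted build").hexdigest(),
}

def verify_download(filename: str, payload: bytes) -> bool:
    """Reject any artifact whose content hash differs from the pin,
    even when the name and version match exactly."""
    expected = PINNED.get(filename)
    if expected is None:
        return False  # unknown artifact: fail closed
    return hashlib.sha256(payload).hexdigest() == expected

# A name-squatted package from a public registry has the right filename
# but different content, so the hash check fails.
assert verify_download("internal-utils-1.4.2.tar.gz", b"trusted build")
assert not verify_download("internal-utils-1.4.2.tar.gz", b"malicious build")
```

Because the check keys on the content digest rather than the package name, a public lookalike with the same name simply cannot pass.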
3. Source tampering (time-of-check to time-of-use)
The attack: Code passes review, gets approved, and is merged. But between the moment it’s reviewed and the moment it’s built, an attacker modifies the source. The build system faithfully compiles the tampered code. The artifact is technically “built from the main branch,” but it’s not the code that was reviewed. If your CI pipeline builds from a Git tag like v1.2.3 or a mutable branch reference, an attacker with push access can move that reference to point at different code. The build log says “built from v1.2.3” but the actual content has changed.
Real-world precedent: The tj-actions/changed-files compromise in March 2025 (CVE-2025-30066) is a textbook case. Attackers gained access to the action's repository, injected malicious code, then retroactively updated existing version tags (v1 through v45) to point at the tampered commit. Over 23,000 repositories referenced the action by mutable tag, meaning their CI/CD pipelines started running code that nobody had reviewed. The fix was simple in theory, painful in practice: pin every GitHub Action to its full 40-character commit SHA, not a tag. Tags are mutable pointers; SHAs aren't.
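The tag-versus-SHA distinction can be reduced to a toy model. This sketch (with hypothetical tag and commit values) treats a tag as what it really is, a mutable pointer, and shows why verifying against a recorded commit catches a moved tag:

```python
# Toy model: a tag is a mutable pointer into history; a SHA names
# immutable content. Repo state below is hypothetical.
tags = {"v1.2.3": "a1b2c3d"}

# At review time, provenance records the exact commit the tag resolved to.
reviewed_sha = tags["v1.2.3"]

def safe_to_build(ref: str) -> bool:
    """Build only if the ref still resolves to the reviewed commit."""
    return tags.get(ref) == reviewed_sha

assert safe_to_build("v1.2.3")   # nothing moved yet

# An attacker with push access silently retargets the tag after review...
tags["v1.2.3"] = "deadbee"

assert not safe_to_build("v1.2.3")  # the moved tag no longer matches
```

The build log can still truthfully say "built from v1.2.3"; only the recorded SHA exposes that the content changed underneath it.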
4. Compromised developer credentials
The attack: An attacker obtains a developer’s credentials through phishing, malware, a leaked token, or a compromised laptop, and uses them to push malicious code, sign artifacts, or publish directly to a registry. If signing keys are stored in software, whether in environment variables, CI secrets, or config files, they can be stolen through the same channels as any other credential.
Real-world precedent: The PHP project’s Git server was compromised in 2021 when attackers pushed malicious commits under the names of known maintainers. npm has experienced multiple account takeover incidents where attackers published malicious versions of popular packages.
5. Registry and distribution attacks
The attack: An attacker compromises the registry itself (Docker Hub, a private registry, an artifact store) or intercepts artifacts in transit. They replace a legitimate image with a tampered one. If you reference images by mutable tags like :latest or :v1.2, the swap is invisible.
Real-world precedent: Docker Hub suffered a breach in 2019 exposing 190,000 accounts. PyPI and npm have both been targeted by attackers uploading malicious packages that typosquat popular library names.
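Content-addressed references neutralize this entire class of attack. The sketch below (with a stand-in manifest and an invented registry hostname) shows the two checks that matter: the reference must carry a digest, and the fetched bytes must actually hash to it:

```python
import hashlib
import re

DIGEST_RE = re.compile(r"@sha256:([a-f0-9]{64})$")

def verify_image(reference: str, manifest_bytes: bytes) -> bool:
    """Accept an image only when it is referenced by digest and the
    fetched manifest actually hashes to that digest."""
    m = DIGEST_RE.search(reference)
    if not m:
        return False  # mutable tag such as :latest, refuse outright
    return hashlib.sha256(manifest_bytes).hexdigest() == m.group(1)

manifest = b'{"schemaVersion": 2}'  # stand-in for a real OCI manifest
digest = hashlib.sha256(manifest).hexdigest()

assert verify_image(f"registry.example.com/app@sha256:{digest}", manifest)
assert not verify_image("registry.example.com/app:latest", manifest)
assert not verify_image(f"registry.example.com/app@sha256:{digest}", b"swapped")
```

A compromised registry can serve whatever bytes it likes, but it cannot make tampered bytes hash to the digest the client demanded.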
The shift from scanning to proving
At the risk of oversimplifying, much of the security industry’s response to supply chain risk has been… more scanning. Scan containers for CVEs. Scan dependencies for known vulnerabilities. Scan configurations for drift.
As we suggested earlier, scanning is necessary, but it answers the wrong question. It asks: “Does this artifact contain known-bad code?” It does not ask: “Is this artifact what I think it is? Where did it come from? Who built it? From what source code?”
This is where provenance comes in. Instead of searching for known problems after the fact, provenance establishes a cryptographic chain of evidence before an artifact is ever deployed. It shifts the model from reactive detection to proactive verification. No verification, no deployment.
SLSA: A framework for supply chain integrity
The Supply-chain Levels for Software Artifacts (SLSA) framework, originally developed at Google and now stewarded by the Open Source Security Foundation (OpenSSF), provides a graduated model for achieving exactly this.
At its core, SLSA mandates that every artifact carries a signed attestation, a cryptographic statement declaring:
- What source code was used (exact repository URL and commit SHA)
- How it was built (the build system, build definition, and inputs)
- Who authorized and executed the build (builder identity)
- When the build occurred (immutable timestamp)
This attestation travels with the artifact. It can be verified anywhere: in your CI pipeline, at your registry, at the Kubernetes admission controller, or on an air-gapped edge node with no network access. The verification doesn’t depend on trusting any intermediary. It depends on cryptography.
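To make the shape of such an attestation concrete, here is a deliberately simplified sketch. The repository URL, commit, and builder ID are invented, and an HMAC stands in for the asymmetric signatures real tooling (such as Sigstore) uses, so that the example stays dependency-free:

```python
import hashlib
import hmac
import json

# Stand-in for a hardware-backed signing key; real systems use asymmetric
# keys so verifiers never hold the signing secret.
SIGNING_KEY = b"hardware-backed-key-stand-in"

def attest(artifact: bytes) -> dict:
    """Produce a minimal SLSA-style provenance statement for an artifact."""
    statement = {
        "subject": {"sha256": hashlib.sha256(artifact).hexdigest()},
        "source": {"repo": "https://github.com/example/app",  # hypothetical
                   "commit": "a1b2c3d"},                      # exact commit, not a tag
        "builder": {"id": "https://ci.example.com/runner/7"}, # hypothetical
        "buildFinishedOn": "2025-01-01T00:00:00Z",
    }
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"statement": statement, "signature": sig}

def verify(artifact: bytes, att: dict) -> bool:
    """Check both the signature and that the attestation covers THIS artifact."""
    payload = json.dumps(att["statement"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(att["signature"], expected)
            and att["statement"]["subject"]["sha256"]
                == hashlib.sha256(artifact).hexdigest())

image = b"container image bytes"
att = attest(image)
assert verify(image, att)
assert not verify(b"tampered image", att)  # subject digest no longer matches
```

Note that verification needs no network call and no trust in the registry that delivered the artifact, which is exactly the property that matters offline.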
Closing the five vectors
Each of the attack vectors we discussed above has a specific SLSA-aligned countermeasure, shown in the table below. These controls also provide layered protection throughout the stages of the software lifecycle, from source commits to deployment.
| Attack Vector | Countermeasure | How it works |
| --- | --- | --- |
| 1. Compromised build | Signed provenance with builder identity | Build attestation proves which build system produced the artifact. A compromised builder’s attestation won’t match the expected identity. |
| 2. Dependency confusion | Provenance records resolved dependencies | Attestation includes exact dependency digests. Substituted packages produce different hashes, breaking verification. |
| 3. Source tampering | Immutable commit SHA in provenance | Attestation links to the exact Git commit. Moving a tag doesn’t change the recorded SHA. |
| 4. Stolen credentials | Hardware-backed signing keys | Signing keys stored on hardware tokens (like YubiKeys) can’t be extracted or remotely duplicated. Physical presence required. |
| 5. Registry tampering | Content-addressed references + admission control | Images referenced by digest (@sha256:...) can’t be swapped. Policy engines reject images without valid signatures. |
Making it real: the toolchain
The SLSA framework is tool-agnostic, but the CNCF ecosystem has converged on a practical stack that organizations are deploying today:
Signing and attestation. Tools like Cosign (part of the Sigstore project) enable signing container images and attaching SLSA attestations directly to OCI registries. When combined with hardware security keys, the signing key physically cannot be stolen through a network attack. For a practical example of how this works in production, see how Artifact Studio implements signed, version-controlled downloads.
Policy enforcement. Kubernetes admission controllers like Kyverno act as the last line of defence. They intercept every Pod creation request and validate that the container image has a valid signature and a valid SLSA provenance attestation. No signature? No attestation? The Pod doesn’t run.
GitOps delivery. Tools like ArgoCD and Flux ensure that the desired state in Git is the deployed state in the cluster. Combined with digest-pinned image references and admission policies, this creates a fully auditable, tamper-evident deployment pipeline.
None of these tools work in isolation. For example, there’s no point in an image being signed, if you don’t validate it before deployment. The value is in the chain, each link reinforcing the others.
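That chain logic can be summarized as a fail-closed admission decision. In this toy sketch, each boolean stands in for a real verifier (for example, a Kyverno policy invoking Cosign); the point is simply that every link must hold:

```python
def admit(image: dict) -> bool:
    """Fail closed: every link in the verification chain must hold."""
    checks = [
        image.get("pinned_by_digest", False),  # no mutable tags
        image.get("signature_valid", False),   # Cosign-style signature checks out
        image.get("provenance_valid", False),  # SLSA attestation verifies
    ]
    return all(checks)

assert admit({"pinned_by_digest": True,
              "signature_valid": True,
              "provenance_valid": True})

# A signed image still fails if provenance is missing:
# the value is in the chain, not in any single control.
assert not admit({"pinned_by_digest": True, "signature_valid": True})
```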
The edge imperative
If supply chain integrity matters in the cloud, it is non-negotiable at the edge. That’s because edge Kubernetes deployments operate under constraints that break many of the assumptions traditional security models rely on:
- Intermittent or nonexistent connectivity means you cannot rely on reaching a central authority.
- Physical access introduces the possibility of direct hardware tampering.
- Limited on-site expertise means security must be automated, not manual.
In these environments, trust cannot depend on the network. It must be embedded in the artifact itself.
A signed container image, carrying its SLSA attestation, becomes a self-contained unit of trust. It can be transported via USB, deployed offline, and still be verified. Verification requires only a public key, no need for a network call.
From connected trust to autonomous verification
Now imagine what happens when we bring together hardware trust anchors, cryptographic attestation and Kubernetes admission control on an edge cluster. Using these technologies in unison, each cluster can independently verify that every workload it runs:
- Originates from reviewed source code
- Was built by an authorized system
- Is signed with a hardware-backed key requiring physical presence
And it can do so in an automated and enforceable way, even when offline.
Platforms that manage Kubernetes across distributed environments, from cloud data centres to factory floors to disconnected field sites, are increasingly building these verification capabilities into their core architecture. Spectro Cloud’s Palette, for example, integrates artifact verification and hardware-rooted Trusted Boot into its edge management stack, recognizing that supply chain integrity is a foundational requirement, not an aftermarket add-on.
The regulatory reality
Software supply chain security is not just a technical evolution emerging from community best practice. It’s becoming a compliance requirement.
The EU Cyber Resilience Act will impose security requirements across the entire software lifecycle for products sold in European markets.
Across the Atlantic, the US Executive Order 14028, on Improving the Nation's Cybersecurity, established SBOM (Software Bill of Materials) requirements for federal suppliers, while NIST SP 800-218 defines secure software development practices that explicitly include build integrity. FedRAMP and CMMC are incorporating supply chain controls into their assessment frameworks.
Organizations that treat supply chain security as optional today will find themselves scrambling when these mandates hit their industry. The technical foundations (signing, attestation, provenance, and policy enforcement) take time to implement well. In other words, starting now doesn’t mean you’re early; you’re just in time.
For a deeper look at how these requirements apply to government and defence contexts, see the PSUG supply chain security white paper.
Where to start
Supply chain security can feel overwhelming, but there is an incremental path you can follow, with clear benefits along the way:
- Sign your images. Start with Cosign. Even keyless signing through Sigstore’s Fulcio CA is a massive step up from unsigned artifacts.
- Generate provenance. Add SLSA attestations to your CI pipeline. Record the source, build inputs, and builder identity for every artifact.
- Enforce at admission. Deploy a policy engine like Kyverno (see Spectro Cloud's blog on Kubernetes policy enforcement with Kyverno and Palette) and require valid signatures before any workload runs in your cluster. Start in audit mode, then move to enforce.
- Pin your references. Stop using mutable tags. Reference every image by its content digest (@sha256:...). This eliminates an entire class of registry-based attacks.
- Harden your signing keys. Move from software keys to hardware-backed keys. A YubiKey costs less than an hour of incident response.
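The digest-pinning step in particular is easy to check for automatically. As a rough sketch (using a naive line match rather than a real YAML parser, and invented image names), a lint that flags mutable references might look like:

```python
import re

# Naive lint: flag any container image reference not pinned by digest.
# A production check would parse the manifest as YAML instead.
IMAGE_LINE = re.compile(r"image:\s*(\S+)")

def mutable_references(manifest: str) -> list:
    refs = IMAGE_LINE.findall(manifest)
    return [r for r in refs if "@sha256:" not in r]

manifest = """
  image: registry.example.com/api@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
  image: registry.example.com/web:latest
"""
# Only the tag-based reference is flagged.
assert mutable_references(manifest) == ["registry.example.com/web:latest"]
```

Running a check like this in CI turns "stop using mutable tags" from a guideline into a gate.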
Each step independently improves your security posture. Together, they create a verification chain that addresses every major supply chain attack vector.
The bottom line
Software supply chain attacks are not hypothetical. They are happening now, at increasing scale and sophistication. The defenders who will weather the next SolarWinds, or the next xz Utils, are the ones who stopped relying on implicit trust and started demanding cryptographic proof. The software you trust should be able to prove it deserves that trust.
The tools exist, and they’re already being used widely. The frameworks are maturing, with broad pan-industry participation. The standards are solidifying and are increasingly passing into regulation.
If you’re an IT leader, your next decision is how you’re going to respond. Will you adopt supply chain verification proactively, on your timeline and with your architecture… or reactively after an incident forces your hand?