SLSA is a specification for describing and incrementally improving supply chain
security, established by industry consensus. It is organized into a series of
levels that describe increasing security guarantees.
This is the Working Draft of what the next version of the SLSA
specification might be. It defines several SLSA levels and tracks, as
well as recommended attestation formats, including provenance.
Understanding SLSA
This subsection provides an overview of SLSA, how it helps protect against common supply chain attacks, and common use cases. If you’re new to SLSA or supply chain security, start here.
Core specification
This subsection describes SLSA’s security levels and requirements for each track. If you want to achieve a particular SLSA level, these are the requirements you’ll need to meet.
Attestation formats
This subsection includes the concrete schemas for SLSA attestations. The Provenance and VSA formats are recommended, but not required by the specification.
What's new
This document describes the major changes brought by this Working
Draft relative to the prior release, v1.0.
Summary of changes
- Clarify that attestation format schemas are informative and that the
specification texts (SLSA and in-toto attestation) are the canonical
source of definitions.
- Add procedure for verifying VSAs.
- Add verifier metadata to VSA format.
- It is now recommended that the digest field of ResourceDescriptor is set
in a Verification Summary Attestation’s (VSA) policy object.
- Further refine the threat model.
- Add draft of SLSA Source Track.
- Add draft of SLSA Build Environment Track.
About SLSA
This section is an introduction to SLSA and its concepts. If you’re new
to SLSA, start here!
What is SLSA?
Supply-chain Levels for Software Artifacts, or SLSA (“salsa”), is a set of incrementally adoptable guidelines for supply chain security,
established by industry consensus. The specification set by SLSA is useful for
both software producers and consumers: producers can follow SLSA’s guidelines to
make their software supply chain more secure, and consumers can use SLSA to make
decisions about whether to trust a software package.
SLSA offers:
- A common vocabulary to talk about software supply chain security
- A way to secure your incoming supply chain by evaluating the trustworthiness of the artifacts you consume
- An actionable checklist to improve your own software’s security
- A way to measure your efforts toward compliance with the Secure Software Development Framework (SSDF)
Why SLSA is needed
High-profile attacks like those against SolarWinds or Codecov have exposed supply
chain integrity weaknesses that may go unnoticed, yet quickly become very
public, disruptive, and costly in today’s environment when exploited. They’ve
also shown that there are inherent risks not just in code itself, but at
multiple points in the complex process of getting that code into software
systems—that is, in the software supply chain. Since these attacks are on
the rise and show no sign of decreasing, a universal framework for hardening the
software supply chain is needed, as affirmed by the
U.S. Executive Order on Improving the Nation’s Cybersecurity.
Security techniques for vulnerability detection and analysis of source code are
essential, but are not enough on their own. Even after fuzzing or vulnerability
scanning is completed, changes to code can happen, whether unintentionally or
from insider threats or compromised accounts. Risk for code modification exists at
each link in a typical software supply chain, from source to build through
packaging and distribution. Any weaknesses in the supply chain undermine
confidence in whether the code that you run is actually the code that you
scanned.
SLSA is designed to support automation that tracks code handling from source
to binary, protecting against tampering regardless of the complexity
of the software supply chain. As a result, SLSA increases trust that the
analysis and review performed on source code can be assumed to still apply to
the binary consumed after the build and distribution process.
SLSA in layperson’s terms
There has been a lot of discussion about the need for “ingredient labels” for
software—a “software bill of materials” (SBOM) that tells users what is in their
software. Building off this analogy, SLSA can be thought of as all the food
safety handling guidelines that make an ingredient list credible. From standards
for clean factory environments so contaminants aren’t introduced in packaging
plants, to the requirement for tamper-proof seals on lids that ensure nobody
changes the contents of items sitting on grocery store shelves, the entire food
safety framework ensures that consumers can trust that the ingredient list
matches what’s actually in the package they buy.
Likewise, the SLSA framework provides this trust with guidelines and
tamper-resistant evidence for securing each step of the software production
process. That means you know not only that nothing unexpected was added to the
software product, but also that the ingredient label itself wasn’t tampered with
and accurately reflects the software contents. In this way, SLSA helps protect
against the risk of:
- Code modification (by adding a tamper-evident “seal” to code after
source control)
- Uploaded artifacts that were not built by the expected CI/CD platform (by marking
artifacts with a factory “stamp” that shows which build platform created them)
- Threats against the build platform (by providing “manufacturing facility”
best practices for build platform services)
For more exploration of this analogy, see the blog post
SLSA + SBOM: Accelerating SBOM success with the help of SLSA.
Who is SLSA for?
In short: everyone involved in producing and consuming software, or providing
infrastructure for software.
Software producers, such as an open source project, a software vendor, or a
team writing first-party code for use within the same company. SLSA gives you
protection against tampering along the supply chain to your consumers, both
reducing insider risk and increasing confidence that the software you produce
reaches your consumers as you intended.
Software consumers, such as a development team using open source packages, a
government agency using vendored software, or a CISO judging organizational
risk. SLSA gives you a way to judge the security practices of the software you
rely on and be sure that what you receive is what you expected.
Infrastructure providers, who provide infrastructure such as an ecosystem
package manager, build platform, or CI/CD platform. As the bridge between the
producers and consumers, your adoption of SLSA enables a secure software supply
chain between them.
How SLSA works
We talk about SLSA in terms of tracks and levels.
A SLSA track focuses on a particular aspect of a supply chain, such as the Build
Track.
Within each track, ascending levels indicate increasingly hardened security
practices. Higher levels provide better guarantees against supply chain threats,
but come at higher implementation costs. Lower SLSA levels are designed to be
easier to adopt, but with only modest security guarantees. SLSA 0 is sometimes
used to refer to software that doesn’t yet meet any SLSA level. Currently, the
SLSA Build Track encompasses Levels 1 through 3, but we envision higher levels
to be possible in future revisions.
The combination of tracks and levels offers an easy way to discuss whether
software meets a specific set of requirements. By referring to an artifact as
meeting SLSA Build Level 3, for example, you’re indicating in one phrase that
the software artifact was built following a set of security practices that
industry leaders agree protect against particular supply chain compromises.
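The track-and-level model above can be sketched in a few lines of code. This is an illustrative sketch, not part of the SLSA specification: the `TrackLevel` type and `meets` function are hypothetical names chosen for the example.

```python
# Hypothetical sketch of a SLSA track/level claim and a consumer policy
# check. Within a single track, a higher level satisfies any lower
# requirement; levels in different tracks are not comparable.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackLevel:
    track: str   # e.g. "Build"
    level: int   # e.g. 3 for "SLSA Build Level 3"

def meets(claimed: TrackLevel, required: TrackLevel) -> bool:
    return claimed.track == required.track and claimed.level >= required.level

# An artifact attested at Build L3 satisfies a Build L2 policy...
assert meets(TrackLevel("Build", 3), TrackLevel("Build", 2))
# ...but a Build level says nothing about, say, the Source track.
assert not meets(TrackLevel("Build", 3), TrackLevel("Source", 1))
```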
What SLSA doesn’t cover
SLSA is only one part of a thorough approach to supply chain security. There
are several areas outside SLSA’s current framework that are nevertheless
important to consider alongside SLSA, such as:
- Code quality: SLSA does not tell you whether the developers writing the
source code followed secure coding practices.
- Producer trust: SLSA does not address organizations that intentionally
produce malicious software, but it can reduce insider risks within an
organization you trust.
- Transitive trust for dependencies: the SLSA level of an artifact is
independent of the level of its dependencies. You can use SLSA recursively to
also judge an artifact’s dependencies on their own, but there is
currently no single SLSA level that applies to both an artifact and its
transitive dependencies together. For a more detailed explanation of why,
see the FAQ.
Supply chain threats
Attacks can occur at every link in a typical software supply chain, and these
kinds of attacks are increasingly public, disruptive, and costly in today’s
environment.
This section is an introduction to possible attacks throughout the supply chain and how
SLSA could help. For a more technical discussion, see Threats & mitigations.
Summary

Note that SLSA does not currently address all of the threats presented here.
See Threats & mitigations for what is currently addressed and
Terminology for an explanation of the supply chain model.
SLSA’s primary focus is supply chain integrity, with a secondary focus on
availability. Integrity means protection against tampering or unauthorized
modification at any stage of the software lifecycle. Within SLSA, we divide
integrity into source integrity vs build integrity.
Source integrity: Ensure that all changes to the source code reflect the
intent of the software producer. Intent of an organization is difficult to
define, so SLSA is expected to approximate this as approval from two authorized
representatives.
Build integrity: Ensure that the package is built from the correct,
unmodified sources and dependencies according to the build recipe defined by the
software producer, and that artifacts are not modified as they pass between
development stages.
Availability: Ensure that the package can continue to be built and
maintained in the future, and that all code and change history is available for
investigations and incident response.
Real-world examples
Many recent high-profile attacks were consequences of supply chain integrity vulnerabilities, and could have been prevented by SLSA’s framework. For example:
| Threats from | Known example | How SLSA could help |
|---|---|---|
| A: Producer | SpySheriff: Software producer purports to offer anti-spyware software, but that software is actually malicious. | SLSA does not directly address this threat, but could make it easier to discover malicious behavior in open source software by forcing it into the publicly available source code. For closed-source software, SLSA does not provide any solutions for malicious producers. |
| B: Authoring & reviewing | SushiSwap: Contractor with repository access pushed a malicious commit redirecting cryptocurrency to themself. | Two-person review could have caught the unauthorized change. |
| C: Source code management | PHP: Attacker compromised PHP's self-hosted git server and injected two malicious commits. | A better-protected source code system would have been a much harder target for the attackers. |
| D: External build parameters | The Great Suspender: Attacker published software that was not built from the purported sources. | A SLSA-compliant build platform would have produced provenance identifying the actual sources used, allowing consumers to detect such tampering. |
| E: Build process | SolarWinds: Attacker compromised the build platform and installed an implant that injected malicious behavior during each build. | Higher SLSA levels require stronger security controls for the build platform, making it more difficult to compromise and gain persistence. |
| F: Artifact publication | CodeCov: Attacker used leaked credentials to upload a malicious artifact to a GCS bucket, from which users download directly. | Provenance of the artifact in the GCS bucket would have shown that the artifact was not built in the expected manner from the expected source repo. |
| G: Distribution channel | Attacks on Package Mirrors: Researcher ran mirrors for several popular package registries, which could have been used to serve malicious packages. | Similar to F, provenance of the malicious artifacts would have shown that they were not built as expected or from the expected source repo. |
| H: Package selection | Browserify typosquatting: Attacker uploaded a malicious package with a similar name as the original. | SLSA does not directly address this threat, but provenance linking back to source control can enable and enhance other solutions. |
| I: Usage | Default credentials: Attacker could leverage default credentials to access sensitive data. | SLSA does not address this threat. |
| N/A: Dependency threats (i.e. A-H, recursively) | event-stream: Attacker added an innocuous dependency and then later updated the dependency to add malicious behavior. The update did not match the code submitted to GitHub (i.e. attack F). | Applying SLSA recursively to all dependencies would prevent this particular vector, because the provenance would indicate that it either wasn't built from a proper builder or that the source did not come from GitHub. |

| Availability threat | Known example | How SLSA could help |
|---|---|---|
| N/A: Dependency becomes unavailable | Mimemagic: Producer intentionally removes package or version of package from repository with no warning. Network errors or service outages may also make packages unavailable temporarily. | SLSA does not directly address this threat. |
A SLSA level helps give consumers confidence that software has not been tampered
with and can be securely traced back to source—something that is difficult, if
not impossible, to do with most software today.
Use cases
SLSA protects against tampering during the software supply chain, but how?
The answer depends on the use case in which SLSA is applied. The following
describes the three main use cases for SLSA.
Applications of SLSA
First party
Reducing risk within an organization from insiders and compromised accounts
In its simplest form, SLSA can be used entirely within an organization to reduce
risk from internal sources. This is the easiest case in which to apply SLSA
because there is no need to transfer trust across organizational boundaries.
Example ways an organization might use SLSA internally:
- A small company or team uses SLSA to ensure that the code being deployed to
production in binary form is the same code that was tested and reviewed in
source form.
- A large company uses SLSA to require two-person review for every production
change, scalably across hundreds or thousands of employees/teams.
- An open source project uses SLSA to ensure that compromised credentials
cannot be abused to release an unofficial package to a package registry.
Case study: Google (Binary Authorization for Borg)
Open source
Reducing risk from consuming open source software
SLSA can also be used to reduce risk for consumers of open source software. The
focus here is to map built packages back to their canonical sources and
dependencies. In this way, consumers need only trust a small number of secure
build platforms rather than the many thousands of developers with upload
permissions across various packages.
Example ways an open source ecosystem might use SLSA to protect users:
- At upload time, the package registry rejects the package if it was not built
from the canonical source repository.
- At download time, the packaging client rejects the package if it was not
built by a trusted builder.
Case study: SUSE
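The upload-time check described above can be sketched as follows. This is an illustrative sketch only: the `accept_upload` function and the `source_repository` field are assumed names, not the normative provenance schema.

```python
# Sketch of an upload-time registry check: a package is accepted only if
# the source repository recorded in its provenance matches the canonical
# repository the registry has on file for that package name.
def accept_upload(package_name: str, provenance: dict, canonical_repos: dict) -> bool:
    expected = canonical_repos.get(package_name)
    if expected is None:
        return False  # unknown package: nothing to verify against
    return provenance.get("source_repository") == expected

canonical = {"acme-lib": "https://github.com/acme/acme-lib"}
good = {"source_repository": "https://github.com/acme/acme-lib"}
bad = {"source_repository": "https://github.com/attacker/acme-lib"}
assert accept_upload("acme-lib", good, canonical)
assert not accept_upload("acme-lib", bad, canonical)
```

A download-time client check would be symmetric, comparing the provenance's builder identity against a list of trusted builders.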
Vendors
Reducing risk from consuming vendor provided software and services
Finally, SLSA can be used to reduce risk for consumers of vendor provided
software and services. Unlike open source, there is no canonical source
repository to map to, so instead the focus is on trustworthiness of claims made
by the vendor.
Example ways a consumer might use SLSA for vendor provided software:
- Prefer vendors who make SLSA claims and back them up with credible evidence.
- Require a vendor to implement SLSA as part of a contract.
- Require a vendor to be SLSA certified from a trusted third-party auditor.
Guiding principles
This section is an introduction to the guiding principles behind SLSA’s design
decisions.
Simple levels with clear outcomes
Use levels to communicate security state and to encourage a large
population to improve its security stance over time. When necessary, split
levels into separate tracks to recognize progress in unrelated security areas.
Reasoning: Levels simplify how to think about security by boiling a complex
topic into an easy-to-understand number. It is clear that level N is better than
level N-1, even to someone with passing familiarity. This provides a convenient
way to describe current security state as well as a natural path to improvement.
Guidelines:
- Define levels in terms of concrete security outcomes. Each level should
have clear and meaningful security value, such as stopping a particular
class of threats. Levels should represent security milestones, not just
incremental progress. Give each level an easy-to-remember mnemonic, such as
“Provenance exists”.
- Balance level granularity. Too many levels makes SLSA hard to understand
and remember; too few makes each level hard to achieve. Collapse levels
until each step requires a non-trivial but manageable amount of work to
implement. Separate levels if they require significant work from multiple
distinct parties, such as infrastructure work plus user behavior changes, so
long as the intermediate level still has some security value (prior bullet).
- Use tracks sparingly. Additional tracks add extra complexity to SLSA, so
a new track should be seen as a last resort. Each track should have a clear,
distinct purpose with a crisply defined objective, such as trustworthy
provenance for the Build track. As a rule of thumb, a
new track may be warranted if it addresses threats unrelated to another
track. Try to avoid tracks that sound confusingly similar in either name or
objective.
Trust platforms, verify artifacts
Establish trust in a small number of platforms and systems—such as change management, build,
and packaging platforms—and then automatically verify the many artifacts
produced by those platforms.
Reasoning: Trusted computing bases are unavoidable—there’s no choice but
to trust some platforms. Hardening and verifying platforms is difficult and
expensive manual work, and each trusted platform expands the attack surface of the
supply chain. Verifying that an artifact is produced by a trusted platform,
though, is easy to automate.
To simultaneously scale and reduce attack surface, it is most efficient to trust a limited
number of platforms and then automate verification of the artifacts they produce.
The attack surface and the work to establish trust then do not scale with the number of
artifacts produced, as they would if each artifact used a different trusted platform.
Benefits: Allows SLSA to scale to entire ecosystems or organizations with a near-constant
amount of central work.
Example
A security engineer analyzes the architecture and implementation of a build
platform to ensure that it meets the SLSA Build Track requirements. Following the
analysis, the public keys used by the build platform to sign provenance are
“trusted” up to the given SLSA level. Downstream platforms verify the provenance
signed by the public key to automatically determine that an artifact meets the
SLSA level.
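The automated check in the example above can be sketched as follows. A real deployment would use asymmetric signatures (e.g. DSSE envelopes over in-toto statements); an HMAC stands in here so the sketch runs with only the standard library, and all names (`TRUSTED_KEYS`, `verify`, the provenance fields) are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Map of trusted key id -> (key material, maximum SLSA Build level that
# provenance signed by this key is trusted up to).
TRUSTED_KEYS = {"builder-key-1": (b"secret-key-material", 3)}

def verify(artifact: bytes, provenance: dict, signature: str,
           key_id: str, required_level: int) -> bool:
    entry = TRUSTED_KEYS.get(key_id)
    if entry is None:
        return False  # unknown signer
    key, trusted_level = entry
    if trusted_level < required_level:
        return False  # signer not trusted up to the required level
    payload = json.dumps(provenance, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # provenance was tampered with or signed by another key
    # Finally, the provenance must actually describe this artifact.
    return provenance.get("subject_sha256") == hashlib.sha256(artifact).hexdigest()

artifact = b"example artifact bytes"
prov = {"subject_sha256": hashlib.sha256(artifact).hexdigest(), "builder": "trusted-ci"}
sig = hmac.new(b"secret-key-material",
               json.dumps(prov, sort_keys=True).encode(), hashlib.sha256).hexdigest()
assert verify(artifact, prov, sig, "builder-key-1", required_level=3)
assert not verify(b"tampered", prov, sig, "builder-key-1", required_level=3)
```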
A corollary to this principle is to minimize the size of the trusted computing
base. Every platform we trust adds attack surface and increases the need for
manual security analysis. Where possible:
- Concentrate trust in shared infrastructure. For example, instead of each
team within an organization maintaining their own build platform, use a
shared build platform. Hardening work can be shared across all teams.
- Remove the need to trust components. For example, use end-to-end signing
to avoid the need to trust intermediate distribution platforms.
Trust code, not individuals
Securely trace all software back to source code rather than trust individuals who have write access to package registries.
Reasoning: Code is static and analyzable. People, on the other hand, are prone to mistakes,
credential compromise, and sometimes malicious action.
Benefits: Removes the possibility for a trusted individual—or an
attacker abusing compromised credentials—to tamper with source code
after it has been committed.
Prefer attestations over inferences
Require explicit attestations about an artifact’s provenance; do not infer
security properties from a platform’s configurations.
Reasoning: Theoretically, access control can be configured so that the only path from
source to release is through the official channels: the CI/CD platform pulls only
from the proper source, package registry allows access only to the CI/CD platform,
and so on. We might infer that we can trust artifacts produced by these platforms
based on the platform’s configuration.
In practice, though, these configurations are almost impossible to get right and
keep right. Over-provisioning, confused-deputy problems, and simple mistakes are
common. Even if a platform is configured properly at one moment, it might not
stay that way, and humans almost always end up on the access control lists.
Access control is still important, but SLSA goes further to provide defense in depth: it requires proof in
the form of attestations that the package was built correctly.
Benefits: The attestation removes intermediate platforms from the trust base and ensures that
individuals who are accidentally granted access do not have sufficient permission to tamper with the package.
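A minimal example of the kind of explicit attestation this principle calls for is an in-toto Statement carrying SLSA provenance. The outer field names (`_type`, `subject`, `predicateType`, `predicate`) follow the in-toto attestation framework; the predicate body here is heavily abbreviated and purely illustrative.

```python
import hashlib

artifact = b"built package contents"

# An in-toto Statement asserting provenance for one artifact. In practice
# this would be wrapped in a signed envelope (e.g. DSSE) before delivery.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{"name": "pkg.tar.gz",
                 "digest": {"sha256": hashlib.sha256(artifact).hexdigest()}}],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {"runDetails": {"builder": {"id": "https://example.com/ci"}}},
}

# A verifier checks the claim against the artifact it actually received,
# rather than inferring trust from platform configuration.
received_digest = hashlib.sha256(artifact).hexdigest()
assert statement["subject"][0]["digest"]["sha256"] == received_digest
```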
Support anonymous and pseudonymous contributions
SLSA supports anonymous and pseudonymous ‘identities’ within the software supply chain.
While organizations that implement SLSA may choose otherwise, SLSA itself does not require,
or encourage, participants to be mapped to their legal identities.
Nothing in this specification should be taken to mean that SLSA requires
participants to reveal their legal identity.
Reasoning: SLSA uses identities for multiple purposes: as a trust anchor for attestations
(i.e. who or what is making this claim and do I trust it to do so) or for attributing actions
to an actor. Choice of identification technology is left to the organization and technical
stacks implementing the SLSA standards.
When identities are strongly authenticated and used consistently they can often be leveraged
for both of these purposes without requiring them to be mapped to legal identities.
This reflects how identities are often used in open source, where a legal name
means much less to projects than the history and behavior of a given handle over
time. Meanwhile, some organizations may choose to levy additional requirements on
identities. They are free to do so, but SLSA itself does not require it.
Benefits: By not requiring legal identities SLSA lowers the barriers to its adoption,
enabling all of its other benefits and maintaining support for anonymous and pseudonymous
contribution as has been practiced in the software industry for decades.
Frequently asked questions
Q: Why is SLSA not transitive?
SLSA Build levels only cover the trustworthiness of a single build, with no
requirements about the build levels of transitive dependencies. The reason for
this is to make the problem tractable. If a SLSA Build level required
dependencies to be the same level, then reaching a level would require starting
at the very beginning of the supply chain and working forward. This is
backwards, forcing us to work on the least risky component first and blocking
any progress further downstream. By making each artifact’s SLSA rating
independent from one another, it allows parallel progress and prioritization
based on risk. (This is a lesson we learned when deploying other security
controls at scale throughout Google.) We expect SLSA ratings to be composed to
describe a supply chain’s overall security stance, as described in the case
study vision.
Q: What about reproducible builds?
When talking about reproducible builds, there
are two related but distinct concepts: “reproducible” and “verified
reproducible.”
“Reproducible” means that repeating the build with the same inputs results in
bit-for-bit identical output. This property provides many benefits, including
easier debugging, more confident cherry-pick releases, better build caching and
storage efficiency, and accurate dependency tracking.
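A toy illustration of the "reproducible" property: a deterministic build step yields bit-for-bit identical output for the same inputs, which independent parties can confirm by comparing digests. The `build` function here is a stand-in for a real build, not an actual build tool.

```python
import hashlib

def build(source: bytes) -> bytes:
    # A deterministic transformation: same input, same output, every time.
    # Real builds lose this property through timestamps, absolute paths,
    # nondeterministic ordering, and similar sources of variation.
    return b"HEADER" + hashlib.sha256(source).digest()

source = b"fn main() {}"
first = hashlib.sha256(build(source)).hexdigest()
second = hashlib.sha256(build(source)).hexdigest()  # independent rebuild
assert first == second  # bit-for-bit identical output
```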
“Verified reproducible” means using two or more independent build platforms to
corroborate the provenance of a build. In this way, one can create an overall
platform that is more trustworthy than any of the individual components. This is
often suggested as a solution to supply chain integrity. Indeed, this is one
option to secure build steps of a supply chain. When designed correctly, such a
platform can satisfy all of the SLSA Build level requirements.
That said, verified reproducible builds are not a complete solution to supply
chain integrity, nor are they practical in all cases:
- Reproducible builds do not address source, dependency, or distribution
threats.
- Reproducers must truly be independent, lest they all be susceptible to the
same attack. For example, if all rebuilders run the same pipeline software,
and that software has a vulnerability that can be triggered by sending a
build request, then an attacker can compromise all rebuilders, violating the
assumption above.
- Some builds cannot easily be made reproducible, as noted above.
- Closed-source reproducible builds require the code owner to either grant
source access to multiple independent rebuilders, which is unacceptable in
many cases, or develop multiple, independent in-house rebuilders, which is
likely prohibitively expensive.
Therefore, SLSA does not require verified reproducible builds directly. Instead,
verified reproducible builds are one option for implementing the requirements.
For more on reproducibility, see
Hermetic, Reproducible, or Verifiable?
Q: How does SLSA relate to in-toto?
in-toto is a framework to secure software supply chains
hosted at the Cloud Native Computing Foundation. The in-toto
specification provides a generalized workflow to secure different steps in a
software supply chain. The SLSA specification recommends
in-toto attestations as the vehicle to
express Provenance and other attributes of software supply chains. Thus, in-toto
can be thought of as the unopinionated layer to express information pertaining
to a software supply chain, and SLSA as the opinionated layer specifying exactly
what information must be captured in in-toto metadata to achieve the guarantees
of a particular level.
in-toto’s official implementations written in Go, Java, and Rust include support
for generating SLSA Provenance metadata. These APIs are used in other tools
generating SLSA Provenance, such as Sigstore’s cosign, the SLSA GitHub Generator,
and the in-toto Jenkins plugin.
Q: What is the difference between a build platform, system, and service?
Build platform and build system have been used interchangeably in the past. With
the v1.0 specification, however, there has been a unification around the term
platform, as indicated in the Terminology. The word system is still used for
software and services within the build platform, and for systems outside of a
build platform, like change management systems.
A build service is a hosted build platform that is often run on shared infrastructure
instead of individuals’ machines and workstations. This term, too, has been replaced
outside of the requirements as it relates to the build platform.
Q: Is SLSA the same as TACOS?
No.
Trusted Attestation and Compliance for Open Source (TACOS)
is a framework authored by Tidelift.
Per their website, TACOS is a framework
“for assessing the development practices of open source projects
against a set of secure development standards specified by the (US)
NIST Secure Software Development Framework (SSDF) V1.1” which
“vendors can use to provide self-attestation for the open source components
they rely on.”
In contrast, SLSA is a community-developed framework—including
adoptable guidelines for securing a software supply chain and
mechanism to evaluate the trustworthiness of artifacts you consume—that
is part of the Open Source Security Foundation (OpenSSF).
Q: How do SLSA and SLSA Provenance relate to SBOMs?
Software Bill of Materials (SBOM) are a frequently recommended tool for
increased software supply chain rigor. An SBOM is typically focused on
understanding software in order to evaluate risk through known vulnerabilities
and license compliance. These use-cases require fine-grained and timely data
which can be refined to improve signal-to-noise ratio.
SLSA Provenance and the Build track are focused on trustworthiness of the
build process. To improve trustworthiness, Provenance is generated in the build
platform’s trusted control plane, which in practice results in it being
coarse-grained. For example, in Provenance metadata, completeness of
resolvedDependencies information is on a best-effort basis. Further, the
ResourceDescriptor type does not require version and license information, or
even a URI to the dependency’s original location.
While they likely include similar data, SBOMs and SLSA Provenance operate at
different levels of abstraction. The fine-grained data in an SBOM typically
describes the components present in a produced artifact, whereas SLSA
Provenance more coarsely describes parameters of a build which are external to
the build platform.
The granularity and expressiveness of the two use-cases differs enough that
current SBOM formats were deemed not a good fit for the requirements of
the Build track. Yet SBOMs are a good practice and may form part of a future
SLSA Vulnerabilities track. Further, SLSA Provenance can increase the
trustworthiness of an SBOM by describing how the SBOM was created.
SLSA Provenance, the wider in-toto Attestation Framework in which the
recommended format sits, and the various SBOM standards, are all rapidly
evolving spaces. There is ongoing investigation into linking between the
different formats and exploration of alignment on common models. This FAQ entry
describes our understanding of the intersection efforts today. We do not know
how things will evolve over the coming months and years, but we look forward to
the collaboration and improved software supply chain security.
Q: How to SLSA with a self-hosted runner
Some CI systems allow producers to provide their own self-hosted runners as a build
environment (e.g. GitHub Actions). While there are many valid reasons to leverage
these, classifying the SLSA build level for the resulting artifact can be confusing.
Since the SLSA Build track describes increasing levels of trustworthiness and
completeness in a package artifact’s provenance, interpretation of the
specification hinges on the platform entities involved in the provenance generation.
The SLSA build level requirements (secure key storage, isolation, etc.) should be
imposed on the transitive closure of the systems which are responsible for informing
the provenance generated.
Some common situations may include:
- The platform generates the provenance and just calls a runner for individual items.
In this situation, the provenance is only affected by the platform so there would be
no requirements imposed on the runner.
- The runner generates the provenance. In this situation, the orchestrating platform
is irrelevant and all requirements are imposed on the runner.
- The platform provides the runner with some credentials for generating the provenance
or both the platform and the runner provide information for the provenance. Trust is
shared between the platform and the runner so the requirements are imposed on both.
Additional requirements on the self-hosted runners may be added to Build levels
greater than L3 when such levels get defined.
Future directions
The initial draft version (v0.1) of SLSA had a larger scope including
protections against tampering with source code and a higher level of build
integrity (Build L4). This section collects some early thoughts on how SLSA
might evolve in future versions to re-introduce those notions and add other
additional aspects of automatable supply chain security.
Build track
Build L4
A Build L4 could include further hardening of the build platform and enabling
corroboration of the provenance, for example by providing complete knowledge of
the build inputs.
The initial draft version (v0.1) of SLSA defined a “SLSA 4” that included the
following requirements, which may or may not be part of a future Build L4:
- Pinned dependencies, which guarantee that each build runs on exactly the
same set of inputs.
- Hermetic builds, which guarantee that no extraneous dependencies are used.
- All dependencies listed in the provenance, which enables downstream verifiers
to recursively apply SLSA to dependencies.
- Reproducible builds, which enable other build platforms to corroborate the
provenance.
Terminology
Before diving into the SLSA Levels, we need to establish a core set
of terminology and models to describe what we’re protecting.
Software supply chain
TODO: Update the text to match the new diagram.
SLSA’s framework addresses every step of the software supply chain - the
sequence of steps resulting in the creation of an artifact. We represent a
supply chain as a directed acyclic graph of sources, builds, dependencies, and
packages. One artifact’s supply chain is a combination of its dependencies’
supply chains plus its own sources and builds.

| Term | Description | Example |
|------|-------------|---------|
| Artifact | An immutable blob of data; primarily refers to software, but SLSA can be used for any artifact. | A file, a git commit, a directory of files (serialized in some way), a container image, a firmware image. |
| Attestation | An authenticated statement (metadata) about a software artifact or collection of software artifacts. | A signed SLSA Provenance file. |
| Source | Artifact that was directly authored or reviewed by persons, without modification. It is the beginning of the supply chain; we do not trace the provenance back any further. | Git commit (source) hosted on GitHub (platform). |
| Build | Process that transforms a set of input artifacts into a set of output artifacts. The inputs may be sources, dependencies, or ephemeral build outputs. | .travis.yml (process) run by Travis CI (platform). |
| Package | Artifact that is “published” for use by others. In the model, it is always the output of a build process, though that build process can be a no-op. | Docker image (package) distributed on DockerHub (platform). A ZIP file containing source code is a package, not a source, because it is built from some other source, such as a git commit. |
| Dependency | Artifact that is an input to a build process but that is not a source. In the model, it is always a package. | Alpine package (package) distributed on Alpine Linux (platform). |
Roles
Throughout the specification, you will see reference to the following roles
that take part in the software supply chain. Note that in practice a role may
be filled by more than one person or an organization. Similarly, a person or
organization may act as more than one role in a particular software supply
chain.
| Role | Description | Examples |
|------|-------------|----------|
| Producer | A party who creates software and provides it to others. Producers are often also consumers. | An open source project’s maintainers. A software vendor. |
| Verifier | A party who inspects an artifact’s provenance to determine the artifact’s authenticity. | A business’s software ingestion system. A programming language ecosystem’s package registry. |
| Consumer | A party who uses software provided by a producer. The consumer may verify provenance for software they consume or delegate that responsibility to a separate verifier. | A developer who uses open source software distributions. A business that uses a point of sale system. |
| Infrastructure provider | A party who provides software or services to other roles. | A package registry’s maintainers. A build platform’s maintainers. |
Package model
Software is distributed in identifiable units called packages
according to the rules and conventions of a package ecosystem.
Examples of formal ecosystems include Python/PyPA,
Debian/Apt, and
OCI, while examples of
informal ecosystems include links to files on a website or distribution of
first-party software within a company.
Abstractly, a consumer locates software within an ecosystem by asking a
package registry to resolve a mutable package name into an
immutable package artifact. To publish a package
artifact, the software producer asks the registry to update this mapping to
resolve to the new artifact. The registry represents the entity or entities with
the power to alter what artifacts are accepted by consumers for a given package
name. For example, if consumers only accept packages signed by a particular
public key, then it is access to that public key that serves as the registry.
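The name-to-artifact resolution described above can be sketched as a mutable mapping from package names to immutable, content-addressed artifacts. The sketch below is illustrative only; the class and names are not from any real ecosystem:

```python
import hashlib

# A toy package registry: resolves a mutable package name to an
# immutable artifact identified by its content digest.
class Registry:
    def __init__(self):
        self._index = {}  # package name -> digest of the current artifact

    def publish(self, name: str, artifact: bytes) -> str:
        """Producer asks the registry to update the name -> artifact mapping."""
        digest = hashlib.sha256(artifact).hexdigest()
        self._index[name] = digest
        return digest

    def resolve(self, name: str) -> str:
        """Consumer resolves a mutable name into an immutable digest."""
        return self._index[name]

registry = Registry()
v1 = registry.publish("my-package", b"artifact contents v1")
assert registry.resolve("my-package") == v1

# Publishing again re-points the same mutable name at a new artifact;
# the old artifact itself is unchanged, only the mapping moves.
v2 = registry.publish("my-package", b"artifact contents v2")
assert registry.resolve("my-package") == v2 and v1 != v2
```

The key property is that the name is mutable while each artifact is immutable; everything the registry controls is which digest a name currently points to.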
The package name is the primary security boundary within a package ecosystem.
Different package names represent materially different pieces of
software—different owners, behaviors, security properties, and so on.
Therefore, the package name is the primary unit being protected in SLSA.
It is the primary identifier to which consumers attach expectations.
| Term | Description |
|------|-------------|
| Package | An identifiable unit of software intended for distribution, ambiguously meaning either an “artifact” or a “package name”. Only use this term when the ambiguity is acceptable or desirable. |
| Package artifact | A file or other immutable object that is intended for distribution. |
| Package ecosystem | A set of rules and conventions governing how packages are distributed, including how clients resolve a package name into one or more specific artifacts. |
| Package manager client | Client-side tooling to interact with a package ecosystem. |
| Package name | The primary identifier for a mutable collection of artifacts that all represent different versions of the same software. This is the primary identifier that consumers use to obtain the software. A package name is specific to an ecosystem + registry, has a maintainer, is more general than a specific hash or version, and has a “correct” source location. A package ecosystem may group package names into some sort of hierarchy, such as the Group ID in Maven, though SLSA does not have a special term for this. |
| Package registry | An entity responsible for mapping package names to artifacts within a packaging ecosystem. Most ecosystems support multiple registries, usually a single global registry and multiple private registries. |
| Publish [a package] | Make an artifact available for use by registering it with the package registry. In technical terms, this means associating an artifact to a package name. This does not necessarily mean making the artifact fully public; an artifact may be published for only a subset of users, such as internal testing or a closed beta. |
Ambiguous terms to avoid
- Package repository: Could mean either package registry or package name,
depending on the ecosystem. To avoid confusion, we use “repository”
exclusively to mean “source repository”, where there is no ambiguity.
- Package manager (without “client”): Could mean either package ecosystem,
package registry, or client-side tooling.
Mapping to real-world ecosystems
Most real-world ecosystems fit the package model above but use different terms.
The table below attempts to document how various ecosystems map to the SLSA
Package model. There are likely mistakes and omissions; corrections and
additions are welcome!
Notes:
- Go uses a significantly different distribution model than other ecosystems.
In Go, the package name is a source repository URL. While clients can fetch
directly from that URL—in which case there is no “package” or
“registry”—they usually fetch a zip file from a module proxy. The module
proxy acts as both a builder (by constructing the package artifact from
source) and a registry (by mapping package name to package artifact). People
trust the module proxy because builds are independently reproducible and a
checksum database guarantees that all clients receive the same artifact
for a given URL.
Security levels
SLSA is organized into a series of levels and tracks that provide increasing
supply chain security guarantees for various aspects of the supply chain.
This gives you confidence that software hasn’t been tampered with
and can be securely traced back to its source.
This section is a descriptive overview of the SLSA tracks and levels, describing
their intent. For the prescriptive requirements for each track and level, see
the individual track specifications. For a general overview of SLSA, see
About SLSA.
Levels and tracks
SLSA levels are split into tracks. Each track has its own set of levels that
measure a particular aspect of supply chain security. The purpose of tracks is
to recognize progress made in one aspect of security without blocking on an
unrelated aspect. Tracks also allow the SLSA spec to evolve: we can add more
tracks without invalidating previous levels.
Build track levels
| Track/Level | Requirements | Focus |
|-------------|--------------|-------|
| [Build L0] | (none) | (n/a) |
| [Build L1] | Provenance showing how the package was built | Mistakes, documentation |
| [Build L2] | Signed provenance, generated by a hosted build platform | Tampering after the build |
| [Build L3] | Hardened build platform | Tampering during the build |
Note: The previous version of the specification used a single unnamed track,
SLSA 1–4. For version 1.0 the Source aspects were removed to focus on the
Build track. A Source track may be added in [future versions].
For more information see the Build track specification.
Source track levels
| Track/Level | Requirements | Focus |
|-------------|--------------|-------|
| [Source L0] | (none) | (n/a) |
| [Source L1] | Version controlled | Change tracking |
| [Source L2] | Branch history | Tampering of source versioning |
| [Source L3] | Authenticatable and Auditable Provenance | Tampering within the SCS’s storage systems |
For more information see the Source track specification.
Build Environment track levels
| Track/Level | Requirements | Focus | Trust Root |
|-------------|--------------|-------|------------|
| [BuildEnv L0] | (none) | (n/a) | (n/a) |
| [BuildEnv L1] | Signed build image provenance exists | Tampering during build image distribution | Signed build image provenance |
| [BuildEnv L2] | Attested build environment instantiation | Tampering via the build platform’s control plane | The compute platform’s host interface |
| [BuildEnv L3] | Hardware-attested build environment | Tampering via the compute platform’s host interface | The compute platform’s hardware |
For more information see the Build Environment track specification.
Threats & mitigations
What follows is a comprehensive technical analysis of supply chain threats and
their corresponding mitigations in SLSA. For an introduction to the
supply chain threats that SLSA is aiming to protect against, see Supply chain threats.
The examples in this section are meant to:
- Explain the reasons for each of the SLSA requirements.
- Increase confidence that the SLSA requirements are sufficient to achieve the
desired level of integrity protection.
- Help implementers better understand what they are protecting against so that
they can better design and implement controls.
Overview

This threat model covers the software supply chain, meaning the process by
which software is produced and consumed. We describe and cluster threats based
on where in the software development pipeline those threats occur, labeled (A)
through (I). This is useful because priorities and mitigations mostly cluster
along those same lines. Keep in mind that dependencies are
highly recursive, so each dependency has its own threats
(A) through (I), and the same for their dependencies, and so on. For a more
detailed explanation of the supply chain model, see
Terminology.
Importantly, producers and consumers face aggregate risk across all of the
software they produce and consume, respectively. Many organizations produce
and/or consume thousands of software packages, both first- and third-party, and
it is not practical to rely on every individual team in the organization to do
the right thing. For this reason, SLSA prioritizes mitigations that can be
broadly adopted in an automated fashion, minimizing the chance of mistakes.
Source threats
A source integrity threat is a potential for an adversary to introduce a change
to the source code that does not reflect the intent of the software producer.
This includes the threat of an authorized individual introducing an unauthorized
change—in other words, an insider threat.
SLSA v1.0 does not address source threats, but we anticipate doing so in a
future version. In the meantime, the
threats and potential mitigations listed here show how SLSA v1.0 can fit into a
broader supply chain security program.
(A) Producer
The producer of the software intentionally produces code that harms the
consumer, or the producer otherwise uses practices that are not deserving of the
consumer’s trust.
Threats in this category likely cannot be mitigated through controls placed
during the authoring/reviewing process, in contrast with (B).
Software producer intentionally submits bad code
Threat: Software producer intentionally submits “bad” code, following all
proper processes.
Mitigation: TODO
Example: A popular extension author sells the rights to a new owner, who then
modifies the code to secretly mine cryptocurrency at the users’ expense. SLSA
does not protect against this, though if the extension were open source, regular
auditing may discourage this from happening.
(B) Authoring & reviewing
An adversary introduces a change through the official source control management
interface without any special administrator privileges.
Threats in this category can be mitigated by code review or some other
controls during the authoring/reviewing process, at least in theory. Contrast
this with (A), where such controls are likely ineffective.
(B1) Submit change without review
Directly submit without review
Threat: Submit bad code to the source repository without another person
reviewing.
Mitigation: Source repository requires two-person approval for all changes.
Example: Adversary directly pushes a change to a GitHub repo’s `main` branch.
Solution: Configure GitHub’s “branch protection” feature to require pull request
reviews on the `main` branch.
Review own change through a sock puppet account
Threat: Propose a change using one account and then approve it using another
account.
Mitigation: Source repository requires approval from two different, trusted
persons. If the proposer is trusted, only one approval is needed; otherwise two
approvals are needed. The software producer maps accounts to trusted persons.
Example: Adversary creates a pull request using a secondary account and then
approves and merges the pull request using their primary account. Solution:
Configure branch protection to require two approvals and ensure that all
repository contributors and owners map to unique persons.
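One way to enforce the “two different, trusted persons” rule is to map accounts to persons before counting approvals. The sketch below is a minimal illustration; the account-to-person mapping and function are hypothetical, not part of any platform’s API:

```python
# Hypothetical account -> person mapping maintained by the software producer.
ACCOUNT_TO_PERSON = {
    "alice-work": "alice",
    "alice-personal": "alice",  # sock puppet account of the same person
    "bob": "bob",
}

def approved_by_two_persons(author: str, approvers: list[str]) -> bool:
    """True iff the change involves at least two distinct trusted persons
    (the trusted author counts as one)."""
    persons = {ACCOUNT_TO_PERSON[a] for a in approvers if a in ACCOUNT_TO_PERSON}
    # An approval from another account belonging to the author doesn't count.
    persons.discard(ACCOUNT_TO_PERSON.get(author))
    return len(persons) >= 1

# Approval from the author's own secondary account is rejected.
assert not approved_by_two_persons("alice-work", ["alice-personal"])
# Approval from a different trusted person passes.
assert approved_by_two_persons("alice-work", ["bob"])
```

The point of the mapping step is that counting accounts is not enough; the check must count distinct trusted persons.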
Use a robot account to submit change
Threat: Exploit a robot account that has the ability to submit changes without
two-person review.
Mitigation: All changes require two-person review, even changes authored by
robots.
Example: A file within the source repository is automatically generated by a
robot, which is allowed to submit without review. Adversary compromises the
robot and submits a malicious change without review. Solution: Require human
review for these changes.
Abuse review exceptions
Threat: Exploit a review exception to submit a bad change without review.
Mitigation: All changes require two-person review without exception.
Example: Source repository requires two-person review on all changes except
for “documentation changes,” defined as only touching files ending with `.md`
or `.html`. Adversary submits a malicious executable named `evil.md` without
review using this exception, and then builds a malicious package containing this
executable. This would pass the policy because the source repository is correct,
and the source repository does require two-person review. Solution: Do not allow
such exceptions.
(B2) Evade code review requirements
Modify code after review
Threat: Modify the code after it has been reviewed but before submission.
Mitigation: Source control platform invalidates approvals whenever the
proposed change is modified.
Example: Source repository requires two-person review on all changes.
Adversary sends a “good” pull request to a peer, who approves it. Adversary then
modifies it to contain “bad” code before submitting. Solution: Configure branch
protection to dismiss stale approvals when new changes are pushed.
Note: This is not currently a SLSA requirement because the productivity hit is
considered too great to outweigh the security benefit. The cost of code review
is already too high for most projects, given current code review tooling, so
making code review even costlier would not further our goals. However, this
should be considered for future SLSA revisions once the state-of-the-art for
code review has improved and the cost can be minimized.
Submit a change that is unreviewable
Threat: Send a change that looks benign but is actually malicious, and that a
human cannot meaningfully review.
Mitigation: Code review system ensures that all reviews are informed and
meaningful.
Example: A proposed change updates a file, but the reviewer is only presented
with a diff of the cryptographic hash, not of the file contents. Thus, the
reviewer does not have enough context to provide a meaningful review. Solution:
the code review system should present the reviewer with a content diff or some
other information to make an informed decision.
Copy a reviewed change to another context
Threat: Get a change reviewed in one context and then transfer it to a
different context.
Mitigation: Approvals are context-specific.
Example: MyPackage’s source repository requires two-person review. Adversary
forks the repo, submits a change in the fork with review from a colluding
colleague (who is not trusted by MyPackage), then merges the change back into
the upstream repo. Solution: The merge should still require review, even though
the fork was reviewed.
Compromise another account
Threat: Compromise one or more trusted accounts and use those to submit and
review own changes.
Mitigation: Source control platform verifies two-factor authentication, which
increases the difficulty of compromising accounts.
Example: Trusted person uses a weak password on GitHub. Adversary guesses the
weak password, logs in, and pushes changes to a GitHub repo. Solution: Configure
the GitHub organization to require 2FA for all trusted persons. This would increase
the difficulty of using the compromised password to log in to GitHub.
Hide bad change behind good one
Threat: Request review for a series of two commits, X and Y, where X is bad
and Y is good. Reviewer thinks they are approving only the final Y state whereas
they are also implicitly approving X.
Mitigation: Only the version that is actually reviewed is the one that is
approved. Any intermediate revisions don’t count as being reviewed.
Example: Adversary sends a pull request containing malicious commit X and
benign commit Y that undoes X. In the pull request UI, reviewer only reviews and
approves “changes from all commits”, which is a delta from HEAD to Y; they don’t
see X. Adversary then builds from the malicious revision X. Solution: Policy
does not accept this because the version X is not considered reviewed.
(B3) Render code review ineffective
Collude with another trusted person
Threat: Two trusted persons collude to author and approve a bad change.
Mitigation: This threat is not currently addressed by SLSA. We use “two
trusted persons” as a proxy for “intent of the software producer”.
Trick reviewer into approving bad code
Threat: Construct a change that looks benign but is actually malicious, a.k.a.
“bugdoor.”
Mitigation: This threat is not currently addressed by SLSA.
Reviewer blindly approves changes
Threat: Reviewer approves changes without actually reviewing, a.k.a. “rubber
stamping.”
Mitigation: This threat is not currently addressed by SLSA.
(C) Source code management
An adversary introduces a change to the source control repository through an
administrative interface, or through a compromise of the underlying
infrastructure.
Project owner bypasses or disables controls
Threat: Trusted person with “admin” privileges in a repository submits “bad”
code bypassing existing controls.
Mitigation: All persons are subject to same controls, whether or not they have
administrator privileges. Disabling the controls requires two-person review (and
maybe notifies other trusted persons?)
Example 1: GitHub project owner pushes a change without review, even though
GitHub branch protection is enabled. Solution: Enable the “Include
Administrators” option for the branch protection.
Example 2: GitHub project owner disables “Include Administrators”, pushes a
change without review, then re-enables “Include Administrators”. This currently
has no solution on GitHub.
Platform admin abuses privileges
Threat: Platform administrator abuses their privileges to bypass controls or
to push a malicious version of the software.
Mitigation: The source platform must have controls in place to prevent and
detect abusive behavior from administrators (e.g. two-person approvals for
changes to the infrastructure, audit logging). A future Platform
Operations Track may
provide more specific guidance on how to secure the underlying platform.
Example 1: GitHostingService employee uses an internal tool to push changes to
the MyPackage source repo.
Example 2: GitHostingService employee uses an internal tool to push a
malicious version of the server to serve malicious versions of MyPackage sources
to a specific CI/CD client but the regular version to everyone else, in order to
hide tracks.
Example 3: GitHostingService employee uses an internal tool to push a
malicious version of the server that includes a backdoor allowing specific users
to bypass branch protections. Adversary then uses this backdoor to submit a
change to MyPackage without review.
Exploit vulnerability in SCM
Threat: Exploit a vulnerability in the implementation of the source code
management system to bypass controls.
Mitigation: This threat is not currently addressed by SLSA.
Build threats
A build integrity threat is a potential for an adversary to introduce behavior
to an artifact without changing its source code, or to build from a
source, dependency, and/or process that is not intended by the software
producer.
The SLSA Build track mitigates these threats when the consumer
verifies artifacts against expectations, confirming
that the artifact they received was built in the expected manner.
(D) External build parameters
An adversary builds from a version of the source code that does not match the
official source control repository, or changes the build parameters to inject
behavior that was not intended by the official source.
The mitigation here is to compare the provenance against expectations for the
package, which depends on SLSA Build L1 for provenance. (Threats against the
provenance itself are covered by (E) and (F).)
Build from unofficial fork of code (expectations)
Threat: Build using the expected CI/CD process but from an unofficial fork of
the code that may contain unauthorized changes.
Mitigation: Verifier requires the provenance’s source location to match an
expected value.
Example: MyPackage is supposed to be built from GitHub repo `good/my-package`.
Instead, it is built from `evilfork/my-package`. Solution: Verifier rejects
because the source location does not match.
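This verifier-side check amounts to comparing the provenance’s recorded source against per-package expectations. The sketch below is illustrative; the field name `source_uri` and the expectations store are assumptions, not the exact SLSA Provenance schema:

```python
# Hypothetical expectations store: package name -> expected source repo.
EXPECTATIONS = {
    "my-package": "https://github.com/good/my-package",
}

def verify_source_location(package: str, provenance: dict) -> bool:
    """Reject builds whose provenance points at an unexpected source repo."""
    expected = EXPECTATIONS.get(package)
    actual = provenance.get("source_uri")
    return expected is not None and actual == expected

good = {"source_uri": "https://github.com/good/my-package"}
fork = {"source_uri": "https://github.com/evilfork/my-package"}
assert verify_source_location("my-package", good)
assert not verify_source_location("my-package", fork)
```

Note that this check is only meaningful if the provenance itself is trustworthy, which is what the Build track levels establish.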
Build from unofficial branch or tag (expectations)
Threat: Build using the expected CI/CD process and source location, but
checking out an “experimental” branch or similar that may contain code not
intended for release.
Mitigation: Verifier requires that the provenance’s source branch/tag matches
an expected value, or that the source revision is reachable from an expected
branch.
Example: MyPackage’s releases are tagged from the `main` branch, which has
branch protections. Adversary builds from the unprotected `experimental` branch
containing unofficial changes. Solution: Verifier rejects because the source
revision is not reachable from `main`.
Build from unofficial build steps (expectations)
Threat: Build the package using the proper CI/CD platform but with unofficial
build steps.
Mitigation: Verifier requires that the provenance’s build configuration source
matches an expected value.
Example: MyPackage is expected to be built by Google Cloud Build using the
build steps defined in the source’s `cloudbuild.yaml` file. Adversary builds
with Google Cloud Build, but using custom build steps provided over RPC.
Solution: Verifier rejects because the build steps did not come from the
expected source.
Build from unofficial parameters (expectations)
Threat: Build using the expected CI/CD process, source location, and
branch/tag, but using a parameter that injects unofficial behavior.
Mitigation: Verifier requires that the provenance’s external parameters all
match expected values.
Example 1: MyPackage is supposed to be built from the `release.yml` workflow.
Adversary builds from the `debug.yml` workflow. Solution: Verifier rejects
because the workflow parameter does not match the expected value.
Example 2: MyPackage’s GitHub Actions Workflow uses `github.event.inputs` to
allow users to specify custom compiler flags per invocation. Adversary sets a
compiler flag that overrides a macro to inject malicious behavior into the
output binary. Solution: Verifier rejects because the `inputs` parameter was not
expected.
Build from a version of code modified after checkout (expectations)
Threat: Build from a version of the code that includes modifications after
checkout.
Mitigation: Build platform pulls directly from the source repository and
accurately records the source location in provenance.
Example: Adversary fetches from MyPackage’s source repo, makes a local commit,
then requests a build from that local commit. Builder records the fact that it
did not pull from the official source repo. Solution: Verifier rejects because
the source repo does not match the expected value.
(E) Build process
An adversary introduces an unauthorized change to a build output through
tampering of the build process; or introduces false information into the
provenance.
These threats are directly addressed by the SLSA Build track.
Forge values of the provenance (other than output digest) (Build L2+)
Threat: Generate false provenance and get the trusted control plane to sign
it.
Mitigation: At Build L2+, the trusted control plane generates all
information that goes in the provenance, except (optionally) the output artifact
hash. At Build L3+, this is hardened to prevent compromise even
by determined adversaries.
Example 1 (Build L2): Provenance is generated on the build worker, which the
adversary has control over. Adversary uses a malicious process to get the build
platform to claim that it was built from source repo `good/my-package` when it
was really built from `evil/my-package`. Solution: Builder generates and signs
the provenance in the trusted control plane; the worker reports the output
artifacts but otherwise has no influence over the provenance.
Example 2 (Build L3): Provenance is generated in the trusted control plane,
but workers can break out of the container to access the signing material.
Solution: Builder is hardened to provide strong isolation against tenant
projects.
Forge output digest of the provenance (n/a)
Threat: The tenant-controlled build process sets the output artifact digest
(`subject` in SLSA Provenance) without the trusted control plane verifying that
such an artifact was actually produced.
Mitigation: None; this is not a problem. Any build claiming to produce a given
artifact could have actually produced it by copying it verbatim from input to
output. (Reminder: Provenance is only a claim that a particular
artifact was built, not that it was published to a particular registry.)
Example: A legitimate MyPackage artifact has digest `abcdef` and is built
from source repo `good/my-package`. A malicious build from source repo
`evil/my-package` claims that it built artifact `abcdef` when it did not.
Solution: Verifier rejects because the source location does not match; the
forged digest is irrelevant.
Compromise project owner (Build L2+)
Threat: An adversary gains owner permissions for the artifact’s build project.
Mitigation: The build project owner must not have the ability to influence the
build process or provenance generation.
Example: MyPackage is built on Awesome Builder under the project “mypackage”.
Adversary is an administrator of the “mypackage” project. Awesome Builder allows
administrators to debug build machines via SSH. An adversary uses this feature
to alter a build in progress.
Compromise other build (Build L3)
Threat: Perform a malicious build that alters the behavior of a benign build
running in parallel or in a subsequent build environment.
Mitigation: Builds are isolated from one another, with no way for one to
affect the other or persist changes.
Example 1: A build platform runs all builds for project MyPackage on
the same machine as the same Linux user. An adversary starts a malicious build
that listens for another build and swaps out source files, then starts a benign
build. The benign build uses the malicious build’s source files, but its
provenance says it used benign source files. Solution: The build platform
changes architecture to isolate each build in a separate VM or similar.
Example 2: A build platform uses the same machine for subsequent
builds. An adversary first runs a build that replaces the `make` binary with a
malicious version, then subsequently runs an otherwise benign build. Solution:
The builder changes architecture to start each build with a clean machine image.
Steal cryptographic secrets (Build L3)
Threat: Use or exfiltrate the provenance signing key or some other
cryptographic secret that should only be available to the build platform.
Mitigation: Builds are isolated from the trusted build platform control
plane, and only the control plane has access to cryptographic
secrets.
Example: Provenance is signed on the build worker, which the adversary has
control over. Adversary uses a malicious process that generates false provenance
and signs it using the provenance signing key. Solution: Builder generates and
signs provenance in the trusted control plane; the worker has no access to the
key.
Poison the build cache (Build L3)
Threat: Add a malicious artifact to a build cache that is later picked up by a
benign build process.
Mitigation: Build caches must be isolated between builds to prevent
such cache poisoning attacks.
Example: Build platform uses a build cache across builds, keyed by the hash of
the source file. Adversary runs a malicious build that creates a “poisoned”
cache entry with a falsified key, meaning that the value wasn’t really produced
from that source. A subsequent build then picks up that poisoned cache entry.
Compromise build platform admin (verification)
Threat: An adversary gains admin permissions for the artifact’s build platform.
Mitigation: The build platform must have controls in place to prevent and
detect abusive behavior from administrators (e.g. two-person approvals, audit
logging).
Example: MyPackage is built on Awesome Builder. Awesome Builder allows
engineers on-call to SSH into build machines to debug production issues. An
adversary uses this access to modify a build in progress. Solution: Consumers
do not accept provenance from the build platform unless they trust sufficient
controls are in place to prevent abusing admin privileges.
(F) Artifact publication
An adversary uploads a package artifact that does not reflect the intent of the
package’s official source control repository.
This is the most direct threat because it is the easiest to pull off. If there
are no mitigations for this threat, then (D) and (E) are often indistinguishable
from this threat.
Build with untrusted CI/CD (expectations)
Threat: Build using an unofficial CI/CD pipeline that does not build in the
correct way.
Mitigation: Verifier requires provenance showing that the builder matched an
expected value.
Example: MyPackage is expected to be built on Google Cloud Build, which is
trusted up to Build L3. Adversary builds on SomeOtherBuildPlatform, which is only
trusted up to Build L2, and then exploits SomeOtherBuildPlatform to inject
malicious behavior. Solution: Verifier rejects because builder is not as
expected.
Upload package without provenance (Build L1)
Threat: Upload a package without provenance.
Mitigation: Verifier requires provenance before accepting the package.
Example: Adversary uploads a malicious version of MyPackage to the package
repository without provenance. Solution: Verifier rejects because provenance is
missing.
Tamper with artifact after CI/CD (Build L1)
Threat: Take a benign version of the package, modify it in some way, then
re-upload it using the original provenance.
Mitigation: Verifier checks that the provenance's subject matches the hash
of the package.
Example: Adversary performs a proper build, modifies the artifact, then
uploads the modified version of the package to the repository along with the
provenance. Solution: Verifier rejects because the hash of the artifact does not
match the subject found within the provenance.
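The subject check can be sketched in a few lines. The statement layout follows the in-toto Statement shape; the helper name and sample values are illustrative:

```python
import hashlib

def artifact_matches_subject(artifact_bytes: bytes, statement: dict) -> bool:
    """Return True if the artifact's SHA-256 digest appears among the
    provenance statement's subjects. Only sha256 is checked here; real
    verifiers support additional algorithms."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    for subject in statement.get("subject", []):
        if subject.get("digest", {}).get("sha256") == digest:
            return True
    return False

original = b"benign build output"
statement = {
    "subject": [{"name": "mypackage",
                 "digest": {"sha256": hashlib.sha256(original).hexdigest()}}]
}
# The original artifact matches; a tampered one does not.
assert artifact_matches_subject(original, statement)
assert not artifact_matches_subject(b"tampered output", statement)
```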
Tamper with provenance (Build L2)
Threat: Perform a build that would not meet expectations, then modify the
provenance to make the expectations checks pass.
Mitigation: Verifier only accepts provenance with a valid cryptographic
signature or equivalent proving that the provenance came from an
acceptable builder.
Example: MyPackage is expected to be built by GitHub Actions from the
good/my-package repo. Adversary builds with GitHub Actions from the
evil/my-package repo and then modifies the provenance so that the source looks
like it came from good/my-package. Solution: Verifier rejects because the
cryptographic signature is no longer valid.
(G) Distribution channel
An adversary modifies the package on the package registry using an
administrative interface or through a compromise of the infrastructure
including modification of the package in transit to the consumer.
The distribution channel threats and mitigations look very similar to the
Artifact Publication (F) threats and mitigations with the main difference
being that these threats are mitigated by having the consumer perform
verification.
The consumer’s actions may be simplified if (F) produces a VSA.
In this case the consumer may replace provenance verification with
VSA verification.
Build with untrusted CI/CD (expectations)
Threat: Replace the package with one built using an unofficial CI/CD pipeline
that does not build in the correct way.
Mitigation: Verifier requires provenance showing that the builder matched an
expected value, or a VSA for the corresponding resourceUri.
Example: MyPackage is expected to be built on Google Cloud Build, which is
trusted up to Build L3. Adversary builds on SomeOtherBuildPlatform, which is only
trusted up to Build L2, and then exploits SomeOtherBuildPlatform to inject
malicious behavior. Adversary then replaces the original package within the
repository with the malicious package. Solution: Verifier rejects because
builder is not as expected.
Issue VSA from untrusted intermediary (expectations)
Threat: Have an unofficial intermediary issue a VSA for a malicious package.
Mitigation: Verifier requires VSAs to be issued by a trusted intermediary.
Example: Verifier expects VSAs to be issued by TheRepository. Adversary
builds a malicious package and then issues a VSA of their own for the malicious
package. Solution: Verifier rejects because they only accept VSAs from
TheRepository which the adversary cannot issue since they do not have the
corresponding signing key.
Upload package without provenance or VSA (Build L1)
Threat: Replace the original package with a malicious one without provenance.
Mitigation: Verifier requires provenance or a VSA before accepting the package.
Example: Adversary replaces MyPackage with a malicious version of MyPackage
on the package repository and deletes existing provenance. Solution: Verifier
rejects because provenance is missing.
Replace package and VSA with another (expectations)
Threat: Replace a package and its VSA with a malicious package and its valid VSA.
Mitigation: Consumer ensures that the VSA matches the package they’ve requested (not just the package they received) by following the verification process.
Example: Adversary uploads a malicious package to repo/evil-package,
getting a valid VSA for repo/evil-package. Adversary then replaces
repo/my-package and its VSA with repo/evil-package and its VSA.
Solution: Verifier rejects because the VSA resourceUri field lists
repo/evil-package and not the expected repo/my-package.
Tamper with artifact after upload (Build L1)
Threat: Take a benign version of the package, modify it in some way, then
replace it while retaining the original provenance or VSA.
Mitigation: Verifier checks that the provenance or VSA's subject matches
the hash of the package.
Example: Adversary performs a proper build, modifies the artifact, then
replaces the package in the repository with the modified version while
retaining the original provenance. Solution: Verifier rejects because the hash
of the artifact does not match the subject found within the provenance.
Tamper with provenance or VSA (Build L2)
Threat: Perform a build that would not meet expectations, then modify the
provenance or VSA to make the expectations checks pass.
Mitigation: Verifier only accepts provenance or VSA with a valid cryptographic
signature or equivalent proving that the provenance came from an
acceptable builder or the VSA came from an expected verifier.
Example 1: MyPackage is expected to be built by GitHub Actions from the
good/my-package repo. Adversary builds with GitHub Actions from the
evil/my-package repo and then modifies the provenance so that the source looks
like it came from good/my-package. Solution: Verifier rejects because the
cryptographic signature is no longer valid.
Example 2: Verifier expects VSAs to be issued by TheRepository. Adversary
builds a malicious package and then modifies the original VSA's subject field
to match the digest of the malicious package. Solution: Verifier rejects
because the cryptographic signature is no longer valid.
Usage threats
A usage threat is a potential for an adversary to exploit behavior of the
consumer.
(H) Package selection
The consumer requests a package that it did not intend.
Dependency confusion
Threat: Register a package name in a public registry that shadows a name used
on the victim’s internal registry, and wait for a misconfigured victim to fetch
from the public registry instead of the internal one.
Mitigation: The software producer builds internal packages on a SLSA Level 2+
compliant build platform and defines expectations for build provenance.
Expectations must be verified on installation of the internal packages. If a
misconfigured victim attempts to install the attacker's package with an
internal name but from the public registry, then verification against
expectations will fail.
For more information see Verifying artifacts
and Defender’s Perspective: Dependency Confusion and Typosquatting Attacks.
Typosquatting
Threat: Register a package name that is similar looking to a popular package
and get users to use your malicious package instead of the benign one.
Mitigation: This threat is not currently addressed by SLSA. That said, the
requirement to make the source available can be a mild deterrent, can aid
investigation or ad-hoc analysis, and can complement source-based typosquatting
solutions.
(I) Usage
The consumer uses a package in an unsafe manner.
Improper usage
Threat: The software can be used in an insecure manner, allowing an
adversary to compromise the consumer.
Mitigation: This threat is not addressed by SLSA, but may be addressed by
efforts like Secure by Design.
Dependency threats
A dependency threat is a potential for an adversary to introduce unintended
behavior in one artifact by compromising some other artifact that the former
depends on at build time. (Runtime dependencies are excluded from the model, as
noted below.)
Unlike other threat categories, dependency threats develop recursively through
the supply chain and can only be exploited indirectly. For example, if
application A includes library B as part of its build process, then a build
or source threat to B is also a dependency threat to A. Furthermore, if
library B uses build tool C, then a source or build threat to C is also a
dependency threat to both A and B.
This version of SLSA does not explicitly address dependency threats, but we
expect that a future version will. In the meantime, you can apply SLSA
recursively to your dependencies in order to reduce the risk of dependency
threats.
Build dependency
An adversary compromises the target artifact through one of its build
dependencies. Any artifact that is present in the build environment and has the
ability to influence the output is considered a build dependency.
Include a vulnerable dependency (library, base image, bundled file, etc.)
Threat: Statically link, bundle, or otherwise include an artifact that is
compromised or has some vulnerability, causing the output artifact to have the
same vulnerability.
Example: The C++ program MyPackage statically links libDep at build time. A
contributor accidentally introduces a security vulnerability into libDep. The
next time MyPackage is built, it picks up and includes the vulnerable version of
libDep, resulting in MyPackage also having the security vulnerability.
Mitigation: TODO
Use a compromised build tool (compiler, utility, interpreter, OS package, etc.)
Threat: Use a compromised tool or other software artifact during the build
process, which alters the build process and injects unintended behavior into the
output artifact.
Mitigation: This can be partially mitigated by treating build tooling,
including OS images, as any other artifact to be verified prior to use.
The threats described in this document apply recursively to build tooling,
as do the mitigations and examples. A future
Build Environment track may
provide more comprehensive guidance on how to address more specific
aspects of this threat.
Example: MyPackage is a tarball containing an ELF executable, created by
running /usr/bin/tar during its build process. An adversary compromises the
tar OS package such that /usr/bin/tar injects a backdoor into every ELF
executable it writes. The next time MyPackage is built, the build picks up the
vulnerable tar package, which injects the backdoor into the resulting
MyPackage artifact. Solution: apply SLSA recursively to all build tools
prior to the build. The build platform verifies the disk image,
or the individual components on the disk image, against the associated
provenance or VSAs prior to running a build. Depending on where the initial
compromise took place (i.e. before/during vs. after the build of the build
tool itself), the modified /usr/bin/tar will fail this verification.
Reminder: dependencies that look like runtime dependencies
actually become build dependencies if they get loaded at build time.
Use a compromised runtime dependency during the build (for tests, dynamic linking, etc.)
Threat: During the build process, use a compromised runtime dependency (such
as during testing or dynamic linking), which alters the build process and
injects unwanted behavior into the output.
NOTE: This is technically the same case as Use a compromised build
tool. We call it out to remind the reader that
runtime dependencies can become build dependencies if they are
loaded during the build.
Example: MyPackage has a runtime dependency on package Dep, meaning that Dep
is not included in MyPackage but required to be installed on the user’s machine
at the time MyPackage is run. However, Dep is also loaded during the build
process of MyPackage as part of a test. An adversary compromises Dep such that,
when run during a build, it injects a backdoor into the output artifact. The
next time MyPackage is built, it picks up and loads Dep during the build
process. The malicious code then injects the backdoor into the new MyPackage
artifact.
Mitigation: In addition to all the mitigations for build tools, you can often
avoid runtime dependencies becoming build dependencies by isolating tests to a
separate environment that does not have write access to the output artifact.
The following threats are related to “dependencies” but are not modeled as
“dependency threats”.
Use a compromised dependency at runtime (modeled separately)
Threat: Load a compromised artifact at runtime, thereby compromising the user
or environment where the software ran.
Example: MyPackage lists package Dep as a runtime dependency. Adversary
publishes a compromised version of Dep that runs malicious code on the user’s
machine when Dep is loaded at runtime. An end user installs MyPackage, which in
turn installs the compromised version of Dep. When the user runs MyPackage, it
loads and executes the malicious code from Dep.
Mitigation: N/A - This threat is not currently addressed by SLSA. SLSA’s
threat model does not explicitly model runtime dependencies. Instead, each
runtime dependency is considered a distinct artifact with its own threats.
Availability threats
An availability threat is a potential for an adversary to deny someone from
reading a source and its associated change history, or from building a package.
SLSA v1.0 does not address availability threats, though future versions might.
(A)(B) Delete the code
Threat: Perform a build from a particular source revision and then delete that
revision or cause it to get garbage collected, preventing anyone from inspecting
the code.
Mitigation: Some system retains the revision and its version control history,
making it available for inspection indefinitely. Users cannot delete the
revision except as part of a transparent legal or privacy process.
Example: An adversary submits malicious code to the MyPackage GitHub repo,
builds from that revision, then does a force push to erase that revision from
history (or requests that GitHub delete the repo). This would make the revision
unavailable for inspection. Solution: Verifier rejects the package because it
lacks a positive attestation showing that some system, such as GitHub, ensured
retention and availability of the source code.
A dependency becomes temporarily or permanently unavailable to the build process
Threat: Unable to perform a build with the intended dependencies.
Mitigation: This threat is not currently addressed by SLSA. That said, some
solutions to support hermetic and reproducible builds may also reduce the
impact of this threat.
De-list artifact
Threat: The package registry stops serving the artifact.
Mitigation: N/A - This threat is not currently addressed by SLSA.
De-list provenance
Threat: The package registry stops serving the provenance.
Mitigation: N/A - This threat is not currently addressed by SLSA.
Verification threats
Threats that can compromise the ability to prevent or detect the supply chain
security threats above.
Tamper with recorded expectations
Threat: Modify the verifier’s recorded expectations, causing the verifier to
accept an unofficial package artifact.
Mitigation: Changes to recorded expectations require some form of
authorization, such as two-party review.
Example: The package ecosystem records its expectations for a given package
name in a configuration file that is modifiable by that package's producer.
The configuration for MyPackage expects the source repository to be
good/my-package. The adversary modifies the configuration to also accept
evil/my-package, and then builds from that repository and uploads a malicious
version of the package. Solution: Changes to the recorded expectations require
two-party review.
Forge change metadata
Threat: Forge the change metadata to alter attribution, timestamp, or
discoverability of a change.
Mitigation: Source control platform strongly authenticates actor identity,
timestamp, and parent revisions.
Example: Adversary submits a git commit with a falsified author and timestamp,
and then rewrites history with a non-fast-forward update to make it appear to
have been made long ago. Solution: Consumer detects this by seeing that such
changes are not strongly authenticated and thus not trustworthy.
Exploit cryptographic hash collisions
Threat: Exploit a cryptographic hash collision weakness to bypass one of the
other controls.
Mitigation: Require cryptographically secure hash functions for commit
checksums and provenance subjects, such as SHA-256.
Examples: Construct a benign file and a malicious file with the same SHA-1
hash. Get the benign file reviewed and then submit the malicious file.
Alternatively, get the benign file reviewed and submitted and then build from
the malicious file. Solution: Only accept cryptographic hashes with strong
collision resistance.
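Enforcing the mitigation can be as simple as filtering digests through an allowlist of collision-resistant algorithms before any comparison. A minimal sketch; the allowlist contents and function name are illustrative:

```python
# Algorithms with strong collision resistance; weak ones (sha1, md5)
# are deliberately absent. This allowlist is an illustrative choice.
STRONG_HASH_ALGORITHMS = {"sha256", "sha384", "sha512"}

def acceptable_digests(digest_set: dict) -> dict:
    """Filter a digest set (algorithm -> hex value) down to entries a
    verifier is willing to compare against."""
    return {alg: value for alg, value in digest_set.items()
            if alg in STRONG_HASH_ALGORITHMS}

mixed = {"sha1": "deadbeef", "sha256": "abc123"}
# The sha1 entry is dropped before any matching happens.
assert acceptable_digests(mixed) == {"sha256": "abc123"}
```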
Software attestations
A software attestation is an authenticated statement (metadata) about a
software artifact or collection of software artifacts.
The primary intended use case is to feed into automated policy engines, such as
in-toto and Binary Authorization.
This section provides a high-level overview of the attestation model, including
standardized terminology, data model, layers, conventions for software
attestations, and formats for different use cases.
Overview
A software attestation, not to be confused with a remote attestation in
the trusted computing world, is an authenticated statement (metadata) about a
software artifact or collection of software artifacts. Software attestations
are a generalization of raw artifact/code signing.
With raw signing, a signature is directly over the artifact (or a hash of the
artifact) and implies a single bit of metadata about the artifact, based on
the public key. The exact meaning MUST be negotiated between signer and
verifier, and a new keyset MUST be provisioned for each bit of information. For
example, a signature might denote who produced an artifact, or it might denote
fitness for some purpose, or something else entirely.
With an attestation, the metadata is explicit and the signature only denotes
who created the attestation (authenticity). A single keyset can express an
arbitrary amount of information, including things that are not possible with
raw signing. For example, an attestation might state exactly how an artifact
was produced, including the build command that was run and all of its
dependencies (as in the case of SLSA Provenance).
This subsection explains how to choose the attestation format that’s best suited
for your situation by considering factors such as intended use and who will be
consuming the attestation.
First party
Producers of first party code might consider the following questions:
- Will SLSA be used only within our organization?
- Is SLSA’s primary use case to manage insider risk?
- Are we developing entirely in a closed source environment?
If these are the main considerations, the organization can choose any format
for internal use. To make an external claim of meeting a SLSA level, however,
there needs to be a way for external users to consume and verify your provenance.
Currently, SLSA recommends using the SLSA Provenance format for SLSA
attestations since it is easy to verify using the Generic SLSA Verifier.
Open source
Producers of open source code might consider these questions:
- Is SLSA’s primary use case to convey trust in how your code was developed?
- Do you develop software with standard open source licenses?
- Will the code be consumed by others?
In these situations, we encourage you to use the SLSA Provenance format. The SLSA
Provenance format offers a path towards interoperability and cohesion across the open
source ecosystem. Users can verify any provenance statement in this format
using the Generic SLSA Verifier.
Closed source, third party
Producers of closed source code that is consumed by others might consider
the following questions:
- Is my code produced for the sole purpose of specific third party consumers?
- Is SLSA’s primary use case to create trust in our organization or to comply with
audits and legal requirements?
In these situations, you might not want to make all the details of your
provenance available externally. Consider using Verification Summary
Attestations (VSAs) to summarize provenance information in a sanitized way
that’s safe for external consumption. For more about VSAs, see the Verification
Summary Attestation section.
Model and Terminology
We define the following model to represent any software attestations, regardless
of format. Not all formats will have all fields or all layers, but to be called
a “software attestation” it MUST fit this general model.
The key words MUST, SHOULD, and MAY are to be interpreted as described in
RFC 2119.

An example of an attestation in English follows with the components of the
attestation mapped to the component names (and colors from the model diagram above):

Components:
- Artifact: Immutable blob of data described by an attestation, usually
  identified by cryptographic content hash. Examples: file content, git
  commit, container digest. MAY also include a mutable locator, such as
  a package name or URI.
- Attestation: Authenticated, machine-readable metadata about one or more
  software artifacts. An attestation MUST contain at least:
  - Envelope: Authenticates the message. At a minimum, it MUST contain:
    - Message: Content (statement) of the attestation. The message
      type SHOULD be authenticated and unambiguous to avoid confusion
      attacks.
    - Signature: Denotes the attester who created the attestation.
  - Statement: Binds the attestation to a particular set of artifacts.
    This is a separate layer to allow for predicate-agnostic processing
    and storage/lookup. MUST contain at least:
    - Subject: Identifies which artifacts the predicate applies to.
    - Predicate: Metadata about the subject. The predicate type SHOULD
      be explicit to avoid misinterpretation.
- Predicate: Arbitrary metadata in a predicate-specific schema. MAY
  contain:
  - Link: (repeated) Reference to a related artifact, such as a
    build dependency. Effectively forms a hypergraph where the
    nodes are artifacts and the hyperedges are attestations. It is
    helpful for the link to be standardized to allow predicate-agnostic
    graph processing.
- Bundle: A collection of Attestations, which are usually but not
  necessarily related.
- Storage/Lookup: Convention for where attesters place attestations and
  how verifiers find attestations for a given artifact.
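The Envelope/Statement/Predicate layering can be illustrated with a minimal in-toto-style example. The digest, key id, and signature values below are placeholders, and the payload is shown unencoded for readability (DSSE base64-encodes it in practice):

```python
import json

# Statement: binds a predicate to a set of artifacts (the subjects).
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{"name": "mypackage.tar.gz",
                 "digest": {"sha256": "abc123"}}],   # placeholder digest
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {"buildDefinition": {}, "runDetails": {}},
}

# Envelope: authenticates the serialized statement; the signature
# denotes who created the attestation.
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": json.dumps(statement),
    "signatures": [{"keyid": "builder-key", "sig": "placeholder"}],
}

# The statement survives the round trip through the envelope intact.
assert json.loads(envelope["payload"])["predicateType"] == \
    "https://slsa.dev/provenance/v1"
```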
Recommended Suite
We recommend a single suite of formats and conventions that work well together
and have desirable security properties. Our hope is to align the industry around
this particular suite because it makes everything easier. That said, we
recognize that other choices MAY be necessary in various cases.
Provenance
To trace software back to the source and define the moving parts in a complex
supply chain, provenance needs to be there from the very beginning. It’s the
verifiable information about software artifacts describing where, when and how
something was produced. For higher SLSA levels and more resilient integrity
guarantees, provenance requirements are stricter and need a deeper, more
technical understanding of the predicate.
This document defines the following predicate type within the in-toto
attestation framework:
"predicateType": "https://slsa.dev/provenance/v1"
Important: Always use the above string for predicateType rather than what is
in the URL bar. The predicateType URI will always resolve to the latest
minor version of this specification. See parsing rules for more information.
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be
interpreted as described in RFC 2119.
Purpose
Describe how an artifact or set of artifacts was produced so that:
- Consumers of the provenance can verify that the artifact was built according
to expectations.
- Others can rebuild the artifact, if desired.
This predicate is the RECOMMENDED way to satisfy the SLSA v1.0 provenance
requirements.
Model
Provenance is an attestation that a particular build platform produced a set of
software artifacts through execution of the buildDefinition.

The model is as follows:
- Each build runs as an independent process on a multi-tenant build platform.
  The builder.id identifies this platform, representing the transitive
  closure of all entities that are trusted to faithfully run the build and
  record the provenance. (Note: The same model can be used for platform-less
  or single-tenant build platforms.)
  - The build platform implementer SHOULD define a security model for the build
    platform in order to clearly identify the platform's boundaries, actors,
    and interfaces. This model SHOULD then be used to identify the transitive
    closure of the trusted build platform for the builder.id as well as the
    trusted control plane.
- The build process is defined by a parameterized template, identified by
  buildType. This encapsulates the process that ran, regardless of what
  platform ran it. Often the build type is specific to the build platform
  because most build platforms have their own unique interfaces.
- All top-level, independent inputs are captured by the parameters to the
  template. There are two types of parameters:
  - externalParameters: the external interface to the build. In SLSA,
    these values are untrusted; they MUST be included in the provenance and
    MUST be verified downstream.
  - internalParameters: set internally by the platform. In SLSA, these
    values are trusted because the platform is trusted; they are OPTIONAL
    and need not be verified downstream. They MAY be included to enable
    reproducible builds, debugging, or incident response.
- All artifacts fetched during initialization or execution of the build
  process are considered dependencies, including those referenced directly by
  parameters. The resolvedDependencies captures these dependencies, if
  known. For example, a build that takes a git repository URI as a parameter
  might record the specific git commit that the URI resolved to as a
  dependency.
- During execution, the build process might communicate with the build
  platform's control plane and/or build caches. This communication is not
  captured directly in the provenance, but is instead implied by builder.id
  and subject to SLSA Build Requirements. Such
  communication SHOULD NOT influence the definition of the build; if it does,
  it SHOULD go in resolvedDependencies instead.
- Finally, the build process outputs one or more artifacts, identified by
  subject.
For concrete examples, see index of build types.
Parsing rules
This predicate follows the in-toto attestation parsing rules. Summary:
- Consumers MUST ignore unrecognized fields unless otherwise noted.
- The predicateType URI includes the major version number and will always
  change whenever there is a backwards-incompatible change.
- Minor version changes are always backwards compatible and “monotonic.”
  Such changes do not update the predicateType.
- Unset, null, and empty field values MUST be interpreted equivalently.
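The last rule means a consumer must treat a missing field, an explicit null, and an empty value identically. A sketch of what that normalization might look like in a consumer (the helper name is illustrative):

```python
def is_unset(value) -> bool:
    """Per the parsing rules, unset, null, and empty values are
    interpreted equivalently."""
    return value in (None, "", [], {})

# Whether internalParameters is an empty object or absent entirely,
# the consumer reaches the same conclusion.
provenance_a = {"predicate": {"internalParameters": {}}}
provenance_b = {"predicate": {}}
assert is_unset(provenance_a["predicate"].get("internalParameters"))
assert is_unset(provenance_b["predicate"].get("internalParameters"))
```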
Schema
Summary
NOTE: This summary (in cue) is informative. In the event of a
disagreement with the text description, the text is authoritative.
{% include_relative schema/provenance.cue %}
Protocol buffer schema
NOTE: This summary (in protobuf) is informative. In the event of a
disagreement with the text description, the text is authoritative.
Link: provenance.proto
NOTE: This protobuf definition prioritises being a human-readable summary
of the schema for readers of the specification. A version of the protobuf
definition useful for code generation is maintained in the
in-toto attestation repository.
{% include_relative schema/provenance.proto %}
Provenance
NOTE: This subsection describes the fields within predicate. For a description
of the other top-level fields, such as subject, see Statement.
REQUIRED for SLSA Build L1: buildDefinition, runDetails

Field | Type | Description
--- | --- | ---
buildDefinition | BuildDefinition | The input to the build. The accuracy and completeness are implied by runDetails.builder.id.
runDetails | RunDetails | Details specific to this particular execution of the build.
BuildDefinition
REQUIRED for SLSA Build L1: buildType, externalParameters

Field | Type | Description
--- | --- | ---
buildType | string (TypeURI) | Identifies the template for how to perform the build and interpret the parameters and dependencies. The URI SHOULD resolve to a human-readable specification that includes: an overall description of the build type; schemas for externalParameters and internalParameters; unambiguous instructions for how to initiate the build given this BuildDefinition; and a complete example. Example: https://slsa-framework.github.io/github-actions-buildtypes/workflow/v1
externalParameters | object | The parameters that are under external control, such as those set by a user or tenant of the build platform. They MUST be complete at SLSA Build L3, meaning that there is no additional mechanism for an external party to influence the build. (At lower SLSA Build levels, the completeness MAY be best effort.) The build platform SHOULD be designed to minimize the size and complexity of externalParameters, in order to reduce fragility and ease verification. Consumers SHOULD have an expectation of what “good” looks like; the more information that they need to check, the harder that task becomes. Verifiers SHOULD reject unrecognized or unexpected fields within externalParameters.
internalParameters | object | The parameters that are under the control of the entity represented by builder.id. The primary intention of this field is for debugging, incident response, and vulnerability management. The values here MAY be necessary for reproducing the build. There is no need to verify these parameters because the build platform is already trusted, and in many cases it is not practical to do so.
resolvedDependencies | array (ResourceDescriptor) | Unordered collection of artifacts needed at build time. Completeness is best effort, at least through SLSA Build L3. For example, if the build script fetches and executes “example.com/foo.sh”, which in turn fetches “example.com/bar.tar.gz”, then both “foo.sh” and “bar.tar.gz” SHOULD be listed here.
The BuildDefinition describes all of the inputs to the build. It SHOULD contain
all the information necessary and sufficient to initialize the build and begin
execution.
The externalParameters and internalParameters are the top-level inputs to the
template, meaning inputs not derived from another input. Each is an arbitrary
JSON object, though it is RECOMMENDED to keep the structure simple with string
values to aid verification. The same field name SHOULD NOT be used for both
externalParameters and internalParameters.
The parameters SHOULD only contain the actual values passed in through the
interface to the build platform. Metadata about those parameter values,
particularly digests of artifacts referenced by those parameters, SHOULD
instead go in resolvedDependencies. The documentation for buildType SHOULD
explain how to convert from a parameter to the dependency uri. For example:
"externalParameters": {
"repository": "https://github.com/octocat/hello-world",
"ref": "refs/heads/main"
},
"resolvedDependencies": [{
"uri": "git+https://github.com/octocat/hello-world@refs/heads/main",
"digest": {"gitCommit": "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d"}
}]
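For instance, the buildType documentation above might specify that the
dependency uri is formed as git+<repository>@<ref>. A minimal sketch of that
conversion (the function name and the convention itself are illustrative,
specific to this hypothetical buildType rather than mandated by the
specification):

```python
def dependency_uri(external_parameters: dict) -> str:
    """Derive the expected resolvedDependencies uri from the external
    parameters, following the illustrative convention git+<repository>@<ref>."""
    repo = external_parameters["repository"]
    ref = external_parameters["ref"]
    return f"git+{repo}@{ref}"

params = {
    "repository": "https://github.com/octocat/hello-world",
    "ref": "refs/heads/main",
}
print(dependency_uri(params))
# → git+https://github.com/octocat/hello-world@refs/heads/main
```

A verifier can then check that the derived URI appears in
resolvedDependencies with a matching digest.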
Guidelines:
- Maximize the amount of information that is implicit from the meaning of
  buildType. In particular, any value that is boilerplate and the same for
  every build SHOULD be implicit.
- Reduce parameters by moving configuration to input artifacts whenever
  possible. For example, instead of passing in compiler flags via an external
  parameter that has to be verified separately, require the flags to live
  next to the source code or build configuration so that verifying the latter
  automatically verifies the compiler flags.
- In some cases, additional external parameters might exist that do not impact
  the behavior of the build, such as a deadline or priority. These extra
  parameters SHOULD be excluded from the provenance after careful analysis
  that they indeed pose no security impact.
- If possible, architect the build platform to use this definition as its
  sole top-level input, in order to guarantee that the information is
  sufficient to run the build.
- When build configuration is evaluated client-side before being sent to the
  server, such as transforming version-controlled YAML into ephemeral JSON,
  some solution is needed to make verification practical. Consumers need a
  way to know what configuration is expected and the usual way to do that is
  to map it back to version control, but that is not possible if the server
  cannot verify the configuration’s origins. Possible solutions:
  - (RECOMMENDED) Rearchitect the build platform to read configuration
    directly from version control, recording the server-verified URI in
    externalParameters and the digest in resolvedDependencies.
  - Record the digest in the provenance and use a separate provenance
    attestation to link that digest back to version control. In this
    solution, the client-side evaluation is considered a separate “build”
    that SHOULD be independently secured using SLSA, though securing it can
    be difficult since it usually runs on an untrusted workstation.
- The purpose of resolvedDependencies is to facilitate recursive analysis of
  the software supply chain. Where practical, it is valuable to record the
  URI and digest of artifacts that, if compromised, could impact the build.
  At SLSA Build L3, completeness is considered “best effort”.
RunDetails
REQUIRED for SLSA Build L1: builder
Field | Type | Description
|
---|
builder
| Builder |
Identifies the build platform that executed the invocation, which is trusted to
have correctly performed the operation and populated this provenance.
|
metadata
| BuildMetadata |
Metadata about this particular execution of the build.
|
byproducts
| array (ResourceDescriptor) |
Additional artifacts generated during the build that are not considered
the “output” of the build but that might be needed during debugging or
incident response. For example, this might reference logs generated during
the build and/or a digest of the fully evaluated build configuration.
In most cases, this SHOULD NOT contain all intermediate files generated during
the build. Instead, this SHOULD only contain files that are likely to be useful
later and that cannot be easily reproduced.
|
Builder
REQUIRED for SLSA Build L1: id
Field | Type | Description
|
---|
id
| string (TypeURI) |
URI indicating the transitive closure of the trusted build platform. This is
intended to be the sole determiner of the SLSA Build level.
If a build platform has multiple modes of operations that have differing
security attributes or SLSA Build levels, each mode MUST have a different
builder.id and SHOULD have a different signer identity. This is to minimize
the risk that a less secure mode compromises a more secure one.
The builder.id URI SHOULD resolve to documentation explaining:
- The scope of what this ID represents.
- The claimed SLSA Build level.
- The accuracy and completeness guarantees of the fields in the provenance.
- Any fields that are generated by the tenant-controlled build process and not
verified by the trusted control plane, except for the subject.
- The interpretation of any extension fields.
|
builderDependencies
| array (ResourceDescriptor) |
Dependencies used by the orchestrator that are not run within the workload
and that do not affect the build, but might affect the provenance generation
or security guarantees.
|
version
| map (string→string) |
Map of names of components of the build platform to their version.
|
The build platform, or builder for short, represents the transitive
closure of all the entities that are, by necessity, trusted to faithfully run
the build and record the provenance. This includes not only the software but the
hardware and people involved in running the service. For example, a particular
instance of Tekton could be a build platform, while
Tekton itself is not. For more info, see Build
model.
The id MUST reflect the trust base that consumers care about. How detailed to
be is a judgment call. For example, GitHub Actions supports both GitHub-hosted
runners and self-hosted runners. The GitHub-hosted runner might be a single
identity because it’s all GitHub from the consumer’s perspective. Meanwhile,
each self-hosted runner might have its own identity because not all runners are
trusted by all consumers.
Consumers MUST accept only specific signer-builder pairs. For example, “GitHub”
can sign provenance for the “GitHub Actions” builder, and “Google” can sign
provenance for the “Google Cloud Build” builder, but “GitHub” cannot sign for
the “Google Cloud Build” builder.
Design rationale: The builder is distinct from the signer in order to support
the case where one signer generates attestations for more than one builder, as
in the GitHub Actions example above. The field is REQUIRED, even if it is
implicit from the signer, to aid readability and debugging. It is an object to
allow additional fields in the future, in case one URI is not sufficient.
BuildMetadata
REQUIRED: (none)
Field | Type | Description
|
---|
invocationId
| string |
Identifies this particular build invocation, which can be useful for finding
associated logs or other ad-hoc analysis. The exact meaning and format is
defined by builder.id; by default it is treated as opaque and case-sensitive.
The value SHOULD be globally unique.
|
startedOn
| string (Timestamp) |
The timestamp of when the build started.
|
finishedOn
| string (Timestamp) |
The timestamp of when the build completed.
|
Extension fields
Implementations MAY add extension fields to any JSON object to describe
information that is not captured in a standard field. Guidelines:
- Extension fields SHOULD use names of the form <vendor>_<fieldname>, e.g.
examplebuilder_isCodeReviewed. This practice avoids field name collisions
by namespacing each vendor. Non-extension field names never contain an
underscore.
- Extension fields MUST NOT alter the meaning of any other field. In other
words, an attestation with an absent extension field MUST be interpreted
identically to an attestation with an unrecognized (and thus ignored)
extension field.
- Extension fields SHOULD follow the monotonic principle,
meaning that deleting or ignoring the extension SHOULD NOT turn a DENY
decision into an ALLOW.
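These naming rules are mechanically checkable: standard field names never
contain an underscore, so an underscore unambiguously marks a vendor
extension. A small sketch, with helper names invented for illustration (not
defined by the specification):

```python
def is_extension_field(name: str) -> bool:
    # Non-extension (standard) field names never contain an underscore,
    # so an underscore marks a vendor extension of the form
    # <vendor>_<fieldname>.
    return "_" in name

def split_extension(name: str) -> tuple[str, str]:
    # "examplebuilder_isCodeReviewed" -> ("examplebuilder", "isCodeReviewed")
    vendor, _, field = name.partition("_")
    return vendor, field

print(is_extension_field("examplebuilder_isCodeReviewed"))  # → True
print(is_extension_field("externalParameters"))             # → False
```

A consumer following the monotonic principle would simply ignore any
extension field whose vendor prefix it does not recognize.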
Verification
Please see Verifying Artifacts for a detailed discussion of
provenance verification.
Index of build types
The following is a partial index of build type definitions. Each contains a
complete example predicate.
To add an entry here, please send a pull request on GitHub.
Migrating from 0.2
To migrate from version 0.2 (old), use the following pseudocode. The meaning
of each field is unchanged unless otherwise noted.
{
"buildDefinition": {
// The `buildType` MUST be updated for v1.0 to describe how to
// interpret `resolvedDependencies`.
"buildType": /* updated version of */ old.buildType,
"externalParameters":
old.invocation.parameters + {
// It is RECOMMENDED to rename "entryPoint" to something more
// descriptive.
"entryPoint": old.invocation.configSource.entryPoint,
// It is OPTIONAL to rename "source" to something more descriptive,
// especially if "source" is ambiguous or confusing.
"source": old.invocation.configSource.uri,
},
"internalParameters": old.invocation.environment,
"resolvedDependencies":
old.materials + [
{
"uri": old.invocation.configSource.uri,
"digest": old.invocation.configSource.digest,
}
]
},
"runDetails": {
"builder": {
"id": old.builder.id,
"builderDependencies": null, // not in v0.2
"version": null, // not in v0.2
},
"metadata": {
"invocationId": old.metadata.buildInvocationId,
"startedOn": old.metadata.buildStartedOn,
"finishedOn": old.metadata.buildFinishedOn,
},
"byproducts": null, // not in v0.2
},
}
The following fields from v0.2 are no longer present in v1.0:
- entryPoint: Use externalParameters[<name>] instead.
- buildConfig: No longer inlined into the provenance. Instead, either:
  - If the configuration is a top-level input, record its digest in
    externalParameters["config"].
  - Else if there is a known use case for knowing the exact resolved build
    configuration, record its digest in byproducts. An example use case might
    be someone who wishes to parse the configuration to look for bad
    patterns, such as curl | bash.
  - Else omit it.
- metadata.completeness: Now implicit from builder.id.
- metadata.reproducible: Now implicit from builder.id.
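The pseudocode above can be made executable. The following Python sketch
performs the same mapping; it assumes `old` is a parsed v0.2 predicate and
lets absent fields become None or empty (the buildType still needs a manual
v1.0 update, which no script can do automatically):

```python
def migrate_v02_to_v1(old: dict) -> dict:
    """Executable sketch of the v0.2 -> v1.0 field mapping above."""
    invocation = old.get("invocation", {})
    config_source = invocation.get("configSource", {})
    return {
        "buildDefinition": {
            "buildType": old.get("buildType"),  # needs a manual v1.0 update
            "externalParameters": {
                **invocation.get("parameters", {}),
                "entryPoint": config_source.get("entryPoint"),
                "source": config_source.get("uri"),
            },
            "internalParameters": invocation.get("environment"),
            "resolvedDependencies": old.get("materials", []) + [{
                "uri": config_source.get("uri"),
                "digest": config_source.get("digest"),
            }],
        },
        "runDetails": {
            "builder": {"id": old.get("builder", {}).get("id")},
            "metadata": {
                "invocationId": old.get("metadata", {}).get("buildInvocationId"),
                "startedOn": old.get("metadata", {}).get("buildStartedOn"),
                "finishedOn": old.get("metadata", {}).get("buildFinishedOn"),
            },
        },
    }
```

As in the pseudocode, builderDependencies, version, and byproducts have no
v0.2 equivalent and are simply omitted here.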
Change history
v1.0
Major refactor to reduce misinterpretation, including a minor change in model.
- Significantly expanded all documentation.
- Altered the model slightly to better align with real-world build platforms,
align with reproducible builds, and make verification easier.
- Grouped fields into buildDefinition vs runDetails.
- Renamed:
  - parameters -> externalParameters (slight change in semantics)
  - environment -> internalParameters (slight change in semantics)
  - materials -> resolvedDependencies (slight change in semantics)
  - buildInvocationId -> invocationId
  - buildStartedOn -> startedOn
  - buildFinishedOn -> finishedOn
- Removed:
  - configSource: No longer special-cased. Now represented as
    externalParameters + resolvedDependencies.
  - buildConfig: No longer inlined into the provenance. Can be replaced with
    a reference in externalParameters or byproducts, depending on the
    semantics, or omitted if not needed.
  - completeness and reproducible: Now implied by builder.id.
- Added:
  - ResourceDescriptor: annotations, content, downloadLocation, mediaType,
    name
  - Builder: builderDependencies and version
  - byproducts
- Changed naming convention for extension fields.
Differences from RC1 and RC2:
- Renamed systemParameters (RC1 + RC2) -> internalParameters (final).
- Changed naming convention for extension fields (in RC2).
- Renamed localName (RC1) -> name (RC2).
- Added annotations and content (in RC2).
v0.2
Refactored to aid clarity and added buildConfig. The model is unchanged.
- Replaced definedInMaterial and entryPoint with configSource.
- Renamed recipe to invocation.
- Moved invocation.type to top-level buildType.
- Renamed arguments to parameters.
- Added buildConfig, which can be used as an alternative to configSource to
  validate the configuration.
rename: slsa.dev/provenance
Renamed to “slsa.dev/provenance”.
v0.1.1
- Added metadata.buildInvocationId.
v0.1
Initial version, named “in-toto.io/Provenance”.
Verification Summary Attestation (VSA)
Verification summary attestations communicate that an artifact has been verified
at a specific SLSA level and details about that verification.
This document defines the following predicate type within the in-toto
attestation framework:
"predicateType": "https://slsa.dev/verification_summary/v1"
Important: Always use the above string for predicateType rather than what is
in the URL bar. The predicateType URI will always resolve to the latest minor
version of this specification. See parsing rules for more information.
Purpose
Describe what SLSA level an artifact or set of artifacts was verified at
and other details about the verification process including what SLSA level
the dependencies were verified at.
This allows software consumers to make a decision about the validity of an
artifact without needing to have access to all of the attestations about the
artifact or all of its transitive dependencies. They can use it to delegate
complex policy decisions to some trusted party and then simply trust that
party’s decision regarding the artifact.
It also allows software producers to keep the details of their build pipeline
confidential while still communicating that some verification has taken place.
This might be necessary for legal reasons (keeping a software supplier
confidential) or for security reasons (not revealing that an embargoed patch has
been included).
Model
A Verification Summary Attestation (VSA) is an attestation that some entity
(verifier) verified one or more software artifacts (the subject of an in-toto
attestation Statement) by evaluating the artifact and a bundle of
attestations against some policy. Users who trust the verifier may assume
that the artifacts met the indicated SLSA level without themselves needing to
evaluate the artifact or to have access to the attestations the verifier used
to make its determination.
The VSA also allows consumers to determine the verified levels of all of an
artifact’s transitive dependencies. The verifier does this by either a)
verifying the provenance of each non-source dependency listed in the
resolvedDependencies of the artifact being verified (recursively) or b)
matching the non-source dependency listed in resolvedDependencies
(subject.digest == resolvedDependencies.digest and, ideally, vsa.resourceUri
== resolvedDependencies.uri) to a VSA for that dependency and using
vsa.verifiedLevels and vsa.dependencyLevels. Policy verifiers wishing to
establish minimum requirements on dependencies’ SLSA levels may use
vsa.dependencyLevels to do so.
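Option (b) above can be sketched in a few lines. The function name is
illustrative; the field layout follows the VSA schema in the next subsection,
with `dep` being one resolvedDependencies entry (a ResourceDescriptor) from
the artifact’s provenance:

```python
def vsa_matches_dependency(vsa: dict, dep: dict) -> bool:
    """Match a dependency's VSA to a resolvedDependencies entry."""
    # Required match: subject.digest == resolvedDependencies.digest.
    digest_ok = any(s.get("digest") == dep.get("digest")
                    for s in vsa.get("subject", []))
    # Ideal (secondary) match: vsa.resourceUri == resolvedDependencies.uri.
    uri_ok = vsa.get("predicate", {}).get("resourceUri") == dep.get("uri")
    return digest_ok and uri_ok
```

When the match succeeds, the dependency’s vsa.verifiedLevels and
vsa.dependencyLevels can be folded into the consumer’s own view of the
transitive dependency levels.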
Schema
// Standard attestation fields:
"_type": "https://in-toto.io/Statement/v1",
"subject": [{
"name": <NAME>,
"digest": { <digest-in-request> }
}],
// Predicate
"predicateType": "https://slsa.dev/verification_summary/v1",
"predicate": {
"verifier": {
"id": "<URI>",
"version": {
"<COMPONENT>": "<VERSION>",
...
}
},
"timeVerified": <TIMESTAMP>,
"resourceUri": <artifact-URI-in-request>,
"policy": {
"uri": "<URI>",
"digest": { <digest-of-policy-data> }
},
"inputAttestations": [
{
"uri": "<URI>",
"digest": { <digest-of-attestation-data> }
},
...
],
"verificationResult": "<PASSED|FAILED>",
"verifiedLevels": ["<SlsaResult>"],
"dependencyLevels": {
"<SlsaResult>": <Int>,
"<SlsaResult>": <Int>,
...
},
"slsaVersion": "<MAJOR>.<MINOR>",
}
Parsing rules
This predicate follows the in-toto attestation parsing rules. Summary:
- Consumers MUST ignore unrecognized fields.
- The predicateType URI includes the major version number and will always
  change whenever there is a backwards incompatible change.
- Minor version changes are always backwards compatible and “monotonic.” Such
  changes do not update the predicateType.
- Producers MAY add extension fields using field names that are URIs.
Fields
NOTE: This subsection describes the fields within predicate. For a
description of the other top-level fields, such as subject, see Statement.
verifier
object, required
Identifies the entity that performed the verification.
The identity MUST reflect the trust base that consumers care about. How
detailed to be is a judgment call.
Consumers MUST accept only specific (signer, verifier) pairs. For example,
“GitHub” can sign VSAs for the “GitHub Actions” verifier, and “Google” can
sign VSAs for the “Google Cloud Deploy” verifier, but “GitHub” cannot sign
for the “Google Cloud Deploy” verifier.
The field is required, even if it is implicit from the signer, to aid readability and
debugging. It is an object to allow additional fields in the future, in case one
URI is not sufficient.
verifier.id
string (TypeURI), required
URI indicating the verifier’s identity.
verifier.version
map (string->string), optional
Map of names of components of the verification platform to their version.
timeVerified
string (Timestamp), optional
Timestamp indicating what time the verification occurred.
resourceUri
string (ResourceURI), required
URI that identifies the resource associated with the artifact being verified.
The resourceUri SHOULD be set to the URI from which the producer expects the
consumer to fetch the artifact for verification. This enables the consumer to
easily determine the expected value when verifying. If the resourceUri is set
to some other value, the producer MUST communicate the expected value, or how
to determine the expected value, to consumers through an out-of-band channel.
policy
object (ResourceDescriptor), required
Describes the policy that the subject was verified against.
The entry MUST contain a uri identifying which policy was applied and SHOULD
contain a digest to indicate the exact version of that policy.
inputAttestations
array (ResourceDescriptor), optional
The collection of attestations that were used to perform verification.
Conceptually similar to the resolvedDependencies field in SLSA Provenance.
This field MAY be absent if the verifier does not support this feature.
If non-empty, this field MUST contain information on all the attestations
used to perform verification.
Each entry MUST contain a digest of the attestation and SHOULD contain a uri
that can be used to fetch the attestation.
verificationResult
string, required
Either “PASSED” or “FAILED” to indicate if the artifact passed or failed the policy verification.
verifiedLevels
array (SlsaResult), required
Indicates the highest level of each track verified for the artifact (and not
its dependencies), or “FAILED” if policy verification failed.
Users MUST NOT include more than one level per SLSA track. Note that each
SLSA level implies all levels below it (e.g. SLSA_BUILD_LEVEL_3 implies
SLSA_BUILD_LEVEL_2 and SLSA_BUILD_LEVEL_1), so there is no need to include
more than one level per track.
dependencyLevels
object, optional
A count of the dependencies at each SLSA level.
Map from SlsaResult to the number of the artifact’s transitive dependencies
that were verified at the indicated level. Absence of a given level of
SlsaResult MUST be interpreted as reporting 0 dependencies at that level.
A set but empty dependencyLevels object means that the artifact has no
dependencies at all, while an unset or null dependencyLevels means that the
verifier makes no claims about the artifact’s dependencies.
Users MUST count each dependency only once per SLSA track, at the highest
level verified. For example, if a dependency meets SLSA_BUILD_LEVEL_2, you
include it with the count for SLSA_BUILD_LEVEL_2 but not the count for
SLSA_BUILD_LEVEL_1.
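The counting rule can be illustrated with a short sketch. This covers the
Build track only, and the helper names are invented; the input is the list of
levels each dependency was verified at:

```python
from collections import Counter

# Rank of Build-track levels, lowest to highest.
BUILD_LEVELS = ["SLSA_BUILD_LEVEL_0", "SLSA_BUILD_LEVEL_1",
                "SLSA_BUILD_LEVEL_2", "SLSA_BUILD_LEVEL_3"]

def dependency_levels(deps: list[list[str]]) -> dict[str, int]:
    """Build the dependencyLevels map: each dependency is counted once,
    at the highest Build-track level it was verified at."""
    highest_per_dep = [max(levels, key=BUILD_LEVELS.index) for levels in deps]
    return dict(Counter(highest_per_dep))

# A dependency verified at both L1 and L2 counts only toward L2:
print(dependency_levels([["SLSA_BUILD_LEVEL_1", "SLSA_BUILD_LEVEL_2"],
                         ["SLSA_BUILD_LEVEL_1"]]))
# → {'SLSA_BUILD_LEVEL_2': 1, 'SLSA_BUILD_LEVEL_1': 1}
```

Absent levels are simply omitted from the map, matching the rule that absence
MUST be interpreted as a count of 0.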
slsaVersion
string, optional
Indicates the version of the SLSA specification that the verifier used, in the
form <MAJOR>.<MINOR>. Example: 1.0. If unset, the default is an unspecified
minor version of 1.x.
Example
WARNING: This is just for demonstration purposes.
"_type": "https://in-toto.io/Statement/v1",
"subject": [{
"name": "out/example-1.2.3.tar.gz",
"digest": {"sha256": "5678..."}
}],
// Predicate
"predicateType": "https://slsa.dev/verification_summary/v1",
"predicate": {
"verifier": {
"id": "https://example.com/publication_verifier",
"version": {
"slsa-verifier-linux-amd64": "v2.3.0",
"slsa-framework/slsa-verifier/actions/installer": "v2.3.0"
}
},
"timeVerified": "1985-04-12T23:20:50.52Z",
"resourceUri": "https://example.com/example-1.2.3.tar.gz",
"policy": {
"uri": "https://example.com/example_tarball.policy",
"digest": {"sha256": "1234..."}
},
"inputAttestations": [
{
"uri": "https://example.com/provenances/example-1.2.3.tar.gz.intoto.jsonl",
"digest": {"sha256": "abcd..."}
}
],
"verificationResult": "PASSED",
"verifiedLevels": ["SLSA_BUILD_LEVEL_3"],
"dependencyLevels": {
"SLSA_BUILD_LEVEL_3": 5,
"SLSA_BUILD_LEVEL_2": 7,
"SLSA_BUILD_LEVEL_1": 1
},
"slsaVersion": "1.0"
}
How to verify
VSA consumers use VSAs to accomplish goals based on delegated trust. We call the
process of establishing a VSA’s authenticity and determining whether it meets
the consumer’s goals ‘verification’. Goals differ, as do levels of confidence
in VSA producers, so the verification procedure changes to suit its context.
However, there are certain steps that most verification procedures have in
common.
Verification MUST include the following steps:
1. Verify the signature on the VSA envelope using the preconfigured roots of
   trust. This step ensures that the VSA was produced by a trusted producer
   and that it hasn’t been tampered with.
2. Verify the statement’s subject matches the digest of the artifact in
   question. This step ensures that the VSA pertains to the intended
   artifact.
3. Verify that the predicateType is
   https://slsa.dev/verification_summary/v1. This step ensures that the
   in-toto predicate is using this version of the VSA format.
4. Verify that the verifier matches the public key (or equivalent) used to
   verify the signature in step 1. This step identifies the VSA producer in
   cases where their identity is not implicitly revealed in step 1.
5. Verify that the value for resourceUri in the VSA matches the expected
   value. This step ensures that the consumer is using the VSA for the
   producer’s intended purpose.
6. Verify that the value for verificationResult is PASSED. This step ensures
   the artifact is suitable for the consumer’s purposes.
7. Verify that verifiedLevels contains the expected value. This step ensures
   that the artifact is suitable for the consumer’s purposes.
Verification MAY additionally contain the following step:
- (Optional) Verify additional fields required to determine whether the VSA
  meets your goal.
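The checks that operate on the decoded statement itself can be sketched as a
single predicate. Envelope signature verification against the preconfigured
roots of trust and the signer/verifier match happen at the envelope layer and
are omitted here; the function name and the shape of the `expected` parameter
are illustrative:

```python
VSA_PREDICATE_TYPE = "https://slsa.dev/verification_summary/v1"

def verify_vsa(statement: dict, artifact_digest: dict, expected: dict) -> bool:
    """Sketch of the statement-level REQUIRED checks on a decoded VSA."""
    pred = statement.get("predicate", {})
    return all([
        # The VSA pertains to the intended artifact.
        any(s.get("digest") == artifact_digest
            for s in statement.get("subject", [])),
        # This version of the VSA format.
        statement.get("predicateType") == VSA_PREDICATE_TYPE,
        # The VSA is being used for the producer's intended purpose.
        pred.get("resourceUri") == expected["resourceUri"],
        # Policy verification passed.
        pred.get("verificationResult") == "PASSED",
        # The required level was verified.
        expected["verifiedLevel"] in pred.get("verifiedLevels", []),
    ])
```

A consumer with a stricter or looser goal would adjust only the final check,
e.g. accepting any of several prenegotiated values in verifiedLevels.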
Verification mitigates different threats depending on the VSA’s contents and
the verification procedure.
IMPORTANT: A VSA does not protect against compromise of the verifier, such as by
a malicious insider. Instead, VSA consumers SHOULD carefully consider which
verifiers they add to their roots of trust.
Examples
- Suppose consumer C wants to delegate to verifier V the decision for whether
  to accept artifact A as resource R. Consumer C verifies that:
  - The signature on the VSA envelope verifies using V’s public signing key
    from their preconfigured root of trust.
  - subject is A.
  - predicateType is https://slsa.dev/verification_summary/v1.
  - verifier.id is V.
  - resourceUri is R.
  - verificationResult is PASSED.
  - verifiedLevels contains SLSA_BUILD_LEVEL_UNEVALUATED.

  Note: This example is analogous to traditional code signing. The expected
  value for verifiedLevels is arbitrary but prenegotiated by the producer and
  the consumer. The consumer does not need to check additional fields, as C
  fully delegates the decision to V.
- Suppose consumer C wants to enforce the rule “Artifact A at resource R must
  have a passing VSA from verifier V showing it meets SLSA Build Level 2+.”
  Consumer C verifies that:
  - The signature on the VSA envelope verifies using V’s public signing key
    from their preconfigured root of trust.
  - subject is A.
  - predicateType is https://slsa.dev/verification_summary/v1.
  - verifier.id is V.
  - resourceUri is R.
  - verificationResult is PASSED.
  - verifiedLevels contains SLSA_BUILD_LEVEL_2 or SLSA_BUILD_LEVEL_3.

  Note: In this example, verifying the VSA mitigates the same threats as
  verifying the artifact’s SLSA provenance. See Verifying artifacts for
  details about which threats are addressed by verifying each SLSA level.
SlsaResult (String)
The result of evaluating an artifact (or set of artifacts) against SLSA.
SHOULD be one of these values:
- SLSA_BUILD_LEVEL_UNEVALUATED
- SLSA_BUILD_LEVEL_0
- SLSA_BUILD_LEVEL_1
- SLSA_BUILD_LEVEL_2
- SLSA_BUILD_LEVEL_3
- FAILED (indicates policy evaluation failed)
Note that each SLSA level implies the levels below it in the same track. For
example, SLSA_BUILD_LEVEL_3 means (SLSA_BUILD_LEVEL_1 + SLSA_BUILD_LEVEL_2 +
SLSA_BUILD_LEVEL_3).
Users MAY use custom values here but MUST NOT use custom values starting with
SLSA_.
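This constraint can be checked with a one-line predicate; the helper name is
illustrative:

```python
# The SlsaResult values defined by this specification.
DEFINED_RESULTS = {
    "SLSA_BUILD_LEVEL_UNEVALUATED", "SLSA_BUILD_LEVEL_0",
    "SLSA_BUILD_LEVEL_1", "SLSA_BUILD_LEVEL_2", "SLSA_BUILD_LEVEL_3",
    "FAILED",
}

def is_allowed_slsa_result(value: str) -> bool:
    # Custom values are permitted, but the SLSA_ prefix is reserved for
    # values defined by the specification.
    return value in DEFINED_RESULTS or not value.startswith("SLSA_")

print(is_allowed_slsa_result("SLSA_BUILD_LEVEL_3"))  # → True
print(is_allowed_slsa_result("MY_VENDOR_CHECK"))     # → True
print(is_allowed_slsa_result("SLSA_CUSTOM_LEVEL"))   # → False
```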
Change history
- 1.1:
  - Changed the policy object to recommend that the digest field of the
    ResourceDescriptor is set.
  - Added optional verifier.version field to record verification tools.
  - Added Verification subsection with examples.
  - Made timeVerified optional.
- 1.0:
  - Replaced materials with resolvedDependencies.
  - Relaxed SlsaResult to allow other values.
  - Converted to lowerCamelCase for consistency with SLSA Provenance.
  - Added slsaVersion field.
- 0.2:
  - Added resource_uri field.
  - Added optional input_attestations field.
- 0.1: Initial version.