Cyber Resilience Act (CRA) – overview

The Cyber Resilience Act (CRA) was adopted by the European Council on October 10, 2024. It was then published in the Official Journal of the EU on November 20, 2024, and enters into force today, December 10, 2024. Most of the law will start applying in 3 years, on December 11, 2027.

However, the obligation for manufacturers to report to ENISA any actively exploited vulnerability or any security incident impacting the security of their product will apply from September 11, 2026.
The other parts of the law, applicable earlier, from June 11, 2026, concern the member states: they specify the details of setting up the administrative entities that will assess conformity with the CRA.

At Bootlin, we have been paying close attention to this topic for several reasons. First, the CRA will affect a large number of our clients, as almost every embedded device sold in the EU will need to comply with it. Second, the CRA also affects us directly, for instance as the maintainer of Snagboot.

This post is therefore the first in a series that will present our understanding of the CRA and clearly lay out what one needs to keep in mind in order to achieve compliance on time.

Am I directly impacted by the CRA?

In short, if you are one of the entities in the chain through which electronic hardware with some type of connectivity, or software, is designed, manufactured, marketed, rented, sold, or even provided for free in the context of a commercial activity in the European Union, then yes, you are very likely directly impacted by the CRA in some capacity. We’ll see how in this series of posts.

Page 9 of the initial proposal reads:

The proposed Regulation will apply to all products with digital elements whose intended and reasonably foreseeable use includes a direct or indirect logical or physical data connection to a device or network.

The CRA is concerned with:

  • products with digital elements, so a typewriter wouldn’t be impacted
  • that have some type of connectivity (network, Bluetooth, but also USB or other cables), so a “dumb” electronic watch shouldn’t be impacted
  • being placed onto the internal EU market: if you build your own keyboard but don’t sell it, or sell it only outside of the EU, you need not worry about the CRA.

Large classes of devices are excluded from the CRA, because they are covered by other regulations:

  • Medical devices for human use are covered in 2017/745 and 2017/746
  • Motor vehicles and their trailers/systems need to comply with 2019/2144
  • Civil aviation systems are the object of 2018/1139
  • National security, military and classified information systems are exempt from the CRA as long as they exclusively target these applications

Although the term “product” evokes hardware, the text of the CRA makes it very clear that it is intended to apply to both hardware and software.

As outlined in the regulation, it does not apply to cloud services (SaaS) in general. However, it does apply to such services when they are integral to the functioning of a product:

It does not regulate services, such as Software-as-a-Service (SaaS), except for remote data processing solutions relating to a product with digital elements understood as any data processing at a distance for which the software is designed and developed by the manufacturer of the product concerned or under the responsibility of that manufacturer, and the absence of which would prevent such a product with digital elements from performing one of its functions.

An exemption is explicitly provided for presentation and use at “trade fairs, exhibitions and demonstrations or similar events”, as well as for unfinished software, as long as it is:

  • only made available for the required testing, for a limited time
  • clearly marked as such and as non-compliant with the CRA

Stricter obligations apply to the following products:

  • important products as defined in Annex III
    Class I of this category applies to a rather large list, including routers, switches, VPNs, SIEMs, antivirus software, smart home assistants, door locks, security cameras, OSes, bootloaders, password managers, microprocessors/microcontrollers/ASICs/FPGAs with security-related functionalities, network interfaces, toys that record voice or video or have location tracking, and wearable devices with health monitoring or used by children
    Class II includes hypervisors, container runtime systems, firewalls, IPS/IDS, and tamper-resistant microprocessors and microcontrollers.
  • critical products as defined in Annex IV:
    Hardware Devices with Security Boxes, a technical domain defined in the EUCC which refers to systems where “security relies on a physical envelope […] that is designed to resist direct attacks, e.g. payment terminals, tachograph vehicle units, smart meters, access control terminals and Hardware Security Modules”
    Smart meter gateways
    Smartcards and devices with secure elements
  • high-risk AI systems as defined in Article 6 of the AI Regulation

These products will need to go through an external conformity assessment, depending on the exact category in which they fit.

What do I need to do and when?

The CRA applies to products as soon as they are placed on the EU market, meaning that compliance with the CRA should be part of the launch plan. Several provisions of the law apply to the design phase, and writing the documents necessary for compliance will be easier to do as the design advances than all at once at the end. The law is intended to influence the technical design decisions made when implementing the product.

Ideally, design decisions and tests will be documented throughout design and implementation, then compiled into the technical documentation that must be provided when the product launches.

The content and timeline of tasks and requirements depend on which of the roles detailed below, and in the CRA’s Article 3, one fulfills. The very first step in knowing what one needs to do to comply with the CRA is therefore to determine which role one does or will fulfill for every product.

  • Manufacturer: either the actual developer/manufacturer of the product, or the person/entity marketing it
  • Authorized representative: person or entity within the EU who received a written mandate to act on behalf of the manufacturer
  • Importer: if the manufacturer is outside of the EU, that is the entity placing the product on the market
  • Distributor: entity in the supply chain, other than the manufacturer or importer, that makes the product available on the market
  • Open-Source software steward: entity systematically contributing to or supporting specific open-source software intended for commercial activity

Obligations apply at several stages. Initial obligations apply from the conception stage, from the moment the product is being designed. These initial obligations mostly consist of:

  • Taking security into account when designing and implementing the product
  • Documenting the risks that were taken into account and their mitigations
  • Preparing for the continuous obligations by designing the processes that will be used e.g. for responsibly handling vulnerabilities in the product

The continuous obligations apply from the moment the product is placed on the market and throughout its lifetime, and refer to a concept known as the support period, which will be detailed in a later post dedicated to manufacturers. In short, they require:

  • Updating the documentation produced to meet the initial obligations
  • Making that documentation available to users and authorities
  • Responsibly handling vulnerabilities
  • Reporting vulnerabilities known to be exploited as well as security incidents

The last stage is when the entity behind the product stops operating. The final obligation is to make authorities and users aware of this cessation of operations.

This is a broad overview, and there is of course more to discuss. This is why subsequent posts in this series (stay tuned!) will go into the details for manufacturers, who bear most of the responsibilities, and for open-source software stewards, as they are a slightly special case, and one whose criteria Bootlin and many in the embedded community fit.
This post will still go into reporting obligations, as they apply broadly and will be the first to apply outside of conformity assessment bodies (not exactly the target audience of this series), in September 2026.

Reporting – ENISA and users

Starting September 11, 2026, all roles defined in the CRA (always for manufacturers and open-source software stewards, and under certain conditions for importers, distributors and authorized representatives) are under obligation to report:

  • any vulnerability in the software that is known to be exploited
  • any severe incident that impacts the security of the software

Both must be reported to the single reporting platform that ENISA has been charged with creating in Article 16.
This platform does not exist yet. Building it is the responsibility of ENISA, and it should ultimately have one endpoint for each member state’s Computer Security Incident Response Team (CSIRT).
The list of those CSIRTs should theoretically be available at https://csirtsnetwork.eu/homepage, but that page currently appears to be down. ENISA’s CNW can help with an educated guess: in the case of France, for instance, the CSIRT is the CERT-FR. Today, reporting incidents or vulnerabilities to the CERT-FR must be done by encrypted email, as indicated on this page (in French). But ultimately, it should own an endpoint of ENISA’s reporting platform.

The CRA specifies what it means by “severe” incidents. That includes any event that negatively affects, or could negatively affect, the software’s ability to protect the confidentiality, integrity or authenticity of “sensitive or important data or functions”, or even their availability. This will most likely be the case of any security incident.

The CRA explicitly addresses the introduction of malicious code into the software as a severe incident. The xz backdoor discovered earlier this year would be a prime example.

It is also specific as to how this reporting should happen:

  • A first notification should be submitted as soon as possible, and within 24 hours of the entity becoming aware. For an incident, it should state whether it is suspected to have been caused by malicious actions. For a vulnerability, it should include the member states where the product is known to be used.
  • As soon as possible, and within 72 hours, an initial assessment and mitigations should be provided.
  • As soon as possible, and within one month after the initial assessment above, the entity should publish a final report detailing the incident or vulnerability, its impact, the root cause and the mitigations.
    For a vulnerability, it should also include information about any malicious actor known to have exploited or to be exploiting it.

Reports after the first are only mandatory if they would add information.
In the case of an incident whose impact, root cause and mitigations are all known within 24 hours, only one notification needs to be sent.
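
To make this timeline concrete, here is an illustrative computation of the three deadlines using GNU date. The awareness date below is hypothetical, and keep in mind that the actual obligation is to report as soon as possible; these delays are only hard limits.

$ AWARE="2027-03-01 09:00 UTC"                # hypothetical moment of becoming aware
$ date -u -d "$AWARE + 24 hours"              # latest date for the first notification
$ date -u -d "$AWARE + 72 hours"              # latest date for the initial assessment
$ date -u -d "$AWARE + 72 hours + 1 month"    # latest date for the final report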

Conclusion

This first post should have given you some insights as to where you stand regarding the CRA. Though it might still be unclear what exactly you need to be doing, you should have a rough idea of what to expect and when to expect it.
The next post will go into the details of what manufacturers will need to do.

OP-TEE support for Microchip SAMA5D2 processors

Over the past few years, we have been contributing support for the Microchip SAMA5D2 processor family to the OP-TEE project, and we also helped with enabling OP-TEE on the Microchip SAMA7 processor family.

In this blog post, we propose a quick introduction to OP-TEE, and then detail a few changes that were made to add support for Microchip SAMA5 processors to the OP-TEE project.

Continue reading “OP-TEE support for Microchip SAMA5D2 processors”

Bootlin at Open Source Experience and SIDO in Paris, Dec 6-7

Paris will be hosting next weekend a combined event composed of the Open Source Experience and SIDO: the first is dedicated to open-source technologies, the second to IoT, AI, digital infrastructure and cybersecurity.

Open Source Experience

Thomas Petazzoni, Bootlin CEO, will be representing Bootlin at these events, and will also be participating in the round table Embedded systems security: a technical and organizational approach on December 7, at 2:30 PM UTC+1. The abstract of the round table is:

Security is a major issue. Embedded systems are increasingly complex and connected, making them more vulnerable. The aim of this round table is to discuss best practices for guaranteeing security.

Thomas will be speaking with Daniel Fages (Freelance), Eloi Bail (Savoir Faire Linux) and Jean-Charles Verdié (Canonical), and the round table will be moderated by Cédric Ravalec (Smile).

If you’re interested in discussing career, business or partnership opportunities with Bootlin, do not hesitate to contact Thomas Petazzoni ahead of the event to schedule a meeting.

Slides and videos of Bootlin talks at Live Embedded Event #2

The second edition of Live Embedded Event took place on June 3rd, exactly 6 months after the first edition. Even though there were a few issues with the online platform, it was once again great to learn new things about embedded, and share some of the work we’ve been doing at Bootlin on various topics. For the next edition, we plan to switch to a different online platform, hopefully providing a better experience.

But in the meantime, all videos of the event have been posted on the event’s YouTube channel. The talks from Bootlin have been posted on Bootlin’s YouTube channel.

Indeed, in addition to being part of the organizing committee, Bootlin prepared and delivered 5 talks as part of Live Embedded Event, covering different topics we have worked on in recent months for our customers.

Understanding U-Boot Falcon Mode and adding support for new boards, Michael Opdenacker

Slides [PDF]

Introduction to RAUC, Kamel Bouhara

Slides [PDF]

Security vulnerability tracking tools in Buildroot, Thomas Petazzoni

Slides [PDF]

Secure boot in embedded Linux systems, Thomas Perrot

Slides [PDF]

Device Tree overlays and U-boot extension board management, Köry Maincent

Slides [PDF]

Measured boot with a TPM 2.0 in U-Boot

A Trusted Platform Module, or TPM for short, is a small piece of hardware designed to provide various security functionalities: it can store secrets, ‘measure’ boot, and may act as an external cryptographic engine. The Trusted Computing Group (TCG) delivers a document called the TPM Interface Specification (TIS), which describes the architecture of such devices, how they are supposed to behave, and various details around these concepts.

These TPM chips comply either with the first specification (up to 1.2) or with the second specification (2.0+). The TPM 2.0 specification is not backward compatible, and it is the one this post is about. If you need more details, many documents are available at https://trustedcomputinggroup.org/.

Trusted Platform Module connected over SPI to a Marvell EspressoBin platform

Among the functions listed above, this blog post will focus on the measured boot functionality.

Measured boot principles

Measuring boot is a way to inform the last software stage whether someone has tampered with the platform. It is impossible to know what exactly has been corrupted, but knowing that someone has tampered with it is already enough to decide not to reveal secrets. Indeed, TPMs offer a small secure locker where users can store keys, passwords, authentication tokens, etc. These secrets are not exposed anywhere (unlike with any standard storage medium) and TPMs have the capability to release them only under specific conditions. Here is how it works.

Starting from a root of trust (typically the SoC boot ROM), each software stage of the boot process (BL1, BL2, BL31, BL33/U-Boot, Linux) is supposed to perform some measurements and store them in a safe place. A measurement is just a digest (say, a SHA256 hash) of a memory region. Usually each stage will ‘digest’ the next one. Each digest is then sent to the TPM, which merges it with the previous measurements.

The hardware feature used to store and merge these measurements is called Platform Configuration Registers (PCRs). At power-up, a PCR is set to a known value (usually either all 0x00s or all 0xFFs). Sending a digest to the TPM is called extending a PCR, because the chosen register extends its current value with the received digest, where || denotes concatenation:

PCR[x] := sha256(PCR[x] || digest)

This way, a PCR can only evolve in one direction and never go back unless the platform is reset.
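
To make this concrete, the extend operation is easy to reproduce outside of a TPM with standard tools. Here is a minimal sketch, assuming a SHA256 PCR bank starting at all zeroes and a hypothetical next-stage.bin file to measure:

$ head -c 32 /dev/zero > pcr.bin                                     # initial PCR value: 32 zero bytes
$ sha256sum next-stage.bin | cut -d' ' -f1 | xxd -r -p > digest.bin  # digest of the next stage, as raw bytes
$ cat pcr.bin digest.bin | sha256sum                                 # new PCR value after the extend

Repeating the last two steps with further measurements shows that the final PCR value depends on every digest received, and on the order in which they were received.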

In a typical measured boot flow, a TPM can be configured to disclose a secret only under a certain PCR state. Each software stage is in charge of extending a set of PCRs with digests of the next software stage. Once in Linux, user software may ask the TPM to deliver its secrets, but the only way to get them is to have all PCRs match a known pattern. This can only be obtained by extending the PCRs in the right order, with the right digests.

Linux support for TPM devices

A solid TPM 2.0 stack has been available for Linux for quite some time, in the form of the tpm2-tss and tpm2-tools projects. More specifically, a daemon called resourcemgr is provided by the tpm2-tss project; for people coming from the TPM 1.2 world, this is the equivalent of trousers. One can find commands ready to be used in the tpm2-tools repository, useful for testing purposes.

From the Linux kernel perspective, there are device drivers for at least SPI chips (one can have a look at the files named tpm2*.c and tpm_tis*.c in drivers/char/tpm for implementation details).
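
As an illustration, and assuming a recent version of tpm2-tools (4.x or later, where the commands were renamed), reading and extending a PCR from Linux looks like this:

$ tpm2_pcrread sha256:0                                              # read PCR 0 of the SHA256 bank
$ tpm2_pcrextend 0:sha256=$(sha256sum next-stage.bin | cut -d' ' -f1)
$ tpm2_pcrread sha256:0                                              # the value has been extended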

Bootlin’s contribution: U-Boot support for TPM 2.0

Back when we worked on this topic in 2018, there was no support for TPM 2.0 in U-Boot, but one of our customers needed it. So we implemented TPM 2.0 support and upstreamed it to U-Boot. Our 32-patch series was merged, bringing:

  • SPI TPMs compliant with the TCG TIS v2.0
  • Commands for U-Boot shell to do minimal operations (detailed below)
  • A test framework for regression detection
  • A sandbox driver emulating a TPM

Details of U-Boot commands

Available commands for v2.0 TPMs in U-Boot are currently:

  • STARTUP
  • SELF TEST
  • CLEAR
  • PCR EXTEND
  • PCR READ
  • GET CAPABILITY
  • DICTIONARY ATTACK LOCK RESET
  • DICTIONARY ATTACK CHANGE PARAMETERS
  • HIERARCHY CHANGE AUTH

With this set of functions, minimal handling is possible with the following sequence.

First, the TPM stack in U-Boot must be initialized with:

> tpm init

Then, the STARTUP command must be sent.

> tpm startup TPM2_SU_CLEAR

To enable full TPM capabilities, one must request to continue the self tests (or do them all again).

> tpm self_test full
> tpm self_test continue

This is enough to pursue measured boot, as one just needs to extend the PCRs as needed, giving 1) the PCR number and 2) the memory address where the digest is stored:

> tpm pcr_extend 0 0x4000000
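
The digest itself can be produced by U-Boot. As a hypothetical example (assuming CONFIG_CMD_HASH is enabled and addresses that fit your memory map), one could measure a kernel image and extend PCR 0 with the result:

> load mmc 0:1 0x42000000 zImage
> hash sha256 0x42000000 $filesize *0x4000000
> tpm pcr_extend 0 0x4000000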

Reading back the extended value is of course possible with:

> tpm pcr_read 0 0x4000000

Managing passwords means restricting some commands from being sent without prior authentication. This is also possible with the minimal set of commands recently committed, and there are two ways of implementing it. One is quite complicated and features the use of a token together with cryptographic derivations at each exchange. The other solution, less invasive, is to use a single password. Changing passwords was previously done with a single TAKE OWNERSHIP command, while today a CLEAR must precede a CHANGE AUTH. Each of them may act upon different hierarchies. Hierarchies are authority levels, each controlling a different set of commands. For the example, let’s use the LOCKOUT hierarchy: it controls the locking mechanism that blocks the TPM for a given amount of time after a number of failed authentications, to mitigate dictionary attacks.

> tpm clear TPM2_RH_LOCKOUT [<pw>]
> tpm change_auth TPM2_RH_LOCKOUT <new_pw> [<old_pw>]

Drawback of this implementation: as opposed to the token/hash solution, there is no protection against packet replay.

Please note that a CLEAR does much more than resetting passwords: it resets the entire TPM configuration.

Finally, the Dictionary Attack Mitigation (DAM) parameters can also be changed. It is possible to reset the failure counter, to change the lockout parameters (such as the maximum number of attempts before lockout), and even to disable the lockout entirely. One can then check that the parameters have been correctly applied.

> tpm dam_reset [<pw>]
> tpm dam_parameters 0xffff 1 0 [<pw>]
> tpm get_capability 0x0006 0x020e 0x4000000 4

In the above example, the failure counter is first reset. Then, the maximum number of tries before lockout is set to 0xffff, the delay before the failure counter is decremented is set to 1, and the lockout is entirely disabled; these parameters are for testing purposes. The third command is explained in the specification, but basically retrieves 4 values starting at capability 0x6, property index 0x20e: it displays the failure counter first, followed by the three parameters previously changed.

Limitation

Although TPMs are meant to be black boxes, the current U-Boot support is too light to really protect against replay attacks: one could spoof the bus and resend the exact same packets after taking ownership of the platform, in order to get the secrets out. Additional developments are needed in U-Boot to protect against these attacks. Additionally, even with this extra security level, all the above logic is only safe when used in the context of a secure boot environment.

Conclusion

Thanks to this work from Bootlin, U-Boot has basic support for TPM 2.0 devices connected over SPI. Do not hesitate to contact us if you need support or help around TPM 2.0 support, either in U-Boot or Linux.