Back from the Embedded Linux Conference: selection of talks #1

As we wrote in a previous blog post, 11 engineers from Bootlin attended the Embedded Linux Conference in Seattle in April. We have a tradition after such an event of sharing with you a selection of talks that we found useful. To that end, we ask each of our engineers who participated in the conference to pick one talk they would like to highlight, and to write a short summary of and feedback on that talk. In this first installment of this series of blog posts, we’ll share our selection of the first 4 talks.

Talk Real-Time Linux: What Is Next?, by Daniel Bristot de Oliveira, Sebastian Siewior, Kate Stewart

Talk chosen by Bootlin engineer Maxime Chevallier, who is also the author and trainer of our Real-time Linux with PREEMPT_RT training course.

With every release, the Preempt-RT patchset gets closer and closer to being fully merged into the mainline tree. While there are still some issues left to be addressed, namely printk, the panel discussion was a good opportunity to discuss what’s next: what can we expect when Preempt-RT is merged, and what are the next steps going to be?

Over the years, Preempt-RT and its developer community have used this cozy corner of the Linux kernel to work on new, advanced and complex features that eventually made it into the mainline kernel. Things aren’t going to change much in that regard: efforts to improve scheduling or, lately, RT throttling are now being made directly in the mainline tree.

Another point that was brought up is the continuing need to educate users on how to use Preempt-RT. For people who need a real-time Linux-based operating system, using Preempt-RT is just the beginning of the journey: most of the work really is about knowing how the entire system behaves in real-life conditions, and avoiding latencies coming from the hardware, the kernel, or user applications. This won’t change much after Preempt-RT lands in mainline; the need for education stays the same. Proposals were also made to improve the Real-Time Linux wiki, so that it can be used as a centralized place to share configuration examples and best practices.

Tools such as rtla have however made it much easier for users to gather and share information on the issues they encounter. This tool can be expected to gain more and more traction, and in turn help users get familiar with real-time debugging more easily.

We’re all very excited to see the hard work of the Preempt-RT developers make it into mainline Linux, but it was clear that this is going to be just another step in the long journey of the Real-Time Linux project.

Compound Interest – Dealing with Two Decades of Technical Debt in Embedded Linux, by Bartosz Golaszewski, Linaro

Talk chosen by Bootlin engineer Luca Ceresoli

In this talk, with some humor and lots of serious considerations, Bartosz discussed the high cost of “technical debt”, i.e. the cost of suboptimal solutions used in software development and of the workarounds and hacks that accumulate over time to avoid the effort of fixing the initial design.

By implementing a suboptimal software component, a developer borrows time, which will have to be paid back when using this suboptimal software component (this is “technical debt”, by analogy with “financial debt”). Later on, one can add workarounds and hacks instead of fixing the original design flaw, thus accumulating “compound interest”. The problem is much worse when the suboptimal design touches programming interfaces (especially the user space APIs!).

The problem with APIs is that after they are added, people will inevitably start using them in new ways, some of which were not even considered initially. This makes future restructuring more and more complex, as none of the existing use cases should be broken.

In the Linux kernel this usually happens when a subsystem has a lot of users, so modifying its internal API is extremely costly. Another common reason is that people implementing a new subsystem tend to start by copying an existing subsystem, thus copying the same errors – and duplicating technical debt. Bartosz finds that the lack of a good deprecation mechanism in the Linux kernel exacerbates this problem.

The GPIO subsystem, which Bartosz maintains, is a good example. It has a long history, predating the device tree and the device driver model; it has lots of providers and many more users; and it started as a very naïve set of function prototypes that evolved over time. It provides a number-based sysfs interface that is very problematic, and a new, better interface, called gpiod, has been developed to replace it.
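To give a rough idea of what the newer interface looks like from user space, here is a minimal, hypothetical sketch using the libgpiod v1.x C API (the chip path, line offset and consumer name are made-up values, and the newer libgpiod v2 API differs):

```c
#include <gpiod.h>
#include <stdio.h>

int main(void)
{
    /* Open a GPIO controller by path instead of relying on global GPIO numbers */
    struct gpiod_chip *chip = gpiod_chip_open("/dev/gpiochip0");
    if (!chip) {
        perror("gpiod_chip_open");
        return 1;
    }

    /* Lines are addressed by their offset within the chip */
    struct gpiod_line *line = gpiod_chip_get_line(chip, 17);
    if (!line || gpiod_line_request_output(line, "demo", 0) < 0) {
        perror("line request");
        gpiod_chip_close(chip);
        return 1;
    }

    gpiod_line_set_value(line, 1);  /* drive the line high */

    /* The requested state is only guaranteed while the request is held,
     * which is the persistence problem mentioned later in this post. */
    gpiod_line_release(line);
    gpiod_chip_close(chip);
    return 0;
}
```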

The large variety of GPIO users (some running in atomic context) and GPIO providers (some of which may sleep) created a long-standing issue, as serializing calls into the GPIO subsystem appeared close to infeasible. However, Bartosz found a good solution using SRCU, which has been merged in v6.9. One part of the technical debt has finally been paid back.
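For readers unfamiliar with SRCU (sleepable RCU), the sketch below shows the general read/update pattern it provides; this is hypothetical kernel-style C with made-up demo_* names, not the actual GPIO patch:

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/srcu.h>

struct demo_data {
	int value;
};

static struct demo_data __rcu *demo_ptr;
DEFINE_STATIC_SRCU(demo_srcu);

/* Read side: lightweight, and unlike plain RCU the critical section
 * is allowed to sleep. */
static int demo_read(void)
{
	struct demo_data *d;
	int idx, val = -1;

	idx = srcu_read_lock(&demo_srcu);
	d = srcu_dereference(demo_ptr, &demo_srcu);
	if (d)
		val = d->value;
	srcu_read_unlock(&demo_srcu, idx);

	return val;
}

/* Update side: publish the new data, then wait for every pre-existing
 * reader to finish before freeing the old version. */
static void demo_update(struct demo_data *new_data)
{
	struct demo_data *old;

	old = rcu_dereference_protected(demo_ptr, true);
	rcu_assign_pointer(demo_ptr, new_data);
	synchronize_srcu(&demo_srcu);
	kfree(old);
}
```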

The debt that is the hardest to pay back is the one originating from user space APIs, which just cannot be removed, for backward compatibility reasons. People have long been encouraged to use the new gpiod interface, but some still complain that it has no easy way to set a GPIO output persistently. Bartosz is working on a solution to this by implementing a D-Bus protocol to set GPIOs and a related daemon that is able to keep GPIOs set persistently. Bartosz will be very happy if people test it and report their findings! Once finished, this will leave no excuse for not switching to the gpiod user space API.

Overall we found this talk enlightening about how bad programming design creates technical debt, how difficult it is to pay it back, and what to do to minimize it in the first place.

How (Not) to Get the Vendor Driver Merged Upstream, by Dmitry Baryshkov, Linaro

Talk chosen by Bootlin engineer Bastien Curutchet

In this talk, Dmitry Baryshkov presented the fundamentals of a good contribution to the Linux kernel.

He started with the classic steps to follow when you want to contribute, and mentioned the checks that have to be made before submitting a patch, such as:

  • ./scripts/checkpatch.pl to check code style compliance
  • make dtbs_check and make dt_bindings_check when changes are made in Device Tree bindings
  • make W=1 to fix all warnings

Then, Dmitry presented the main b4 commands that can be used to prepare and send patches upstream. He also gave advice about the contents of a patch series: the kernel must compile and work after the application of each patch; it is better to split a big patch into small ones and a big patch series into smaller ones; the commit log must describe why the introduced change is needed. Krzysztof Kozlowski also insisted on this last point in another good talk: giving context to maintainers helps them during their review.

Finally, he talked about good and bad habits when contributing. His first piece of advice is “Don’t fear asking questions”: if there is something that you don’t understand in a reviewer’s comment or question, don’t ignore it, and ask for details.
The second thing to do is to leave people time to review. There is no need to send a new iteration right after the first comment you get: it’s better to leave others time to add more comments (or even debate some points) before sending your next iteration. The golden rule would be: never do more than one iteration per day, or even one per week for a big patch series. According to him, if you don’t get any answer at all for two weeks, it’s ok to resend your patch series or to ping the maintainer.

As a conclusion, Dmitry pointed out that the first contribution is the hardest, and that things get easier and easier with the following ones.

As a fairly new contributor to the Linux kernel, I really appreciated this talk, because Dmitry went through every question I asked myself and my colleagues before and during my first contribution. I’d say this talk can be really useful for anyone who would like to start contributing to the Linux kernel project.

Talk Maximizing SD Card Life, Performance, and Monitoring with KrillKounter, by Andrew Murray

Talk chosen by Bootlin engineer João Marcos Costa

The apparent unreliability of SD Cards is a common complaint from customers, when they find that data has been corrupted or that the throughput announced by the manufacturer is far from reality.

In this talk, Andrew Murray starts with an overview of what a typical SD Card contains: the card’s connectors, a microcontroller (the SD Controller), and a NAND memory chip. However, as he points out, NAND is fundamentally unreliable.

Such unreliability comes mainly from the degradation of the oxide layer in the floating-gate MOSFET. This transistor is the building block of both NAND and NOR Flash, each with an inherently different write granularity. The talk focuses on NAND Flash, as it can be found everywhere: SD Cards, eMMC, SSDs, USB sticks, etc.

NAND’s access limitations are a consequence of its write granularity: a whole block of pages needs to be erased before you can write to a single page. The presentation illustrates this with an example where a single page must be rewritten in a 3×3 block (i.e., a block containing 9 pages). A first approach is to write an entirely new block: the “new” first page is written into this block, and the previous eight pages are copied over from the old block. The drawback here is called write amplification: we end up with 9 write operations instead of only one.
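As a quick, hypothetical illustration of that arithmetic, mirroring the 3×3 block example above:

```c
#include <stdio.h>

int main(void)
{
    /* 3x3 block from the talk's example: 9 pages per erase block */
    const unsigned int pages_per_block = 9;
    /* the host only wanted to update a single page */
    const unsigned int pages_modified = 1;

    /* naive approach: copy the whole block to a freshly erased one,
     * so every page of the block is physically written again */
    unsigned int physical_writes = pages_per_block;

    double write_amplification = (double)physical_writes / pages_modified;
    printf("write amplification factor: %.1f\n", write_amplification); /* 9.0 */

    return 0;
}
```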

The lifetime of NAND Flash is measured in Program/Erase cycles (P/E cycles), and this value changes according to the memory cell architecture: SLC (Single-Level Cell, 1 bit per cell) has an approximate lifespan of 10000 P/E cycles, while MLC (Multi-Level Cell, 2 bits per cell) and TLC (Triple-Level Cell, 3 bits per cell) have approximate lifespans of 3000 and 1000 P/E cycles respectively.
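These figures can be combined with the write amplification idea above into a back-of-the-envelope endurance estimate, using the common rule of thumb that total host writes ≈ capacity × P/E cycles ÷ write amplification; the write amplification factor below is an assumed value, not a number from the talk:

```c
#include <stdio.h>

int main(void)
{
    const double capacity_gb = 8.0;  /* consumer MLC card, as in the experiment below */
    const double pe_cycles = 3000.0; /* approximate MLC endurance */
    const double waf = 3.0;          /* assumed write amplification factor */

    /* rule of thumb: total host writes ~= capacity * P/E cycles / WAF */
    double endurance_tb = capacity_gb * pe_cycles / waf / 1000.0;

    printf("rough endurance: about %.0f TB of host writes\n", endurance_tb);

    return 0;
}
```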

One of the talk’s main points is the experimental testing exploring the lifetime of 4 regular 8 GB MLC consumer SD Cards by writing to them until they fail. Both sequential and random writes were used. For the sequential writes, the block size used was 512 KB; for the random writes, three different sizes were used: 4 KB, 128 KB and 512 KB. The overall conclusion was that sequential writes give a significantly higher throughput and lower degradation, and that for random writes, the larger the block size, the higher the throughput and the lower the degradation. In short, large sequential accesses are usually better. The whole experiment was tracked with KrillKounter, an open source daemon that logs block layer statistics and makes it possible to determine the wear on an SD Card.

The talk was particularly instructive, as it starts from a practical overview of SD Cards and NAND Flash, then dives for a moment into electronics specifics to explain how the oxide degradation works, and finally presents an experiment confronting the theoretical lifespan figures of SD Cards with reality. As a bonus, it provides us with KrillKounter, a tool to analyze the wear on SD Cards.


Author: Thomas Petazzoni

Thomas Petazzoni is Bootlin's co-owner and CEO. Thomas joined Bootlin in 2008 as a kernel and embedded Linux engineer, became CTO in 2013, and co-owner/CEO in 2021. More details...
