The Open Source Summit Europe took place about a month ago in Vienna, and a large part of the Bootlin team attended the event, at which we also gave two talks.
At Bootlin, after such conferences, we also have a tradition of highlighting a number of talks we found interesting, and sharing this selection with our readers: we have asked each Bootlin engineer who attended Open Source Summit Europe 2024 to pick one talk they liked, and share a summary.
In this first blog post, you’ll find a selection of 3 talks: their videos are available, which gives you the ideal playlist for the upcoming cold and rainy evenings!
Give Me Back My GPIO Persistence! (Introducing the libgpiod GPIO-Manager) by Bartosz Golaszewski
Talk chosen by Bootlin engineer Hervé Codina
Since the migration from sysfs to libgpiod, GPIO value persistence is no longer guaranteed once the GPIO file descriptor is closed. The persistence offered by the sysfs interface is used by many user-space GPIO users, for instance to set a GPIO from a script run on some user-specific event. The lack of persistence support in libgpiod led people to keep using the deprecated sysfs interface. Persistence had been requested for a long time and is now available: it is implemented thanks to a new user-space daemon, gpio-manager, which keeps the GPIO file descriptor open and provides an API allowing other processes to interact with the GPIO.
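To illustrate the problem, here is a minimal sketch using the libgpiod v2 C API (the chip path and line offset are arbitrary assumptions, and error handling is kept to a bare minimum): the requested output value is only guaranteed while the request is held open, which is exactly what gpio-manager does on behalf of its clients.

```c
/* Minimal sketch of the persistence issue with the libgpiod v2 C API.
 * The chip path and line offset are arbitrary assumptions. */
#include <gpiod.h>

int main(void)
{
	unsigned int offset = 5; /* hypothetical line offset */
	struct gpiod_chip *chip = gpiod_chip_open("/dev/gpiochip0");
	struct gpiod_line_settings *settings = gpiod_line_settings_new();
	struct gpiod_line_config *line_cfg = gpiod_line_config_new();
	struct gpiod_line_request *request;

	if (!chip || !settings || !line_cfg)
		return 1;

	/* Request the line as an output driven to its active level. */
	gpiod_line_settings_set_direction(settings, GPIOD_LINE_DIRECTION_OUTPUT);
	gpiod_line_settings_set_output_value(settings, GPIOD_LINE_VALUE_ACTIVE);
	gpiod_line_config_add_line_settings(line_cfg, &offset, 1, settings);

	request = gpiod_chip_request_lines(chip, NULL, line_cfg);
	if (!request)
		return 1;

	/* The GPIO is guaranteed to keep the requested value only while this
	 * request (and the underlying file descriptor) stays open... */
	gpiod_line_request_release(request);
	/* ...once it is released, here explicitly or implicitly when the
	 * process exits, the value may not persist: this is the gap that
	 * gpio-manager fills by holding the request open in a daemon. */

	gpiod_line_config_free(line_cfg);
	gpiod_line_settings_free(settings);
	gpiod_chip_close(chip);

	return 0;
}
```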
Bartosz presented the gpio-manager daemon in his talk. He started by giving details about the differences between the old sysfs interface and the current GPIO character device interface, and explained why persistence was an issue. He then introduced the daemon, giving some details about its internal design and, in particular, its API based on D-Bus.
He presented a client of this daemon, the gpiocli program, which uses the D-Bus API to interact with the daemon, and gave some examples of possible interactions. Support for D-Bus, gpio-manager and gpiocli should be available in the next libgpiod release. The source code is already available in the master branch of the libgpiod git repository.
This support for GPIO persistence with libgpiod, together with the gpio-manager API to interact with GPIOs, is a great step forward. It fills a gap in the features available to user-space when interacting with GPIOs and removes one more reason to keep using the deprecated sysfs interface: this should encourage people to move to libgpiod.
Using Yocto to debug embedded devices crashes, by Etienne Cordonnier, Snap Inc
Talk chosen by Bootlin engineer Mathieu Dubois-Briand
When using Yocto, several options are available to debug crashes: Etienne Cordonnier’s talk gave an overall description of them.
As a first possibility, one might want to use binaries with debug symbols. Embedding all symbols in the image is generally impractical, as it tends to increase its size by an order of magnitude, but other options exist. First, debuginfod allows gdb to fetch symbols from a remote server, removing the need to embed them in the image. Alternatively, MiniDebugInfo can be used: symbols are shrunk so they only contain a subset of the information, such as function names, increasing the image size by only a tiny factor. Finally, a companion debug filesystem can be generated, providing the symbols to gdb in a setup using gdbserver. The first two methods are supported by Yocto thanks to the debuginfod and minidebuginfo distro features, and the last one thanks to the IMAGE_GEN_DEBUGFS variable.
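As a rough sketch of how this translates into configuration (assuming a recent Poky release; the exact feature and variable names are worth checking against the Yocto Project documentation for the release in use), this could go into local.conf:

```
# Let gdb fetch debug symbols on demand from a remote debuginfod server
DISTRO_FEATURES:append = " debuginfod"
# Keep only a minimal set of symbols (function names) in the binaries
DISTRO_FEATURES:append = " minidebuginfo"
# Generate a companion debug filesystem next to the regular image
IMAGE_GEN_DEBUGFS = "1"
IMAGE_FSTYPES_DEBUGFS = "tar.gz"
```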
Another useful debugging method is to use core dumps generated on user-space software crashes. These core dumps can be managed by systemd when the corresponding PACKAGECONFIG value is set, and retrieved using coredumpctl. The Yocto support suffers from some caveats, but fixes made by the presenter are on their way to mainline.
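As a minimal sketch (assuming the coredump PACKAGECONFIG option of the systemd recipe, as found in current openembedded-core), this can be enabled from local.conf:

```
# Have systemd catch and store core dumps; inspect them with coredumpctl
PACKAGECONFIG:append:pn-systemd = " coredump"
```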
The third method shown is useful to debug kernel crashes, with kernel dumps (kdump). Basically, when a kernel panic occurs, a second kernel is booted using kexec in a memory region reserved at boot. This leaves the existing memory untouched, so it can be copied for later analysis. There is no particular support needed on the Yocto side: one can use this feature by enabling the required kernel configuration options and adding the related tools to the image.
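A possible sketch of the pieces involved (kernel options and package name as in current upstream; the exact configuration depends on the platform, and the crash kernel memory is reserved with the crashkernel= kernel command line parameter):

```
# Kernel configuration fragment enabling kexec and crash dumps
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_PROC_VMCORE=y

# local.conf: ship the user-space tool used to load the crash kernel
IMAGE_INSTALL:append = " kexec-tools"
```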
Mesa3D Unveiled: From glDrawArrays(…) to GPU Magic by Christian Gmeiner
Talk chosen by Bootlin engineer Grégory Clement
The graphics stack is a complex piece of software and this talk provided a good introduction to how it works. In this presentation, Christian Gmeiner did not only focus on Mesa3D but also gave some context. Here is a rough summary of what was presented, but watching the talk will give more details.
First, he explained why Mesa 3D was crucial for embedded systems, which very often have long-term support requirements (10-20 years). Closed-source GPU drivers make it difficult to update and secure these systems. Mesa 3D, as an open-source solution, provides more control and flexibility, making it easier to meet these long-term support goals.
Then he gave a brief overview of how GPUs work, explaining the concepts of jobs, fences, and buffers. He also explained the difference between CPUs and GPUs, highlighting that GPUs are designed to perform specific operations in parallel.
After this, he spoke about the core of his talk: describing Mesa 3D’s architecture, which includes:
- Gallium: a middle-ground abstraction layer that exposes all graphics hardware services in a straightforward manner.
- State Tracker: translates the Mesa core API and functions into something easier to write drivers for.
- libDRM: talks to the kernel driver, allowing querying GPU capabilities, allocating buffers, and submitting jobs.
He then explained that Mesa3D includes a generic toolchain, which provides a GLSL compiler that takes shaders as input and outputs NIR (New Intermediate Representation). This allows various optimizations to be shared across different drivers.
He showed how a shader gets transformed from GLSL to NIR and then to GPU assembly. To illustrate this, he used the example of the vertex shader and fragment shader used in the red triangle demo, explaining how they are processed by the GLSL compiler and then turned into GPU assembly.
Finally, he explained that a command stream is created to bring the GPU into a well-defined state for the actual draw command. This involves setting up buffers, clip planes, and other necessary components.
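To give a concrete idea of what enters this pipeline, here is a hedged sketch of a red triangle demo (not necessarily the exact code shown in the talk; EGL context setup is omitted): the two GLSL shaders are what Mesa's compiler turns into NIR and eventually GPU assembly, and the final glDrawArrays() call from the talk title is where the driver builds the command stream.

```c
/* Hedged sketch of a "red triangle" demo: not the exact code from the talk,
 * but representative of what an application hands over to Mesa. */
#include <GLES2/gl2.h>

/* Vertex shader: simply forwards the vertex position. */
static const char *vs_src =
	"attribute vec4 pos;\n"
	"void main() { gl_Position = pos; }\n";

/* Fragment shader: outputs a constant red color. */
static const char *fs_src =
	"precision mediump float;\n"
	"void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

static GLuint compile(GLenum type, const char *src)
{
	GLuint shader = glCreateShader(type);

	/* The GLSL source is handed to Mesa's shader compiler here. */
	glShaderSource(shader, 1, &src, NULL);
	glCompileShader(shader);

	return shader;
}

void draw_red_triangle(void)
{
	static const GLfloat verts[] = {
		 0.0f,  0.5f,
		-0.5f, -0.5f,
		 0.5f, -0.5f,
	};
	GLuint prog = glCreateProgram();

	glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
	glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
	glBindAttribLocation(prog, 0, "pos");
	/* At link time the shaders are optimized (through NIR in Mesa) and
	 * eventually lowered to GPU-specific assembly. */
	glLinkProgram(prog);
	glUseProgram(prog);

	glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, verts);
	glEnableVertexAttribArray(0);

	/* The driver emits the command stream that puts the GPU in a
	 * well-defined state and triggers the actual draw. */
	glDrawArrays(GL_TRIANGLES, 0, 3);
}
```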