Buildroot is a widely used embedded Linux build system. A large number of companies and projects use Buildroot to produce customized embedded Linux systems for a wide range of embedded devices. Most of those devices are now connected to the Internet, and are therefore subject to attacks if the software they run is not regularly updated to address security vulnerabilities.
The Buildroot project publishes a new release every three months, with each release providing a mix of new features, new packages, package updates, build infrastructure improvements… and security fixes. However, until earlier this year, as soon as a new version was published, the maintenance of the previous version stopped. This meant that, in order to stay up to date in terms of security fixes, users essentially had two options:
Update their Buildroot version regularly. The big drawback is that they get not only security updates, but also many other package updates, which may be problematic when a system is in production.
Stick with their original Buildroot version, carefully monitor CVEs and security vulnerabilities in the packages they use, and update the corresponding packages, which obviously is a time-consuming process.
Starting with 2017.02, the Buildroot community has decided to offer one long term supported release every year: 2017.02 will be supported for one year in terms of security updates and bug fixes, until 2018.02 is released. The usual three-month release cycle still applies, with 2017.05 and 2017.08 already released, but users interested in a stable Buildroot version that is kept updated for security issues can stay on 2017.02.
Since 2017.02 was released on February 28th, 2017, six minor versions were published on a fairly regular basis, almost every month, except in August:
With about 60 to 130 commits between each minor version, it is relatively easy for users to check what has been changed, and evaluate the impact of upgrading to the latest minor version to benefit from the security updates. The commits integrated in those minor versions are carefully chosen with the idea that users should be able to easily update existing systems.
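For example, assuming you track the Buildroot Git repository and its release tags, reviewing the changes brought by a minor version is a single Git command:

git log --oneline 2017.02.4..2017.02.5   # commits added by the 2017.02.5 minor release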
In total, those six minor versions include 526 commits, of which 183 commits were security updates, representing roughly one third of the total number of commits. The other commits have been:
140 commits to fix build issues
57 commits to bump versions of packages for bug fixes. These almost exclusively include updates to the Linux kernel, using its LTS versions. For other packages, we are more conservative and generally don’t upgrade them.
17 commits to address issues in the licensing description of the packages
186 commits to fix miscellaneous issues, ranging from runtime issues affecting packages to bugs in the build infrastructure
The Buildroot community has already received a number of bug reports, patches and suggestions specifically targeting the 2017.02 LTS version, which indicates that developers and companies have started to adopt it.
Therefore, if you are interested in using Buildroot for a product, you should probably consider using the LTS version! We very much welcome feedback on this version, and help in monitoring the security vulnerabilities affecting software packages in Buildroot.
As most people know, getting GPU-based 3D acceleration to work on ARM platforms has always been difficult, due to the closed nature of the support for such GPUs. Most vendors provide closed-source, binary-only OpenGL implementations in the form of binary blobs, whose quality depends on the vendor.
This situation is getting better and better, through vendor-funded initiatives such as the ones for the Broadcom VC4 and VC5, or through reverse engineering projects like Nouveau on Tegra SoCs, Etnaviv on Vivante GPUs and Freedreno on Qualcomm GPUs. However, there are still GPUs for which there is no option to use a free software stack: PowerVR from Imagination Technologies and Mali from ARM (even though there is some progress on the reverse engineering effort).
Allwinner SoCs use either a Mali GPU from ARM or a PowerVR from Imagination Technologies, and therefore support for OpenGL on those platforms using a mainline Linux kernel has always been a problem. This is further complicated by the fact that Allwinner is mostly interested in Android, which uses a different C library, so the Android binary blobs cannot be used directly on traditional glibc-based systems (except through libhybris).
However, we are happy to announce that Allwinner gave us clearance to publish the userspace binary blobs that allow OpenGL to work on Allwinner platforms that use a Mali GPU from ARM, together with a recent mainline Linux kernel. Of course, those are closed-source binary blobs and not a nice, fully open-source solution, but this nonetheless allows everyone to have working OpenGL support while taking advantage of all the benefits of a recent mainline Linux kernel. We have successfully used those binary blobs on customer projects involving the Allwinner A33 SoC, and they should work on all Allwinner SoCs using the Mali GPU.
In order to get GPU support to work on your Allwinner platform, you will need:
The kernel-side driver, available on Maxime Ripard’s Github repository. This is essentially the Mali kernel-side driver from ARM, plus a number of build and bug fixes to make it work with recent mainline Linux kernels.
The Device Tree description of the GPU. We introduced Device Tree bindings for Mali GPUs in the mainline kernel a while ago, so that Device Trees can describe such GPUs. Such a description has been added for the Allwinner A23 and A33 SoCs as part of this commit.
The userspace blob, which is available in Free Electrons' GitHub repository. It currently provides the r6p2 version of the driver, with support for both fbdev and X11 systems. Hopefully, we will gain access to newer versions in the future, with additional features (such as GBM support).
If you want to use it in your system, the first step is to have the GPU definition in your device tree if it’s not already there. Then, you need to compile the kernel module:
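As an illustration, a possible build sequence looks like the following; the repository location and the build.sh options reflect the state of the driver repository at the time of writing, so treat them as assumptions and check the repository's README:

git clone https://github.com/mripard/sunxi-mali.git
cd sunxi-mali
export CROSS_COMPILE=arm-linux-gnueabihf-     # your cross-compiler prefix
export KDIR=/path/to/your/mainline/linux      # your kernel build tree
export INSTALL_MOD_PATH=$TARGET_DIR           # your target root filesystem
./build.sh -r r6p2 -b                         # build the module
./build.sh -r r6p2 -i                         # install it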
It should install the mali.ko Linux kernel module into the target filesystem.
Now, you can copy the OpenGL userspace blobs that match your setup, most likely the fbdev or X11-dma-buf variant. For example, for fbdev:
git clone https://github.com/free-electrons/mali-blobs.git
cp -a r6p2/fbdev/lib/lib_fb_dev/lib* $TARGET_DIR/usr/lib
You should be all set. Of course, you will have to link your OpenGL applications or libraries against those user-space blobs. You can check that everything works using OpenGL test programs such as es2_gears for example.
The Linux 4.13 release gathers 13006 non-merge commits, amongst which 239 were made by Free Electrons engineers. According to the LWN article on 4.13 statistics, this makes Free Electrons the 13th contributing company by number of commits, and the 10th by lines changed.
The most important contributions from Free Electrons for this release have been:
In the RTC subsystem
Alexandre Belloni introduced a new method for registering RTC devices, with one step for the allocation and one step for the registration itself, which makes it possible to solve race conditions in a number of drivers.
Alexandre Belloni added support for exposing the non-volatile memory found in some RTC devices through the Linux kernel nvmem framework, making them usable from userspace. A few drivers were changed to use this new mechanism.
In the MTD/NAND subsystem
Boris Brezillon did a large number of fixes and minor improvements in the NAND subsystem, both in the core and in a few drivers.
Thomas Petazzoni contributed support for on-die ECC, specifically for Micron NANDs. This makes it possible to use the ECC calculation capabilities of the NAND chip itself, as opposed to using software ECC (calculated by the CPU) or ECC done by the NAND controller.
Thomas Petazzoni contributed a few improvements to the FSMC NAND driver, used on ST SPEAr platforms. The main improvement is to support the ->setup_data_interface() callback, which makes it possible to configure optimal timings in the NAND controller.
Support for Allwinner ARM platforms
Alexandre Belloni improved the sun4i PWM driver to use the so-called atomic API and support hardware read out.
Antoine Ténart improved the sun4i-ss cryptographic engine driver to support the Allwinner A13 processor, in addition to the already supported A10.
Maxime Ripard contributed HDMI support for the Allwinner A10 processor (in the DRM subsystem) and a number of related changes to the Allwinner clock support.
Quentin Schulz improved the support for battery charging through the AXP20x PMIC, used on Allwinner platforms.
Support for Atmel ARM platforms
Alexandre Belloni added suspend/resume support for the Atmel SAMA5D2 clock driver. This is part of a larger effort to implement the backup mode for the SAMA5D2 processor.
Alexandre Belloni added suspend/resume support in the tcb_clksrc driver, used as the clocksource and clockevent device on Atmel SAMA5D2.
Alexandre Belloni cleaned up a number of drivers, removing support for non-DT probing, which is possible now that the AVR32 architecture has been dropped. Indeed, the AVR32 processors used to share the same drivers as the Atmel ARM processors.
Alexandre Belloni added the core support for the backup mode on Atmel SAMA5D2, a suspend/resume state with significant power savings.
Boris Brezillon switched Atmel platforms to use the new binding for the EBI and NAND controllers.
Boris Brezillon added support for timing configuration in the Atmel NAND driver.
Quentin Schulz added suspend/resume support to the Bosch m_can driver, used on Atmel platforms.
Support for Marvell ARM platforms
Antoine Ténart contributed a completely new driver (3200+ lines of code) for the Inside Secure EIP197 cryptographic engine, used in the Marvell Armada 7K and 8K processors. He also subsequently contributed a number of fixes and improvements for this driver.
Antoine Ténart improved the existing mvmdio driver, used to communicate with Ethernet PHYs over MDIO on Marvell platforms, to support the XSMI variant found on Marvell Armada 7K/8K, which is used to communicate with 10G-capable PHYs.
Antoine Ténart contributed minimal support for 10G Ethernet in the mvpp2 driver, used on Marvell Armada 7K/8K. For now, the driver still relies on low-level initialization done by the bootloader, but additional changes in 4.14 and 4.15 will remove this limitation.
Grégory Clement added a new pinctrl driver to configure the pin-muxing on the Marvell Armada 37xx processors.
Grégory Clement did a large number of changes to the clock drivers used on the Marvell Armada 7K/8K processors to prepare the addition of pinctrl support.
Grégory Clement added support for Marvell Armada 7K/8K to the existing mvebu-gpio driver.
Thomas Petazzoni added support for the ICU, a specialized interrupt controller used on the Marvell Armada 7K/8K, for all devices located in the CP110 part of the processor.
Thomas Petazzoni removed a work-around to properly resume per-CPU interrupts on the older Marvell Armada 370/XP platforms.
Support for RaspberryPi platforms
Boris Brezillon added runtime PM support to the HDMI encoder driver used on RaspberryPi platforms, and contributed a few other fixes to the VC4 DRM driver.
Free Electrons engineers are not only contributors, but also maintainers of various subsystems in the Linux kernel, which means they are involved in the process of reviewing, discussing and merging patches contributed to those subsystems:
Maxime Ripard, as the Allwinner platform co-maintainer, merged 113 patches from other contributors
Boris Brezillon, as the MTD/NAND maintainer, merged 62 patches from other contributors
Alexandre Belloni, as the RTC maintainer and Atmel platform co-maintainer, merged 57 patches from other contributors
Grégory Clement, as the Marvell EBU co-maintainer, merged 47 patches from other contributors
Here is the commit-by-commit detail of our contributions to 4.13:
All our bleeding edge toolchains have been updated, with the latest version of the toolchain components:
gcc 7.2.0, which was released 2 days ago
glibc 2.26, which was released 2 weeks ago
Those bleeding edge toolchains are now based on Buildroot 2017.08-rc2, which brings a nice improvement: the host tools (gcc, binutils, etc.) are no longer linked statically against gmp, mpfr and other host libraries. They are dynamically linked against them with an appropriate rpath encoded into the gcc and binutils binaries to find those shared libraries regardless of the installation location of the toolchain.
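This can be verified on an extracted toolchain with readelf; the binary name below depends on the target tuple, so take it as an illustration:

readelf -d bin/arm-linux-gcc | grep -i rpath   # should show an rpath pointing inside the toolchain's own lib/ directory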
However, due to gdb 8.0 requiring a C++11 compiler on the host machine (at least gcc 4.8), our bleeding edge toolchains are now built in a Debian Jessie system instead of Debian Squeeze, which means that at least glibc 2.14 is needed on the host system to use them.
The only toolchains for which the tests are not successful are the MIPS64R6 toolchains, due to the Linux kernel not building properly for this architecture with gcc 7.x. This issue has already been reported upstream.
Stable toolchain updates
We haven’t changed the component versions of our stable toolchains, but we made a number of fixes to them:
The armv7m and m68k-coldfire toolchains have been rebuilt with a fixed version of elf2flt that makes the toolchain linker directly usable. This fixes building the Linux kernel using those toolchains.
The mips32r5 toolchain has been rebuilt with NaN 2008 encoding (instead of NaN legacy), which makes the resulting userspace binaries actually executable by the Linux kernel, which expects NaN 2008 encoding on mips32r5 by default.
Most MIPS toolchains for musl have been rebuilt, with Buildroot fixes for the creation of the dynamic linker symbolic link. This has no effect on the toolchain itself, but allows the tests under Qemu to work properly and validate the toolchains.
Each architecture now has a page that lists all available toolchain versions. This makes it easy to find a toolchain that matches your requirements (in terms of gcc version, kernel headers version, etc.). See All aarch64 toolchains for an example.
At the end of 2016, the MIPI consortium finalized the first version of its I3C specification, a new communication bus that aims at replacing older busses like I2C or SPI. According to the specification, I3C gets closer to SPI data rates while requiring fewer pins, and adds interesting mechanisms such as in-band interrupts, hotplug capability and automatic discovery of devices connected on the bus. In addition, I3C provides backward compatibility with I2C: I3C and legacy I2C devices can be connected on a common bus controlled by an I3C master.
For more details about I3C, we suggest reading the MIPI I3C Whitepaper, as unfortunately MIPI has not publicly released the specifications for this protocol.
For the last few months, Free Electrons engineer Boris Brezillon has been working with Cadence to develop a Linux kernel subsystem to support this new bus, as well as Cadence’s I3C master controller IP. We have now posted the first version of our patch series to the Linux kernel mailing list for review, and we already received a large number of very useful comments from the kernel community.
Free Electrons is proud to be pioneering the support for this new bus in the Linux kernel, and hopes to see other developers contribute to this subsystem in the near future!
Linus Torvalds released the 4.12 Linux kernel a week ago, in what is the second biggest kernel release ever by number of commits. As usual, LWN had very nice coverage of the major new features and improvements: first part, second part and third part.
LWN has also published statistics about the Linux 4.12 development cycle, showing:
Free Electrons as the #14 contributing company by number of commits, with 221 commits, between Broadcom (230 commits) and NXP (212 commits)
Free Electrons as the #14 contributing company by number of changed lines, with 16636 lines changed, just two lines less than Mellanox
Free Electrons engineer and MTD NAND maintainer Boris Brezillon as the #17 most active contributor by number of lines changed.
Our most important contributions to this kernel release have been:
On Atmel AT91 and SAMA5 platforms:
Alexandre Belloni has continued to upstream the support for the SAMA5D2 backup mode, which is a very deep suspend to RAM state, offering very nice power savings. Alexandre touched the core code in arch/arm/mach-at91 as well as pinctrl and irqchip drivers
Boris Brezillon has converted the Atmel PWM driver to the atomic API of the PWM subsystem, implemented suspend/resume and did a number of fixes in the Atmel display controller driver, and also removed the no longer used AT91 Parallel ATA driver.
Quentin Schulz improved the suspend/resume hooks in the atmel-spi driver to support the SAMA5D2 backup mode.
On Allwinner platforms:
Mylène Josserand has made a number of improvements to the sun8i-codec audio driver that she contributed a few releases ago.
Maxime Ripard added devfreq support to dynamically change the frequency of the GPU on the Allwinner A33 SoC.
Quentin Schulz added battery charging and ADC support to the X-Powers AXP20x and AXP22x PMICs, found on Allwinner platforms.
Quentin Schulz added a new IIO driver to support the ADCs found on numerous Allwinner SoCs.
Quentin Schulz added support for the Allwinner A33 built-in thermal sensor, and used it to implement thermal throttling on this platform.
On Marvell platforms:
Antoine Ténart contributed Device Tree changes to describe the cryptographic engines found in the Marvell Armada 7K and 8K SoCs. For now only the Device Tree description has been merged, the driver itself will arrive in Linux 4.13.
Grégory Clement has contributed a pinctrl and GPIO driver for the Marvell Armada 3720 SoC (Cortex-A53 based)
Grégory Clement has improved the Device Tree description of the Marvell Armada 3720 and Marvell Armada 7K/8K SoCs and corresponding evaluation boards: SDHCI and RTC are now enabled on Armada 7K/8K, USB2, USB3 and RTC are now enabled on Armada 3720.
Thomas Petazzoni made a significant number of changes to the mvpp2 network driver, finally adding support for the PPv2.2 version of this Ethernet controller. This made it possible to enable network support on the Marvell Armada 7K/8K SoCs.
Thomas Petazzoni contributed a number of fixes to the mv_xor_v2 dmaengine driver, used for the XOR engines on the Marvell Armada 7K/8K SoCs.
Thomas Petazzoni cleaned up the MSI support in the Marvell pci-mvebu and pcie-aardvark PCI host controller drivers, which made it possible to remove a no-longer-used MSI kernel API.
On the ST SPEAr600 platform:
Thomas Petazzoni added support for the ADC available on this platform, by adding its Device Tree description and fixing a clock driver bug
Thomas did a number of small improvements to the Device Tree description of the SoC and its evaluation board
Thomas cleaned up the fsmc_nand driver, which is used for the NAND controller driver on this platform, removing lots of unused code
In the MTD NAND subsystem:
Boris Brezillon implemented a mechanism to allow vendor-specific initialization and detection steps to be added on a per-NAND-chip basis. As part of this effort, he has split into multiple files the vendor-specific initialization sequences for Macronix, AMD/Spansion, Micron, Toshiba, Hynix and Samsung NANDs. This work will make it easier in the future to exploit the vendor-specific features of different NAND chips.
In the DRM subsystem:
Maxime Ripard added a display panel driver for the ST7789V LCD controller
In addition, several Free Electrons engineers are also maintainers of various kernel subsystems. During this release cycle, they reviewed and merged a number of patches from kernel contributors:
Maxime Ripard, as the Allwinner co-maintainer, merged 94 patches
Boris Brezillon, as the NAND maintainer and MTD co-maintainer, merged 64 patches
Alexandre Belloni, as the RTC maintainer and Atmel co-maintainer, merged 38 patches
Grégory Clement, as the Marvell EBU co-maintainer, merged 32 patches
The details of all our contributions for this release:
For all embedded Linux developers, cross-compilation toolchains are part of the basic tool set, as they make it possible to build and debug code for a specific CPU architecture. Until a few years ago, CodeSourcery provided a lot of high-quality pre-compiled toolchains for a wide range of architectures, but it has progressively stopped doing so. Linaro provides some freely available toolchains, but only targeting ARM and AArch64. kernel.org has a set of pre-built toolchains for a wider range of architectures, but they are bare-metal toolchains (they cannot build Linux userspace programs) and are updated infrequently.
This web site provides a large number of cross-compilation toolchains, available for a wide range of architectures, in multiple variants. The toolchains are based on the classical combination of gcc, binutils and gdb, plus a C library. We currently provide a total of 138 toolchains, covering many combinations of:
Architectures: AArch64 (little and big endian), ARC, ARM (little and big endian, ARMv5, ARMv6, ARMv7), Blackfin, m68k (Coldfire and 68k), Microblaze (little and big endian), MIPS32 and MIPS64 (little and big endian, with various instruction set variants), NIOS2, OpenRISC, PowerPC and PowerPC64, SuperH, Sparc and Sparc64, x86 and x86-64, Xtensa
Versions: for each combination, we provide a stable version which uses slightly older but more proven versions of gcc, binutils and gdb, and we provide a bleeding edge version with the latest version of gcc, binutils and gdb.
After being generated, most of the toolchains are tested by building a Linux kernel and a Linux userspace, and booting it under Qemu, which verifies that the toolchain is minimally working. We plan on adding more tests to validate the toolchains, and welcome your feedback on this topic. Of course, not all toolchains can be tested this way, because some CPU architectures are not emulated by Qemu.
The toolchains are built with Buildroot, but can be used for any purpose: build a Linux kernel or bootloader, as a pre-built toolchain for your favorite embedded Linux build system, etc. The toolchains are available in tarballs, together with licensing information and instructions on how to rebuild the toolchain if needed.
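Using one of those tarballs typically looks like this; the tarball and tuple names below are made up for the example:

tar xf armv7-eabihf--glibc--stable.tar.bz2            # hypothetical tarball name
export PATH=$PWD/armv7-eabihf--glibc--stable/bin:$PATH
arm-linux-gcc -o hello hello.c                        # cross-compile a test program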
We are very much interested in your feedback about those toolchains, so do not hesitate to report bugs or make suggestions in our issue tracker!
This work was done as part of the internship of Florent Jacquet at Free Electrons.
Since 2006, we have provided a Linux source code cross-referencing online tool as a service to the community. The engine behind this website was LXR, a Perl project almost as old as the kernel itself. For the first few years, we used the then-current 0.9.5 version of LXR, but in early 2009 and for various reasons, we reverted to the older 0.3.1 version (from 1999!). In a nutshell, it was simpler and it scaled better.
Recently, we had the opportunity to spend some time on it, to correct a few bugs and to improve the service. After studying the Perl source code and trying out various cross-referencing engines (among which LXR 2.2 and OpenGrok), we decided to implement our own source code cross-referencing engine in Python.
Why create a new engine?
Our goal was to extend our existing service (support for multiple projects, responsive design, etc.) while keeping it simple and fast. When we tried other cross-referencing engines, we were dissatisfied with their relatively low performance on a large codebase such as Linux. Although we probably could have tweaked the underlying database engine for better performance, we decided it would be simpler to stick to the strategy used in LXR 0.3: get away from the relational database engine and keep plain lists in simple key-value stores.
Another reason that motivated a complete rewrite was that we wanted to provide an up-to-date reference (including the latest revisions) while keeping it immutable, so that external links to the source code wouldn’t get broken in the future. As a direct consequence, we would need to index many different revisions for each project, with potentially a lot of redundant information between them. That’s when we realized we could leverage the data model of Git to deal with this redundancy in an efficient manner, by indexing Git blobs, which are shared between revisions. In order to make sure queries under this strategy would be fast enough, we wrote a proof-of-concept in Python, and thus Elixir was born.
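Git itself makes this sharing easy to observe: a file that is identical in two tags is stored as a single blob, so it only needs to be indexed once. For example (blob ids elided):

git ls-tree v4.11 -- COPYING    # 100644 blob <sha> COPYING
git ls-tree v4.12 -- COPYING    # prints the same <sha> if the file did not change between the two tags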
What service does it provide?
First, we tried to minimize disruption to our users by keeping the user interface close to that of our old cross-referencing service. The main improvements are:
We now support multiple projects. For now, we provide a reference for Linux, Busybox and U-Boot.
Every tag in each project’s git repository is now automatically indexed.
The design has been modernized and now fits comfortably on smaller screens like tablets.
The URL scheme has been simplified and extended with support for multiple projects. An HTTP redirector has been set up for backward compatibility.
Among other smaller improvements, it is now possible to copy and paste code directly without line numbers getting in the way.
How does it work?
Elixir is made of two Python scripts: “update” and “query”. The first looks for new tags and new blobs inside a Git repository, parses them and appends the new references to identifiers to a record inside the database. The second uses the database and the Git repository to display annotated source code and identifier references.
The parsing itself is done with Ctags, which provides us with identifier definitions. In order to find the references to these identifiers, Elixir then simply checks each lexical token in the source file against the definition database, and if that word is defined, a new reference is added.
Like in LXR 0.3, the database structure is kept very simple so that queries don’t have much work to do at runtime, thus speeding them up. In particular, we store references to a particular identifier as a simple list, which can be loaded and parsed very fast. The main difference with LXR is that our list includes references from every blob in the project, so we need to restrict it first to only the blobs that are part of the current version. This is done at runtime, simply by computing the intersection of this list with the list of blobs inside the current version.
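As a rough command-line illustration of this intersection, assuming two sorted lists of blob identifiers in hypothetical files:

comm -12 blobs-referencing-ident.txt blobs-in-current-version.txt   # blobs that both reference the identifier and belong to this version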
Finally, we kept the user interface code clearly segregated from the engine itself by making these two modules communicate through a Unix command-line interface. This means that you can run queries directly on the command-line without going through the web interface.
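For instance, a command-line query might look like the following; the exact arguments are an assumption, so check the project's documentation:

./query.py v4.12 ident raw_spin_lock   # hypothetical invocation: where is raw_spin_lock defined and used in v4.12?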
Our current focus is on improving multi-project support. In particular, each project has its own quirky way of using Git tags, which needs to be handled individually.
At the user-interface level, we are evaluating the possibility of having auto-completion and/or fuzzy search of identifier names. Also, we are looking for a way to provide direct line-level access to references even in the case of very common identifiers.
On the performance front, we would like to cut the indexation time by switching to a new database back-end that provides efficient appending to large records. Also, we could make source code queries faster by precomputing the references, which would also allow us to eliminate identifier “bleeding” between versions (the case where an identifier shows up as “defined in 0 files” because it is only defined in another version).
If you think of other ways we could improve our service, don’t hesitate to drop us a feature request or a patch!
Bonus: why call it “Elixir”?
In the spur of the moment, it seemed like a nice pun on the name “LXR”. But in retrospect, we wish to apologize to the Elixir language team and the community at large for unnecessary namespace pollution.
Since April 2016, we have had our own automated testing infrastructure to validate the Linux kernel on a large number of hardware platforms. We use this infrastructure to contribute to the KernelCI project, which tests the Linux kernel every day. However, the tests done by KernelCI are really basic: mostly booting a minimal Linux system and checking that it reaches a shell prompt.
However, LAVA, the software component at the core of this testing infrastructure, can do a lot more than just basic tests.
The need for custom tests
With some of our engineers being Linux maintainers and given all the platforms we need to maintain for our customers, being able to automatically test specific features beyond a simple boot test was a very interesting goal.
In addition, manually testing a kernel change on a large number of hardware platforms can be really tedious. Being able to quickly send test jobs that will use an image you built on your machine can be a great advantage when you have some new code in development that affects more than one board.
We identified two main use cases for custom tests:
Automatic tests to detect regressions, as KernelCI does, but with more advanced tests, including platform-specific ones.
Manual tests executed by engineers to validate that the changes they are developing do not break existing features, on all platforms.
Our custom testing infrastructure relies on three main components:
An appropriate root filesystem, which contains the various userspace programs needed to execute the tests (benchmarking tools, validation tools, etc.)
A test suite, which contains various scripts executing the tests
A custom test tool that glues together the different components
The custom test tool knows all the hardware platforms available and which tests and kernel configurations apply to which hardware platforms. It identifies the appropriate kernel image, Device Tree, root filesystem image and test suite and submits a job to LAVA for execution. LAVA will download the necessary artifacts and run the job on the appropriate device.
Building custom rootfs
When it comes to testing specific drivers, dedicated testing, validation or benchmarking tools are sometimes needed. For example, for storage device testing, bonnie++ can be used, while iperf is nice for network testing. As the default root filesystem used by KernelCI is really minimalist, we need to build our own, one for each architecture we want to test.
Buildroot is a simple yet efficient tool to generate root filesystems; it is also used by KernelCI to build its minimalist root filesystems. We chose to use it and wrote custom configuration files to match our needs.
We ended up with custom root filesystems built for ARMv4, ARMv5, ARMv7 and ARMv8, which for now embed Bonnie++, iperf, ping (not the Busybox implementation) and other small tools that are not included in the default Buildroot configuration.
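As an illustration, the Buildroot configuration fragment for such a rootfs could look like the following; the symbol names are real Buildroot options, but the exact set shown here is an assumption, not our actual configuration:

BR2_arm=y                     # target architecture
BR2_PACKAGE_BONNIE=y          # bonnie++, for storage benchmarking
BR2_PACKAGE_IPERF=y           # network bandwidth testing
BR2_PACKAGE_IPUTILS=y         # full-featured ping, instead of the Busybox one
BR2_TARGET_ROOTFS_CPIO=y      # ramdisk image, convenient for LAVA jobs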
Our Buildroot fork that includes our custom configurations is available as the buildroot-ci Github project (branch ci).
The custom test tool
The custom test tool binds the different elements of the overall architecture together.
One of the main features of the tool is to send jobs. Jobs are text files used by LAVA to know what to do and with which device. As they are described in LAVA as YAML files (in version 2 of the API), it is easy to generate them from templates based on a single model. Some information is quite static, such as the Device Tree name for a given board or the rootfs version to use, while other details change with every job, such as the kernel to use or which test to run.
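As a sketch, a LAVA v2 job template could contain fragments like the ones below, where the {{ ... }} placeholders are filled in by the tool; this is an illustration under those assumptions, not our actual template, and the exact actions depend on the device:

device_type: {{ device_type }}
job_name: {{ test }} on {{ board }}
actions:
- deploy:
    to: tftp
    kernel:
      url: {{ kernel_url }}
    dtb:
      url: {{ dtb_url }}
    ramdisk:
      url: {{ rootfs_url }}
- boot:
    method: u-boot
    commands: ramdisk
- test:
    definitions:
    - from: git
      repository: {{ test_suite_repository }}
      path: {{ test_definition }}
      name: {{ test }}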
The tool is able to fetch the latest kernel images from KernelCI, to quickly send jobs without having to compile a custom kernel image. If you need to test a custom, locally built image instead, the tool can also send files to the LAVA server through SSH, to provide that custom kernel image.
The entry point of the tool is ctt.py, which makes it possible to create new jobs, providing a lot of options to define the various aspects of the job (kernel, Device Tree, root filesystem, test, etc.).
The test suite is a set of shell scripts that perform tests returning 0 or 1 depending on the result. This test suite is included inside the root filesystem by LAVA as part of a preparation step for each job.
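A minimal sketch of what such a script can look like, for a hypothetical network connectivity check:

#!/bin/sh
# Succeed (exit 0) if the default gateway answers to ping, fail (exit 1) otherwise.
GATEWAY=$(ip route | awk '/^default/ { print $3 }')
if ping -c 4 "$GATEWAY"; then
    exit 0
else
    exit 1
fi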
We currently have a small set of tests:
boot test, which simply returns 0. Such a test will be successful as soon as the boot succeeds.
crypto test, to do some minimal testing of cryptographic engines
usb test, to test USB functionality using mass storage devices
simple network test, that just validates network connectivity using ping
All those tests only require the target hardware platform itself. However, for more elaborate network tests, we needed two devices to interact with each other: the target hardware platform and a reference PC platform. For this, we use the LAVA MultiNode API. It makes it possible to have a test that spans multiple devices, which we use to run multiple iperf sessions to benchmark the bandwidth. This test therefore has one part running on the target device (network-board) and one part running on the reference PC platform (network-laptop).
Our current test suite is available as the test_suite Github project. It is obviously limited to just a few tests for now; we hope to extend it in the near future.
First use case: daily tests
As previously stated, it’s important for us to know about regressions introduced in the upstream kernel. Therefore, we have set up a simple daily cron job that:
Sends custom jobs to all boards to validate the latest mainline Linux kernel and the latest linux-next
Aggregates results from the past 24 hours and sends emails to subscribed addresses
Updates a dashboard that displays results in a very simple page
Second use case: manual tests
The custom test tool ctt.py has a simple command line interface. It’s easy for someone to set it up and send custom jobs. For example:
ctt.py -b beaglebone-black -m network
will start the network test on the BeagleBone Black, using the latest mainline Linux kernel built by KernelCI. On the other hand:
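ctt.py -b armada-7040-db armada-8040-db -m mmc --kernel Image --dtb-folder dtbs/   # the kernel and Device Tree options shown here are hypothetical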
will run the mmc test on the Marvell Armada 7040 and Armada 8040 development boards, using the locally built kernel image and Device Tree.
The result of the job is sent over e-mail when the test has completed.
Thanks to this custom test tool, we now have an infrastructure that leverages our existing lab and LAVA instance to execute more advanced tests. Our goal is now to increase the coverage, by adding more tests, and run them on more devices. Of course, we welcome feedback and contributions!
Over the last few releases, a significant number of improvements to QA-related tooling have been made in the Buildroot project. As an embedded Linux build system, Buildroot has a growing number of packages, and maintaining all of those packages is a challenge. Therefore, improving the infrastructure around Buildroot to make sure that packages are in good shape is very important. Below we provide a summary of the different improvements that have been made.
Very much like the Linux kernel has a checkpatch.pl script to help contributors validate their patches, Buildroot now has a check-package script that validates the coding style and checks for common errors in Buildroot packages. Contributors are encouraged to use it to avoid the common mistakes typically spotted during the review process.
check-package is capable of checking the .mk file, the Config.in file and the .hash file of packages, as well as the patches that apply to packages.
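For example, checking all the files of a given package is a single command (the tool lives in utils/ in current Buildroot):

./utils/check-package package/dropbear/*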
Contributed by Ricardo Martincoski, and written in Python, this tool will first appear in the next stable release 2017.05, to be published at the end of the month.
Buildroot’s page of package stats has been updated with a new column Warnings that lists the number of check-package issues to be fixed on each package. Not all packages have been fixed yet!
Besides coding style issues, another problem that the Buildroot community faces when accepting contributions of new packages or package updates, is that those contributions have rarely been tested on a large number of toolchain/architecture configurations. To help contributors in this testing, a test-pkg tool has been added.
Provided with a Buildroot configuration snippet that enables the package to be tested, the test-pkg tool will iterate over a number of toolchain/architecture configurations and make sure the package builds fine for all of them. The set of configurations being tested is the one used by the Buildroot autobuilder infrastructure, which makes sure that a package will not fail to build as soon as it is added to the tree. Contributors are therefore encouraged to use this tool for their package-related contributions.
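A typical session, with a hypothetical package name, looks like this:

echo 'BR2_PACKAGE_LIBFOO=y' > libfoo.config     # configuration snippet enabling the package
./utils/test-pkg -c libfoo.config -p libfoo     # build it against the reference toolchain configurations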
A very primitive version of this tool was originally contributed by Thomas Petazzoni, but it’s finally Yann E. Morin who took over, cleaned up the code, extended it and made it more generic, and contributed the final tool.
Runtime testing infrastructure
The Buildroot autobuilder infrastructure has been running for several years, and tests random configurations of Buildroot packages to make sure they build properly. This infrastructure allows the Buildroot developers to make sure that all combinations of packages build properly on all architectures, and has been a very useful tool to help increase Buildroot quality. However, this infrastructure does not perform any sort of runtime testing.
To address this, a new runtime testing infrastructure has recently been contributed to Buildroot by Thomas Petazzoni. Contrary to the autobuilder infrastructure, which tests random configurations, this runtime testing infrastructure tests a well-defined set of configurations and uses Qemu to make sure that they work properly. Located in support/testing, this test infrastructure currently has only 25 test cases, but we plan to extend this over time with more and more tests.
For example, the ISO9660 test makes sure that Buildroot is capable of building an ISO9660 image that boots properly under Qemu. The test suite validates that it works in different configurations: using either Grub, Grub2 or Syslinux as a bootloader, and using a root filesystem entirely contained in an initramfs or inside the ISO9660 filesystem itself.
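Individual test cases can be run with the run-tests tool from that directory; for example, to list the available tests and run the ISO9660 ones (the test case name is shown as an assumption):

./support/testing/run-tests -l                      # list the available test cases
./support/testing/run-tests tests.fs.test_iso9660   # run the ISO9660 tests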
Besides doing run-time tests of packages, this infrastructure will also allow us to test various core Buildroot functionalities. We also plan to have the tests executed on a regular basis on a CI infrastructure such as Gitlab CI.
DEVELOPERS file and e-mail notification
Back in the 2016.11 release, we added a top-level file called DEVELOPERS, which plays more or less the same role as the Linux kernel’s MAINTAINERS file: associate parts of Buildroot, especially packages, with developers interested in this area. Since Buildroot doesn’t have a concept of per-package maintainers, we decided to simply call the file DEVELOPERS.
Thanks to the DEVELOPERS file, we are able to:
Provide the get-developers tool, which parses patches and returns a list of e-mail addresses to which the patches should be sent, very much like the Linux kernel get_maintainer.pl tool (see the example after this list). This allows developers who have contributed a package to be notified when a patch is proposed for the same package, and to get the chance to review/test it.
Notify the developers of a package of build failures caused by this package in the Buildroot autobuilder infrastructure. So far, all build results of this infrastructure were simply sent every day to the mailing list, which made it impractical for individual developers to notice which build failures they should look at. Thanks to the DEVELOPERS file, we now send a daily e-mail individually to the developers whose packages are affected by build failures.
Notify the developers responsible for the support of a given CPU architecture of the build failures occurring on the autobuilders for this architecture, in a manner similar to what is done for packages.
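As an example of the first point, running get-developers on a patch prints a ready-to-use git send-email command; the patch name below is hypothetical, and the script location may differ between Buildroot versions:

./utils/get-developers 0001-libfoo-bump-to-version-1.2.patch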
Thanks to this, we have seen developers who were not regularly following the Buildroot mailing list once again contribute fixes for build failures caused by their packages, increasing the overall Buildroot quality. The DEVELOPERS file and the get-developers script, as well as the related improvements to the autobuilder infrastructure, were contributed by Thomas Petazzoni.
Build testing of defconfigs on Gitlab CI
Buildroot contains a number of example configurations, called defconfigs, for various hardware platforms, which make it possible to build a minimal embedded Linux system known to work on those platforms. At the time of this writing, Buildroot has 146 defconfigs for platforms ranging from popular development boards (Raspberry Pi, BeagleBone, etc.) to evaluation boards from SoC vendors and Qemu machine emulation.
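Building one of those defconfigs is the standard two-step Buildroot procedure; for example, for the Raspberry Pi 3:

make raspberrypi3_defconfig
make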
In order to make sure all those defconfigs build properly, we used to have a job running on Travis CI, but we started to face limitations, especially on the maximum allowed build duration. Therefore, Arnout Vandecappelle migrated this job to Gitlab CI, and things have been running smoothly since then.