Beyond boot testing: custom tests with LAVA

Since April 2016, we have had our own automated testing infrastructure to validate the Linux kernel on a large number of hardware platforms. We use this infrastructure to contribute to the KernelCI project, which tests the Linux kernel every day. The tests run by KernelCI are however fairly basic: they mostly consist of booting a minimal Linux system and checking that a shell prompt is reached.

LAVA, the software component at the core of this testing infrastructure, can in fact do a lot more than such basic tests.

The need for custom tests

Since some of our engineers are Linux kernel maintainers, and given all the platforms we need to maintain for our customers, being able to automatically test specific features beyond a simple boot test was a very appealing goal.

In addition, manually testing a kernel change on a large number of hardware platforms can be really tedious. Being able to quickly send test jobs that will use an image you built on your machine can be a great advantage when you have some new code in development that affects more than one board.

We identified two main use cases for custom tests:

  • Automated tests to detect regressions, as KernelCI does, but with more advanced tests, including platform-specific ones.
  • Manual tests executed by engineers to validate that the changes they are developing do not break existing features, on all platforms.

Overall architecture

Several tools are needed to run custom tests:

  • The LAVA instance, which controls the hardware platforms to be tested. See our previous blog posts on our testing hardware infrastructure and software architecture
  • An appropriate root filesystem, which contains the various userspace programs needed to execute the tests (benchmarking tools, validation tools, etc.)
  • A test suite, which contains various scripts executing the tests
  • A custom test tool that glues together the different components

The custom test tool knows all the hardware platforms available and which tests and kernel configurations apply to which hardware platforms. It identifies the appropriate kernel image, Device Tree, root filesystem image and test suite and submits a job to LAVA for execution. LAVA will download the necessary artifacts and run the job on the appropriate device.

Building custom rootfs

When it comes to testing specific drivers, dedicated testing, validation or benchmarking tools are sometimes needed. For example, Bonnie++ can be used for storage device testing, while iperf is useful for network testing. As the default root filesystem used by KernelCI is really minimalist, we needed to build our own, one for each architecture we want to test.

Buildroot is a simple yet efficient tool for generating root filesystems; it is also used by KernelCI to build its minimalist root filesystems. We chose to use it and wrote custom configuration files to match our needs.

We ended up with custom root filesystems built for ARMv4, ARMv5, ARMv7 and ARMv8, which for now embed Bonnie++, iperf, ping (not the Busybox implementation) and other small tools that aren't included in the default Buildroot configuration.

Our Buildroot fork that includes our custom configurations is available as the buildroot-ci GitHub project (branch ci).
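
Generating one of these root filesystems follows the usual Buildroot workflow; the defconfig name below is purely illustrative (the real configuration names live in the branch mentioned above):

$ git checkout ci                 # in a clone of our buildroot-ci fork
$ make our_armv7_ci_defconfig     # hypothetical name for one of our custom configurations
$ make
$ ls output/images/               # the generated root filesystem image ends up here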

The custom test tool

The custom test tool is the tool that binds the different elements of the overall architecture together.

One of the main features of the tool is to submit jobs. Jobs are text files used by LAVA to know what to do with which device. As they are described in LAVA as YAML files (in version 2 of the API), it is easy to use templates to generate them from a single model. Some information is quite static, such as the Device Tree name for a given board or the rootfs version to use, but other details change with every job, such as the kernel to use or which test to run.

We made the tool able to fetch the latest kernel images from KernelCI, so that jobs can be sent quickly without having to compile a custom kernel image. If you need to test a custom image built locally, the tool can also send your files to the LAVA server through SSH, to provide a custom kernel image.

The entry point of the tool is ctt.py, which allows creating new jobs and provides many options to define the various aspects of a job (kernel, Device Tree, root filesystem, test, etc.).

This tool is written in Python, and lives in the custom_tests_tool GitHub project.

The test suite

The test suite is a set of shell scripts that perform tests and return 0 or 1 depending on the result. This test suite is included inside the root filesystem by LAVA as part of a preparation step for each job.
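
To give an idea of what these scripts look like, here is a minimal sketch of a storage test in the same spirit; the script name, device node and bonnie++ parameters are illustrative and not copied from our actual test suite:

#!/bin/sh
# storage.sh (sketch): exercise a block device with bonnie++ and
# report success (0) or failure (1).
DEVICE=/dev/mmcblk0p1    # assumed device node, board-specific in practice
MOUNTPOINT=/mnt/test

mkdir -p "$MOUNTPOINT"
mount "$DEVICE" "$MOUNTPOINT" || exit 1
bonnie++ -d "$MOUNTPOINT" -u root -r 64 -s 128
RET=$?
umount "$MOUNTPOINT"
exit $RET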

We currently have a small set of tests:

  • boot test, which simply returns 0. Such a test will be successful as soon as the boot succeeds.
  • mmc test, to test MMC storage devices
  • sata test, to test SATA storage devices
  • crypto test, to do some minimal testing of cryptographic engines
  • usb test, to test USB functionality using mass storage devices
  • simple network test, that just validates network connectivity using ping

All those tests only require the target hardware platform itself. However, for more elaborate network tests, we needed two devices to interact with each other: the target hardware platform and a reference PC platform. For this, we use the LAVA MultiNode API. It allows a test to span multiple devices, which we use to run several iperf sessions and benchmark the bandwidth. This test therefore has one part running on the target device (network-board) and one part running on the reference PC platform (network-laptop).
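
As a rough sketch of how the two halves coordinate, the LAVA MultiNode helpers (lava-send, lava-wait, lava-sync) can be used to exchange the reference machine's IP address before starting iperf; the message and key names below are made up for the example, and the location of the MultiNode message cache may vary between LAVA versions:

# network-laptop (reference PC) side
lava-send server-ready ipaddr=$(hostname -I | cut -d' ' -f1)
iperf -s

# network-board (target) side
lava-wait server-ready
# assumption: lava-wait stores the received key=value pairs in this cache file
SERVER_IP=$(grep '^ipaddr=' /tmp/lava_multi_node_cache.txt | cut -d= -f2)
iperf -c "$SERVER_IP" -t 10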

Our current test suite is available as the test_suite GitHub project. It is obviously limited to just a few tests for now; we hope to extend it in the near future.

First use case: daily tests

As previously stated, it's important for us to know about regressions introduced in the upstream kernel. Therefore, we have set up a simple daily cron job (sketched below) that:

  • Sends custom jobs to all boards to validate the latest mainline Linux kernel and the latest linux-next
  • Aggregates results from the past 24 hours and sends emails to subscribed addresses
  • Updates a dashboard that displays results in a very simple page
A nice dashboard showing the tests of the Beaglebone Black and the Nitrogen6x.
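
The cron job itself is nothing sophisticated; conceptually, it boils down to an entry along these lines, where the wrapper script (whose name is made up here) submits the jobs through ctt.py, aggregates the results and regenerates the dashboard:

# /etc/cron.d/custom-tests (sketch, path and script name are hypothetical)
0 4 * * *  ci  /opt/custom_tests_tool/daily.sh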

Second use case: manual tests

The custom test tool ctt.py has a simple command line interface. It’s easy for someone to set it up and send custom jobs. For example:

ctt.py -b beaglebone-black -m network

will start the network test on the BeagleBone Black, using the latest mainline Linux kernel built by KernelCI. On the other hand:

ctt.py -b armada-7040-db armada-8040-db -t mmc --kernel arch/arm64/boot/Image --dtb-folder arch/arm64/boot/dts/

will run the mmc test on the Marvell Armada 7040 and Armada 8040 development boards, using the locally built kernel image and Device Tree.

The result of the job is sent over e-mail when the test has completed.

Conclusion

Thanks to this custom test tool, we now have an infrastructure that leverages our existing lab and LAVA instance to execute more advanced tests. Our goal is now to increase the coverage, by adding more tests, and run them on more devices. Of course, we welcome feedback and contributions!

Recent improvements in Buildroot QA

Over the last few releases, a significant number of improvements to QA-related tooling have been made in the Buildroot project. As an embedded Linux build system, Buildroot has a growing number of packages, and maintaining all of those packages is a challenge. Therefore, improving the infrastructure around Buildroot to make sure that packages are in good shape is very important. Below is a summary of the different improvements.


check-package script

Very much like the Linux kernel has a checkpatch.pl script to help contributors validate their patches, Buildroot now has a check-package script that validates the coding style and checks for common errors in Buildroot packages. Contributors are encouraged to use it to avoid the common mistakes typically spotted during the review process.

check-package is capable of checking the .mk file, the Config.in file and the .hash file of packages, as well as the patches that apply to them.
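
Using it is straightforward: from the top-level Buildroot source directory, point it at the files of the package you want to check. The package name below is just an example, and the script location within the tree may differ depending on the Buildroot version:

$ ./utils/check-package package/htop/*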

Contributed by Ricardo Martincoski, and written in Python, this tool will first appear in the next stable release 2017.05, to be published at the end of the month.

Buildroot’s page of package stats has been updated with a new column Warnings that lists the number of check-package issues to be fixed on each package. Not all packages have been fixed yet!

test-pkg script

Besides coding style issues, another problem that the Buildroot community faces when accepting contributions of new packages or package updates is that those contributions have rarely been tested on a large number of toolchain/architecture configurations. To help contributors with this testing, a test-pkg tool has been added.

Given a Buildroot configuration snippet that enables the package to be tested, the test-pkg tool will iterate over a number of toolchain/architecture configurations and make sure the package builds fine for all of them. The set of configurations being tested is the one used by the Buildroot autobuilder infrastructure, which allows us to make sure that a package will not fail to build as soon as it is added to the tree. Contributors are therefore encouraged to use this tool for their package-related contributions.
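
As a sketch of the intended workflow, you write a small configuration fragment enabling your package and hand it to the script; the fragment and package name below are hypothetical, and the exact option names may differ slightly between versions:

$ cat libfoo.config
BR2_PACKAGE_LIBFOO=y
$ ./utils/test-pkg -c libfoo.config -p libfoo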

A very primitive version of this tool was originally contributed by Thomas Petazzoni, but it was Yann E. Morin who finally took over, cleaned up the code, extended it, made it more generic and contributed the final tool.

Runtime testing infrastructure

The Buildroot autobuilder infrastructure has been running for several years, and tests random configurations of Buildroot packages to make sure they build properly. This infrastructure allows the Buildroot developers to make sure that all combinations of packages build properly on all architectures, and has been a very useful tool to help increase Buildroot quality. However, this infrastructure does not perform any sort of runtime testing.

To address this, a new runtime testing infrastructure has recently been contributed to Buildroot by Thomas Petazzoni. Contrary to the autobuilder infrastructure, which tests random configurations, this runtime testing infrastructure tests a well-defined set of configurations and uses Qemu to make sure that they work properly. Located in support/testing, this test infrastructure currently has only 25 test cases, but we plan to extend it over time with more and more tests.

For example, the ISO9660 test makes sure that Buildroot is capable of building an ISO9660 image that boots properly under Qemu. The test suite validates that it works in different configurations: using either Grub, Grub2 or Syslinux as a bootloader, and using a root filesystem entirely contained in an initramfs or inside the ISO9660 filesystem itself.
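
Running one of these test cases locally should boil down to invoking the test runner with the Python path of the test; the option and test case names below reflect our understanding of the infrastructure and may differ slightly:

$ ./support/testing/run-tests -d dl -o output_folder tests.fs.test_iso9660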

Besides doing run-time tests of packages, this infrastructure will also allow us to test various core Buildroot functionalities. We also plan to have the tests executed on a regular basis on a CI infrastructure such as Gitlab CI.

DEVELOPERS file and e-mail notification

Back in the 2016.11 release, we added a top-level file called DEVELOPERS, which plays more or less the same role as the Linux kernel’s MAINTAINERS file: associate parts of Buildroot, especially packages, with developers interested in this area. Since Buildroot doesn’t have a concept of per-package maintainers, we decided to simply call the file DEVELOPERS.

Thanks to the DEVELOPERS file, we are able to:

  • Provide the get-developers tool, which parses patches and returns the list of e-mail addresses to which the patches should be sent, very much like the Linux kernel get_maintainer.pl tool (see the example after this list). This allows developers who have contributed a package to be notified when a patch is proposed for the same package, and to get a chance to review/test it.
  • Notify the developers of a package of build failures caused by this package in the Buildroot autobuilder infrastructure. So far, all build results of this infrastructure were simply sent every day to the mailing list, which made it impractical for individual developers to notice which build failures they should look at. Thanks to the DEVELOPERS file, we now send a daily e-mail individually to the developers whose packages are affected by build failures.
  • Notify the developers interested in a given CPU architecture of build failures occurring on the autobuilders for this architecture, in a manner similar to what is done for packages.
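
For the first point, a typical invocation simply passes the patches to the script, which prints the list of people to put in copy; the patch file name is of course made up, and the script location may vary depending on the Buildroot version:

$ ./utils/get-developers 0001-package-libfoo-bump-to-1.2.patch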

Thanks to this, we have seen developers who were not regularly following the Buildroot mailing list once again contribute fixes for build failures caused by their packages, increasing the overall Buildroot quality. The DEVELOPERS file and the get-developers script, as well as the related improvements to the autobuilder infrastructure, were contributed by Thomas Petazzoni.

Build testing of defconfigs on Gitlab CI

Buildroot contains a number of example configurations, called defconfigs, for various hardware platforms, which allow building a minimal embedded Linux system known to work on those platforms. At the time of this writing, Buildroot has 146 defconfigs for platforms ranging from popular development boards (Raspberry Pi, BeagleBone, etc.) to evaluation boards from SoC vendors and Qemu machine emulation.

In order to make sure all those defconfigs build properly, we used to have a job running on Travis CI, but we started to face limitations, especially on the maximum allowed build duration. Therefore, Arnout Vandecappelle migrated this to Gitlab CI, and things have been running smoothly since then.

Introducing lavabo, board remote control software

In two previous blog posts, we presented the hardware and software architecture of the automated testing platform we have created to test the Linux kernel on a large number of embedded platforms.

The primary use case for this infrastructure was to participate in the KernelCI.org testing effort, which tests the Linux kernel every day on many hardware platforms.

However, since our embedded boards are now fully controlled by LAVA, we wondered if we could not only use our lab for KernelCI.org, but also provide remote control of our boards to Bootlin engineers, so that they can access development boards from anywhere. lavabo was born from this idea, and its goal is to allow full remote control of the boards, just as LAVA does it: interfacing with the serial port, controlling the power supply and providing files to the board using TFTP.

The advantages of being able to access the boards remotely are obvious: it allows engineers working from home to use their hardware platforms, avoids moving boards out of the lab and back each time an engineer wants to run a test, etc.

User’s perspective

From a user's point of view, lavabo is used through the eponymous lavabo command, which allows you to:

  • List the boards and their status
    $ lavabo list
  • Reserve a board for lavabo usage, so that it is no longer used for CI jobs
    $ lavabo reserve am335x-boneblack_01
  • Upload a kernel image and Device Tree blob so that it can be accessed by the board through TFTP
    $ lavabo upload zImage am335x-boneblack.dtb
  • Connect to the serial port of the board
    $ lavabo serial am335x-boneblack_01
  • Reset the power of the board
    $ lavabo reset am335x-boneblack_01
  • Power off the board
    $ lavabo power-off am335x-boneblack_01
  • Release the board, so that it can once again be used for CI jobs
    $ lavabo release am335x-boneblack_01

Overall architecture and implementation

The following diagram summarizes the overall architecture of lavabo (components in green) and how it connects with existing components of the LAVA architecture.

lavabo reuses LAVA tools and configuration files

A client-server software

lavabo follows the classical client-server model: the lavabo client is installed on the machines of users, while the lavabo server is hosted on the same machine as LAVA. The server side of lavabo is responsible for calling the right tools directly on the server machine and for making the right calls to LAVA's API. It controls the boards and interacts with the LAVA instance to reserve and release boards.

On the server machine, a specific Unix user is configured, through its .ssh/authorized_keys file, to automatically spawn the lavabo server program when someone connects. The lavabo client and server interact directly over their stdin/stdout, by exchanging JSON dictionaries. This interaction model was inspired by the Attic backup program. As a consequence, the lavabo server is not a background process that runs permanently like traditional daemons.

Handling serial connection

Exchanging JSON over SSH works fine to allow the lavabo client to provide instructions to the lavabo server, but it doesn’t work well to provide access to the serial ports of the boards. However, ser2net is already used by LAVA and provides a local telnet port for each serial port. lavabo simply uses SSH port-forwarding to redirect those telnet ports to local ports on the user’s machine.

Different ways to connect to the serial
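
What the lavabo serial command does conceptually amounts to the following, with made-up user, host and port values (ser2net exposes one telnet port per board on the server side):

$ ssh -N -L 2001:localhost:2001 lavabo@lab.example.com &   # forward the board's ser2net telnet port locally
$ telnet localhost 2001                                     # serial console of the board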

Interaction with LAVA

To use a board outside of LAVA, we have to interact with LAVA to tell it that the board cannot be used anymore. We therefore worked with the LAVA developers to add API endpoints for putting boards online (release) and offline (reserve), as well as an endpoint to get the current status of a board (busy, idle or offline).

These additions to the LAVA API are used by the lavabo server to reserve and release boards, so that there is no conflict between the CI-related jobs (such as the ones submitted by KernelCI.org) and the direct use of boards for remote development.

Interaction with the boards

Now that we know how the client and the server interact, and how the server communicates with LAVA, we need a way to know which boards are in the lab, on which port the serial connection of each board is exposed, and which commands control each board's power supply. All this configuration has already been given to LAVA, so the lavabo server simply reads the LAVA configuration files.

The last requirement is to provide files to the board, such as kernel images, Device Tree blobs, etc. Indeed, from a network point of view, the boards are located in a different subnet, not routed directly to the users' machines. LAVA already has a directory accessible through TFTP from the boards, which is one of the mechanisms it uses to serve files to boards. Therefore, the easiest and most obvious way is to send files from the client to the server and move them into this directory, which we implemented using SFTP.

User authentication

Since the serial port cannot be shared among several sessions, it is essential to guarantee that a board can only be used by one engineer at a time. In order to identify users, we have one SSH key per user in the .ssh/authorized_keys file on the server, each associated with a call to the lavabo-server program with a different username.
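
In other words, the authorized_keys file combines the per-user keys with SSH's forced-command mechanism; it looks roughly like the following, with made-up user names and truncated keys:

# ~/.ssh/authorized_keys of the dedicated Unix user (sketch)
command="lavabo-server alice" ssh-rsa AAAAB3Nza... alice@laptop
command="lavabo-server bob" ssh-rsa AAAAB3Nza... bob@workstation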

This allows us to identify who is reserving/releasing the boards, and make sure that serial port access, or requests to power off or reset the boards are done by the user having reserved the board.

For TFTP, the lavabo upload command automatically uploads files into a per-user sub-directory of the TFTP server. Therefore, when a file called zImage is uploaded, the board will access it over TFTP by downloading user/zImage.
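
From the board side, the uploaded files are then fetched like any other TFTP transfer, for example from a U-Boot prompt; the user name, environment variable names and file names below are placeholders and depend on the board:

=> tftpboot ${kernel_addr_r} alice/zImage
=> tftpboot ${fdt_addr_r} alice/am335x-boneblack.dtb
=> bootz ${kernel_addr_r} - ${fdt_addr_r}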

Availability and installation

As you could guess from our love for FOSS, lavabo is released under the GNU GPLv2 license in a GitHub repository. Extensive documentation is available if you’re interested in installing lavabo. Of course, patches are welcome!

Eight channels audio on i.MX7 with PCM3168

Bootlin engineer Alexandre Belloni recently worked on a custom carrier board for a Colibri iMX7 system-on-module from Toradex. This system-on-module obviously uses the i.MX7 ARM processor from Freescale/NXP.

While the module includes an SGTL5000 codec, one of the requirements for that project was to handle up to eight audio channels. The SGTL5000 uses I²S and handles only two channels.

I²S timing diagram from the SGTL5000 datasheet

Thankfully, the i.MX7 has multiple audio interfaces and one is fully available on the SODIMM connector of the Colibri iMX7. A TI PCM3168 was chosen for the carrier board and is connected to the second Synchronous Audio Interface (SAI2) of the i.MX7. This codec can handle up to 8 output channels and 6 input channels. It can accept multiple formats on its input, but TDM requires the smallest number of signals (4 signals: bit clock, word clock, data input and data output).


TDM timing diagram from the PCM3168 datasheet

The current Linux long term support version is 4.9 and was chosen for this project. It has support for both the i.MX7 SAI (sound/soc/fsl/fsl_sai.c) and the PCM3168 (sound/soc/codecs/pcm3168a.c). That’s two of the three components that are needed, the last one being the driver linking both by describing the topology of the “sound card”. In order to keep the custom code to the minimum, there is an existing generic driver called simple-card (sound/soc/generic/simple-card.c). It is always worth trying to use it unless something really specific prevents that. Using it was as simple as writing the following DT node:

        board_sound {
                compatible = "simple-audio-card";
                simple-audio-card,name = "imx7-pcm3168";
                simple-audio-card,widgets =
                        "Speaker", "Channel1out",
                        "Speaker", "Channel2out",
                        "Speaker", "Channel3out",
                        "Speaker", "Channel4out",
                        "Microphone", "Channel1in",
                        "Microphone", "Channel2in",
                        "Microphone", "Channel3in",
                        "Microphone", "Channel4in";
                simple-audio-card,routing =
                        "Channel1out", "AOUT1L",
                        "Channel2out", "AOUT1R",
                        "Channel3out", "AOUT2L",
                        "Channel4out", "AOUT2R",
                        "Channel1in", "AIN1L",
                        "Channel2in", "AIN1R",
                        "Channel3in", "AIN2L",
                        "Channel4in", "AIN2R";

                simple-audio-card,dai-link@0 {
                        format = "left_j";
                        bitclock-master = <&pcm3168_dac>;
                        frame-master = <&pcm3168_dac>;
                        frame-inversion;

                        cpu {
                                sound-dai = <&sai2>;
                                dai-tdm-slot-num = <8>;
                                dai-tdm-slot-width = <32>;
                        };

                        pcm3168_dac: codec {
                                sound-dai = <&pcm3168 0>;
                                clocks = <&codec_osc>;
                        };
                };

                simple-audio-card,dai-link@2 {
                        format = "left_j";
                        bitclock-master = <&pcm3168_adc>;
                        frame-master = <&pcm3168_adc>;

                        cpu {
                                sound-dai = <&sai2>;
                                dai-tdm-slot-num = <8>;
                                dai-tdm-slot-width = <32>;
                        };

                        pcm3168_adc: codec {
                                sound-dai = <&pcm3168 1>;
                                clocks = <&codec_osc>;
                        };
                };
        };

There are multiple things of interest:

  • Only 4 input channels and 4 output channels are routed, because that is all the carrier board has wired.
  • There are two DAI links because the pcm3168 driver exposes inputs and outputs separately
  • As per the PCM3168 datasheet:
    • left justified mode is used
    • dai-tdm-slot-num is set to 8 even though only 4 are actually used
    • dai-tdm-slot-width is set to 32 because the codec takes 24-bit samples but requires 32 clocks per sample (this is solved later in userspace)
    • The codec is the master, which is usually best for clock accuracy, especially since the various SoMs on the market almost never expose the audio clock on the carrier board interface. Here, a crystal was used to clock the PCM3168.

The PCM3168 codec is added under the ecspi3 node as that is where it is connected:

&ecspi3 {
        pcm3168: codec@0 {
                compatible = "ti,pcm3168a";
                reg = <0>;
                spi-max-frequency = <1000000>;
                clocks = <&codec_osc>;
                clock-names = "scki";
                #sound-dai-cells = <1>;
                VDD1-supply = <&reg_module_3v3>;
                VDD2-supply = <&reg_module_3v3>;
                VCCAD1-supply = <&reg_board_5v0>;
                VCCAD2-supply = <&reg_board_5v0>;
                VCCDA1-supply = <&reg_board_5v0>;
                VCCDA2-supply = <&reg_board_5v0>;
        };
};

#sound-dai-cells is what allows selecting between the input and output interfaces.

On top of that, multiple issues had to be fixed along the way.

Finally, an ALSA configuration file (/usr/share/alsa/cards/imx7-pcm3168.conf) was written to ensure that samples sent to the card are in the proper format, S32_LE. 24-bit samples will simply have zeroes in the least significant byte; for 32-bit samples, the codec will properly ignore the least significant byte. This file also declares that the first subdevice is the playback (output) device and that the second subdevice is the capture (input) device.

imx7-pcm3168.pcm.default {
	@args [ CARD ]
	@args.CARD {
		type string
	}
	type asym
	playback.pcm {
		type plug
		slave {
			pcm {
				type hw
				card $CARD
				device 0
			}
			format S32_LE
			rate 48000
			channels 4
		}
	}
	capture.pcm {
		type plug
		slave {
			pcm {
				type hw
				card $CARD
				device 1
			}
			format S32_LE
			rate 48000
			channels 4
		}
	}
}

On top of that, the dmix and dsnoop ALSA plugins can be used to separate channels.

To conclude, this shows that it is possible to easily leverage existing code to integrate an audio codec in a design by simply writing a device tree snippet and maybe an ALSA configuration file if necessary.

Feedback from the Netdev 2.1 conference

At Bootlin, we regularly work on networking topics as part of our Linux kernel contributions, so we decided to attend our very first Netdev conference this year in Montreal. With the recent evolution of the network subsystem and of its drivers' capabilities, the conference was a very good opportunity to stay up to date, thanks to lots of interesting sessions.

Eric Dumazet presenting “Busypolling next generation”

The speakers and the Netdev committee did an impressive job by offering such a great schedule and the recorded talks are already available on the Netdev Youtube channel. We particularly liked a few of those talks.

Distributed Switch Architecture – slides – video

Andrew Lunn, Vivien Didelot and Florian Fainelli presented DSA, the Distributed Switch Architecture, by giving an overview of what DSA is and then presenting its design. They completed their talk by discussing the future of this subsystem.

DSA in one slide

The goal of the DSA subsystem is to support Ethernet switches connected to the CPU through an Ethernet controller. The distributed part comes from the possibility of having multiple switches connected together through dedicated ports. DSA was introduced nearly 10 years ago but remained mostly quiet, and only recently came back to life thanks to contributions made by the authors of this talk, its maintainers.

The main idea of DSA is to reuse the available internal representations and tools to describe and configure the switches. Ports are represented as Linux network interfaces so that userspace can configure them using common tools, the Linux bridging concept is used for interface bridging and the Linux bonding concept for port trunks. A switch handled by DSA is not seen as a special device with its own control interface, but rather as a hardware accelerator for specific networking capabilities.

DSA has its own data plane, where the switch ports are slave interfaces and the Ethernet controller connected to the SoC is the master one. Tagging protocols are used to direct frames to a specific port when they come from the SoC, and to identify the source port when they are received from the switch. For example, the RX path has an extra check after netif_receive_skb() so that, if DSA is used, the frame's tag can be processed and the frame reinjected into the network stack RX flow.

Finally, they talked about the relationship between DSA and Switchdev, and cross-chip configuration for interconnected switches. They also exposed the upcoming changes in DSA as well as long term goals.

Memory bottlenecks – slides

As part of the network performance workshop, Jesper Dangaard Brouer presented memory bottlenecks in the allocators caused by specific network workloads, and how to deal with them. The SLAB/SLUB baseline performance is found to be too slow, particularly when using XDP. One way for a driver to work around this issue is to implement a custom page-recycling mechanism, which is what all high-speed drivers do. He then showed some data explaining why this mechanism is needed when targeting the 10G line-rate packet budget.

Jesper is working on a generic solution called page pool and sent a first RFC at the end of 2016. As mentioned in the cover letter, it’s still not ready for inclusion and was only sent for early reviews. He also made a small overview of his implementation.

DDOS countermeasures with XDP – slides #1 – slides #2 – video #1 – video #2

These two talks were given by Gilberto Bertin from Cloudflare and Martin Lau from Facebook. While they were not talking about device driver implementation or improvements in the network stack directly related to what we do at Bootlin, it was nice to see how XDP is used in production.

XDP, the eXpress Data Path, provides a programmable data path at the lowest point of the network stack, by processing RX packets directly out of the drivers' RX ring queues. It's quite new and is an answer to the many userspace-based solutions such as DPDK. Gilberto and Martin showed excellent results, confirming the usefulness of XDP.

From a driver point of view, some changes are required to support it: RX hooks must be added, some APIs must be adapted and the driver's memory model often needs to be updated. So far, in v4.10, only a few drivers support XDP.

XDP MythBusters – slides – video

David S. Miller, the maintainer of the Linux networking stack and drivers, did an interesting keynote about XDP and eBPF. The eXpress Data Path clearly was the hot topic of this Netdev 2.1 conference with lots of talks related to the concept and David did a good overview of what XDP is, its purposes, advantages and limitations. He also quickly covered eBPF, the extended Berkeley Packet Filters, which is used in XDP to filter packets.

This presentation was a comprehensive introduction to the concepts introduced by XDP and its different use cases.

Conclusion

Netdev 2.1 was an excellent experience for us. The conference was well organized, the single track format allowed us to see every session on the schedule, and meeting with attendees and speakers was easy. The content was highly technical and an excellent opportunity to stay up-to-date with the latest changes of the networking subsystem in the kernel. The conference hosted both talks about in-kernel topics and their use in userspace, which we think is a very good approach to not focus only on the kernel side but also to be aware of the users needs and their use cases.

Linux 4.11, Bootlin contributions

Linus Torvalds released Linux 4.11 this Sunday. For an overview of the new features provided by this release, one can read the coverage from LWN: part 1, part 2 and part 3. The KernelNewbies site also has a detailed summary of the new features.

With 137 patches contributed, Bootlin is the 18th contributing company according to the Kernel Patch Statistics. Bootlin engineer Maxime Ripard appears in the list of top contributors by changed lines in the LWN statistics.

Our most important contributions to this release have been:

  • Support for Atmel platforms
    • Alexandre Belloni improved suspend/resume support for the Atmel watchdog driver, I2C controller driver and UART controller driver. This is part of a larger effort to upstream support for the backup mode of the Atmel SAMA5D2 SoC.
    • Alexandre Belloni also improved the at91-poweroff driver to properly shutdown LPDDR memories.
    • Boris Brezillon contributed a fix for the Atmel HLCDC display controller driver, as well as fixes for the atmel-ebi driver.
  • Support for Allwinner platforms
    • Boris Brezillon contributed a number of improvements to the sunxi-nand driver.
    • Mylène Josserand contributed a new driver for the digital audio codec on the Allwinner sun8i SoC, as well as the corresponding Device Tree changes and related fixes. Thanks to this driver, Mylène enabled audio support on the R16 Parrot and A33 Sinlinx boards.
    • Maxime Ripard contributed numerous improvements to the sunxi-mmc MMC controller driver, to support higher data rates, especially for the Allwinner A64.
    • Maxime Ripard contributed official Device Tree bindings for the ARM Mali GPU, which allows the GPU to be described in the Device Tree of the upstream kernel, even if the ARM kernel driver for the Mali will never be merged upstream.
    • Maxime Ripard contributed a number of fixes for the rtc-sun6i driver.
    • Maxime Ripard enabled display support on the A33 Sinlinx board, by contributing a panel driver and the necessary Device Tree changes.
    • Maxime Ripard continued his clean-up effort, by converting the GR8 and sun5i clock drivers to the sunxi-ng clock infrastructure, and converting the sun5i pinctrl driver to the new model.
    • Quentin Schulz added a power supply driver for the AXP20X and AXP22X PMICs used on numerous Allwinner platforms, as well as numerous Device Tree changes to enable it on the R16 Parrot and A33 Sinlinx boards.
  • Support for Marvell platforms
    • Grégory Clement added support for the RTC found in the Marvell Armada 7K and 8K SoCs.
    • Grégory Clement added support for the Marvell 88E6141 and 88E6341 Ethernet switches, which are used in the Armada 3700 based EspressoBin development board.
    • Romain Perier enabled the I2C controller, SPI controller and Ethernet switch on the EspressoBin, by contributing Device Tree changes.
    • Thomas Petazzoni contributed a number of fixes to the OMAP hwrng driver, which turns out to also be used on the Marvell 7K/8K platforms for their HW random number generator.
    • Thomas Petazzoni contributed a number of patches for the mvpp2 Ethernet controller driver, preparing the future addition of PPv2.2 support to the driver. The mvpp2 driver currently only supports PPv2.1, the Ethernet controller used on the Marvell Armada 375, and we are working on extending it to support PPv2.2, the Ethernet controller used on the Marvell Armada 7K/8K. PPv2.2 support is scheduled to be merged in 4.12.
  • Support for RaspberryPi platforms
    • Boris Brezillon contributed Device Tree changes to enable the VEC (Video Encoder) on all bcm283x platforms. Boris had previously contributed the driver for the VEC.

In addition to our direct contributions, a number of Bootlin engineers are also maintainers of various subsystems in the Linux kernel. As part of this maintenance role:

  • Maxime Ripard, co-maintainer of the Allwinner ARM platform, reviewed and merged 85 patches from contributors
  • Alexandre Belloni, maintainer of the RTC subsystem and co-maintainer of the Atmel ARM platform, reviewed and merged 60 patches from contributors
  • Grégory Clement, co-maintainer of the Marvell ARM platform, reviewed and merged 42 patches from contributors
  • Boris Brezillon, maintainer of the MTD NAND subsystem, reviewed and merged 8 patches from contributors

Here is the detailed list of our contributions, commit by commit:

Buildroot 2017.02 released, Bootlin contributions

The 2017.02 version of Buildroot has been released recently, and as usual Bootlin has been a significant contributor to this release. A total of 1369 commits have gone into this release, contributed by 110 different developers.

Before looking in more details at the contributions from Bootlin, let’s have a look at the main improvements provided by this release:

  • The big announcement is that 2017.02 is going to be a long term support release, maintained with security and other important fixes for one year. This will allow companies, users and projects that cannot upgrade at each Buildroot release to have a stable Buildroot version to work with, coming with regular updates for security and bug fixes. A few fixes have already been collected in the 2017.02.x branch, and regular point releases will be published.
  • Several improvements have been made to support reproducible builds, i.e. the capability of having two builds of the same configuration produce the exact same bit-for-bit output. These are not yet sufficient to provide fully reproducible builds, but they are a piece of the puzzle, and more patches are pending for the next releases to move forward on this topic.
  • A package infrastructure for packages using the waf build system has been added. Seven packages in Buildroot are using this infrastructure currently.
  • Support for the OpenRISC architecture has been added, as well as improvements to the support of ARM64 (selection of ARM64 cores, possibility of building an ARM 32-bit system optimized for an ARM64 core).
  • The external toolchain infrastructure, which was all implemented in a single very complicated package, has been split into one package per supported toolchain and a common infrastructure. This makes it much easier to maintain.
  • A number of updates has been made to the toolchain components and capabilities: uClibc-ng bumped to 1.0.22 and enabled for ARM64, mips32r6 and mips64r6, gdb 7.12.1 added and switched to gdb 7.11 as the default, Linaro toolchains updated to 2016.11, ARC toolchain components updated to arc-2016.09, MIPS Codescape toolchains bumped to 2016.05-06, CodeSourcery AMD64 and NIOS2 toolchains bumped.
  • Eight new defconfigs for various hardware platforms have been added, including defconfigs for the NIOSII and OpenRISC Qemu emulation.
  • Sixty new packages have been added, and countless other packages have been updated or fixed.
Buildroot developers at work during the Buildroot Developers meeting in February 2017, after the FOSDEM conference in Brussels.

More specifically, the contributions from Bootlin have been:

  • Thomas Petazzoni has handled the release of the first release candidate, 2017.02-rc1, and merged 742 patches out of the 1369 commits merged in this release.
  • Thomas contributed the initial work for the external toolchain infrastructure rework, which has been taken over by Romain Naour and finally merged thanks to Romain’s work.
  • Thomas contributed the rework of the ARM64 architecture description, to allow building an ARM 32-bit system optimized for a 64-bit core, and to allow selecting specific ARM64 cores.
  • Thomas contributed the raspberrypi-usbboot package, which packages a host tool that allows booting a RaspberryPi system over USB.
  • Thomas fixed a large number of build issues found by the project autobuilders, contributing 41 patches to this effect.
  • Mylène Josserand contributed a patch to the X.org server package, fixing an issue with the i.MX6 OpenGL acceleration.
  • Gustavo Zacarias contributed a few fixes on various packages.

In addition, Bootlin sponsored the participation of Thomas in the Buildroot Developers meeting that took place after the FOSDEM conference in Brussels, in early February. A report of this meeting is available on the eLinux Wiki.

The details of Bootlin contributions:

Linux 4.10, Bootlin contributions

After 8 release candidates, Linus Torvalds released the final 4.10 Linux kernel last Sunday. A total of 13029 commits were made between 4.9 and 4.10. As usual, LWN had a very nice coverage of the major new features added during the 4.10 merge window: part 1, part 2 and part 3. The KernelNewbies Wiki has an updated page about 4.10 as well.

Out of these 13029 commits, 116 were made by Bootlin engineers, which interestingly is exactly the same number of commits we made for the 4.9 kernel release!

Our main contributions for this release have been:

  • For Atmel platforms, Alexandre Belloni added support for the securam block of the SAMA5D2, which is needed to implement backup mode, a deep suspend-to-RAM state for which we will be pushing patches over the next kernel releases. Alexandre also fixed some bugs in the Atmel dmaengine and USB gadget drivers.
  • For Allwinner platforms
    • Antoine Ténart enabled the 1-wire controller on the CHIP platform
    • Boris Brezillon fixed an issue in the NAND controller driver that prevented using 512-byte ECC chunks.
    • Maxime Ripard added support for the CHIP Pro platform from NextThing, together with the addition of many features to the underlying SoC, the GR8 from NextThing.
    • Maxime Ripard implemented audio capture support in the sun4i-i2s driver, bringing capture support to Allwinner A10 platforms.
    • Maxime Ripard added clock support for the Allwinner A64 to the sunxi-ng clock subsystem, and implemented numerous improvements for this subsystem.
    • Maxime Ripard reworked the pin-muxing driver on Allwinner platforms to use a new generic Device Tree binding, and deprecated the old platform-specific Device Tree binding.
    • Quentin Schulz added a MFD driver for the Allwinner A10/A13/A31 hardware block that provides ADC, touchscreen and thermal sensor functionality.
  • For the RaspberryPi platform
    • Boris Brezillon added support for the Video Encoder IP, which provides composite output. See also our recent blog post about our RaspberryPi work.
    • Boris Brezillon made a number of improvements to clock support on the RaspberryPi, which were needed for the Video Encoder IP support.
  • For the Marvell ARM platform
    • Grégory Clement enabled networking support on the Marvell Armada 3700 SoC, a Cortex-A53 based processor.
    • Grégory Clement did a large number of cleanups in the Device Tree files of Marvell platforms, fixing DTC warnings, and using node labels where possible.
    • Romain Perier contributed a brand new driver for the SPI controller of the Marvell Armada 3700, and therefore enabled SPI support on this platform.
    • Romain Perier extended the existing i2c-pxa driver to support the Marvell Armada 3700 I2C controller, and enabled I2C support on this platform.
    • Romain Perier extended the existing OMAP hardware random number generator driver to also be usable with the SafeXcel EIP76 from Inside Secure. This allows this driver to be used on the Marvell Armada 7K/8K SoCs.
    • Romain Perier contributed support for the Globalscale EspressoBin board, a low-cost development board based on the Marvell Armada 3700.
    • Romain Perier did a number of fixes to the CESA driver, used for the cryptographic engine found on 32-bit Marvell SoCs, such as Armada 370, XP or 38x.
    • Thomas Petazzoni fixed a bug in the mvpp2 network driver, currently only used on Marvell Armada 375, but in the process of being extended to be used on Marvell Armada 7K/8K as well.
  • As the maintainer of the MTD NAND subsystem, Boris Brezillon did a few cleanups in the Tango NAND controller driver, added support for the TC58NVG2S0H NAND chip, and improved the core NAND support to accommodate controllers that have some special timing requirements.
  • As the maintainer of the RTC subsystem, Alexandre Belloni did a number of small cleanups and improvements, especially to the jz4740 RTC driver.

Here is the detailed list of our commits to the 4.10 release:

Bootlin and Raspberry Pi Linux kernel upstreaming

For a few months, Bootlin has been helping the Raspberry Pi Foundation upstream a number of display-related features for the Raspberry Pi platform to the Linux kernel.

The main goal behind this upstreaming process is to get rid of the closed-source firmware that is used on non-upstream kernels every time you need to enable/access a specific hardware feature, and replace it by something that is both open-source and compliant with upstream Linux standards.

Eric Anholt has been working hard to upstream display related features. His biggest contribution has certainly been the open-source driver for the VC4 GPU, but he also worked on the display controller side, and we were contracted to help him with this task.

Our first objective was to add support for SDTV (composite) output, which turned out to be much easier than we imagined. As some of you might already know, the display controller of the Raspberry Pi already has a driver in the DRM subsystem. Our job was to add support for the SDTV encoder (also called VEC, for Video EnCoder). The driver was submitted just before the 4.10 merge window and surprisingly made it into 4.10 (see also the patches). Eric Anholt explained on his blog:

The Raspberry Pi Foundation recently started contracting with Bootlin to give me some support on the display side of the stack. Last week I got to review and release their first big piece of work: Boris Brezillon’s code for SDTV support. I had suggested that we use this as the first project because it should have been small and self contained. It ended up that we had some clock bugs Boris had to fix, and a bug in my core VC4 CRTC code, but he got a working patch series together shockingly quickly. He did one respin for a couple more fixes once I had tested it, and it’s now out on the list waiting for devicetree maintainer review. If nothing goes wrong, we should have composite out support in 4.11 (we’re probably a week late for 4.10).

Our second objective was to help Eric with HDMI audio support. The code was submitted on the mailing list two weeks ago and will hopefully be queued for 4.12. This time, we didn't write much code, since Eric had already done the bulk of the work. What we did, though, was debug the implementation to make it work. Eric also explained on his blog:

Probably the biggest news of the last two weeks is that Boris's native HDMI audio driver is now on the mailing list for review. I'm hoping that we can get this merged for 4.12 (4.10 is about to be released, so we're too late for 4.11). We've tested stereo audio so far, no compressed audio (though I think it should Just Work), and >2 channel audio should be relatively small amounts of work from here. The next step on HDMI audio is to write the alsalib configuration snippets necessary to hide the weird details of HDMI audio (stereo IEC958 frames required) so that sound playback works normally for all existing userspace, which Boris should have a bit of time to work on still.

On our side, it has been a great experience to work on such topics with Eric, and you should expect more contributions from Bootlin for the Raspberry Pi platform in the next months, so stay tuned!

Power measurement with BayLibre’s ACME cape

When working on optimizing the power consumption of a board, we need a way to measure its consumption. We recently bought an ACME cape from BayLibre to do just that.

Overview of the ACME

The ACME is an extension board for the BeagleBone Black, providing multi-channel power and temperature measurement capabilities. The cape itself has eight probe connectors, allowing multi-channel measurements. Probes for USB, Jack or HE10 connections can be bought separately, depending on the boards you want to monitor.


Last but not least, the ACME is fully open source, from the hardware to the software.

First setup

Ready-to-use pre-built images are available and can be flashed on an SD card. There are two different images: one acting as a standalone device and one providing an IIO capture daemon. While the latter can be used in automated farms, we chose the standalone image, which provides user-space tools to control the probes and is better suited to power consumption development topics.

The userspace of the standalone image can also be built manually using Buildroot, with a provided custom configuration and custom init scripts. The kernel should be built using a custom configuration, and the device tree needs to be patched.

Using the ACME

To control the probes and get the measured values, the Sigrok software is used. There is currently no support for sending data over the network. Because of this limitation, we need to access the BeagleBone Black shell through SSH and run our commands there.

We can display information about the detected probe by running:

# sigrok-cli --show --driver=baylibre-acme
Driver functions:
    Continuous sampling
    Sample limit
    Time limit
    Sample rate
baylibre-acme - BayLibre ACME with 3 channels: P1_ENRG_PWR P1_ENRG_CURR P1_ENRG_VOL
Channel groups:
    Probe_1: channels P1_ENRG_PWR P1_ENRG_CURR P1_ENRG_VOL
Supported configuration options across all channel groups:
    continuous: 
    limit_samples: 0 (current)
    limit_time: 0 (current)
    samplerate (1 Hz - 500 Hz in steps of 1 Hz)

The driver has four parameters (continuous sampling, sample limit, time limit and sample rate) and has one probe attached, with three channels (PWR, CURR and VOL). The acquisition parameters help configure data acquisition by setting sampling limits or rates. The rates are given in Hertz and should be within the 1 to 500 Hz range when using an ACME.

For example, to sample at 20Hz and display the power consumption measured by our probe P1:

# sigrok-cli --driver=baylibre-acme --channels=P1_ENRG_PWR \
      --continuous --config samplerate=20
FRAME-BEGIN
P1_ENRG_PWR: 1.000000 W
FRAME-END
FRAME-BEGIN
P1_ENRG_PWR: 1.210000 W
FRAME-END
FRAME-BEGIN
P1_ENRG_PWR: 1.210000 W
FRAME-END

Of course there are many more options as shown in the Sigrok CLI manual.

Beta image

A new image is being developed and will change the way the ACME is used. As it's already available in beta, we tested it (and didn't go back to the stable image). This new version aims to only use IIO to provide the probes' data, instead of having a custom Sigrok driver. The main advantage is that many software tools are IIO aware, or will be, as it's the standard way to use this kind of sensor with the Linux kernel. Last but not least, IIO provides ways to communicate over the network.

A new webpage with information on how to use the beta image is available at https://baylibre-acme.github.io. This image isn't compatible with the current stable one, which we described previously.

The first nice thing to notice when using the beta image is the Bonjour support, which lets us communicate with the board effortlessly:

$ ping baylibre-acme.local

A new tool, acme-cli, is provided to switch the probes on or off as needed. To switch the first probe on or off:

$ ./acme-cli switch_on 1
$ ./acme-cli switch_off 1

We do not need any additional custom software to use the board, as the sensor data is available through the IIO interface. This means we should be able to use any IIO-aware tool to gather the power consumption values (a quick example follows the list below):

  • Sigrok, on the laptop/machine this time as IIO is able to communicate over the network;
  • libiio/examples, which provides the iio-monitor tool;
  • iio-capture, which is a fork of iio-readdev designed by BayLibre for an integration into LAVA (automated tests);
  • and many more.
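
For instance, with the standard libiio command line tools and the Bonjour name seen above, the ACME's sensors can already be listed and read over the network; the probe device name is a placeholder that depends on what is plugged in:

$ iio_info -n baylibre-acme.local                      # list the IIO devices exported by the ACME
$ iio_readdev -n baylibre-acme.local <probe-device>    # stream samples from one of the probes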

Conclusion

We didn’t use all the possibilities offered by the ACME cape yet but so far it helped us a lot when working on power consumption related topics. The ACME cape is simple to use and comes with a working pre-built image. The beta image offers the IIO support which improved the usability of the device, and even though it’s in a beta version we would recommend to use it.