Configuring ALSA controls from an application

A common task when handling audio on Linux is the need to modify the configuration of the sound card, for example adjusting the output volume or selecting the capture channels. On an embedded system, it can be enough to set the controls once using alsamixer or amixer and then save the configuration with alsactl store. This saves the driver state to a configuration file, by default /var/lib/alsa/asound.state. Once done, this file can be included in the build system and shipped with the root filesystem. Most distributions already include a script that invokes alsactl at boot time to restore the settings. If that is not the case, it is simply a matter of calling alsactl restore.
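
For example, the typical workflow to prepare such a static configuration looks like this (card 0 is assumed here):

# alsamixer -c 0        # adjust the controls interactively
# alsactl store 0       # save them to /var/lib/alsa/asound.state
# alsactl restore 0     # restore them, typically from a boot script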

However, defining a static configuration may not be enough. For example, some codecs have advanced routing features that allow routing the audio channels to different outputs, and the application may want to decide at runtime where the audio should go.

Instead of invoking amixer using system(3), it is possible, although not straightforward, to use the alsa-lib API directly to set controls.

Let’s start with some required includes:

#include <stdio.h>
#include <alsa/asoundlib.h>

alsa/asoundlib.h is the header of interest here, as it is where the ALSA API lies. Then we define an id lookup function, which is actually the tricky part. Each control has a unique identifier, and to be able to manipulate controls, it is necessary to find this unique identifier. In our sample application, we will be using the control name to do the lookup.

int lookup_id(snd_ctl_elem_id_t *id, snd_ctl_t *handle)
{
	int err;
	snd_ctl_elem_info_t *info;
	snd_ctl_elem_info_alloca(&info);

	snd_ctl_elem_info_set_id(info, id);
	if ((err = snd_ctl_elem_info(handle, info)) < 0) {
		fprintf(stderr, "Cannot find the given element from card\n");
		return err;
	}
	snd_ctl_elem_info_get_id(info, id);

	return 0;
}

This function allocates a snd_ctl_elem_info_t and sets its id to the one passed as the first argument. At this point, the id only includes the control interface type and its name, but not its unique id. The snd_ctl_elem_info() function looks up the element on the sound card whose handle has been passed as the second argument. Then snd_ctl_elem_info_get_id() updates id with the now completely filled-in id.

Then the controls can be modified as follows:

int main(int argc, char *argv[])
{
	int err;
	snd_ctl_t *handle;
	snd_ctl_elem_id_t *id;
	snd_ctl_elem_value_t *value;
	snd_ctl_elem_id_alloca(&id);
	snd_ctl_elem_value_alloca(&value);

This declares and allocates the necessary variables. Allocations are done using alloca so it is not necessary to free them as long as the function exits at some point.

	if ((err = snd_ctl_open(&handle, "hw:0", 0)) < 0) {
		fprintf(stderr, "Card open error: %s\n", snd_strerror(err));
		return err;
	}

Get a handle on the sound card, in this case, hw:0 which is the first sound card in the system.

	snd_ctl_elem_id_set_interface(id, SND_CTL_ELEM_IFACE_MIXER);
	snd_ctl_elem_id_set_name(id, "Headphone Playback Volume");
	if ((err = lookup_id(id, handle)))
		return err;

This sets the interface type and name of the control we want to modify, and then calls the lookup function.

	snd_ctl_elem_value_set_id(value, id);
	snd_ctl_elem_value_set_integer(value, 0, 55);
	snd_ctl_elem_value_set_integer(value, 1, 77);

	if ((err = snd_ctl_elem_write(handle, value)) < 0) {
		fprintf(stderr, "Control element write error: %s\n",
			snd_strerror(err));
		return err;
	}

Now, this changes the value of the control. snd_ctl_elem_value_set_id() sets the id of the control to be changed then snd_ctl_elem_value_set_integer() sets the actual value. There are multiple calls because this control has multiple members (in this case, left and right channels). Finally, snd_ctl_elem_write() commits the value.

Note that snd_ctl_elem_value_set_integer() is called directly because we know this control is an integer, but it is actually possible to query what kind of value should be used, using snd_ctl_elem_info_get_type() on the snd_ctl_elem_info_t. The scale of the integer is also device-specific and can be retrieved with the snd_ctl_elem_info_get_min(), snd_ctl_elem_info_get_max() and snd_ctl_elem_info_get_step() helpers.
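
As an illustration, here is a hedged sketch of such a query, reusing the same info lookup pattern as in lookup_id() (error handling shortened):

	snd_ctl_elem_info_t *info;
	snd_ctl_elem_info_alloca(&info);

	snd_ctl_elem_info_set_id(info, id);
	if ((err = snd_ctl_elem_info(handle, info)) < 0)
		return err;

	/* Only integer controls have a min/max/step */
	if (snd_ctl_elem_info_get_type(info) == SND_CTL_ELEM_TYPE_INTEGER)
		printf("range: %ld..%ld, step: %ld\n",
		       snd_ctl_elem_info_get_min(info),
		       snd_ctl_elem_info_get_max(info),
		       snd_ctl_elem_info_get_step(info));

The example then continues with a second control, the playback switch: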

	snd_ctl_elem_id_clear(id);
	snd_ctl_elem_id_set_interface(id, SND_CTL_ELEM_IFACE_MIXER);
	snd_ctl_elem_id_set_name(id, "Headphone Playback Switch");
	if ((err = lookup_id(id, handle)))
		return err;

	snd_ctl_elem_value_clear(value);
	snd_ctl_elem_value_set_id(value, id);
	snd_ctl_elem_value_set_boolean(value, 1, 1);

	if ((err = snd_ctl_elem_write(handle, value)) < 0) {
		fprintf(stderr, "Control element write error: %s\n",
			snd_strerror(err));
		return err;
	}

This unmutes the right channel of the Headphone playback; this time, the control is a boolean. The other common kind of element is SND_CTL_ELEM_TYPE_ENUMERATED, for enumerated contents. This is used for channel muxing or selecting de-emphasis values, for example. snd_ctl_elem_value_set_enumerated() then has to be used to set the selected item.

	return 0;
}

This concludes this simple example, which should be enough to get you started writing smarter applications that don’t rely on external programs to configure the sound card controls.

Bootlin at the Embedded Linux Conference 2020

Bootlin has been a participant at the Embedded Linux Conference for many years, and despite the special conditions this year, we will again be participating in this online event, from June 29 to July 1.

More specifically:

  • Bootlin engineer and audio expert Alexandre Belloni will give a talk, ASoC: supporting Audio on an Embedded Board, which presents how audio complexes in embedded devices are typically supported by the Linux kernel ALSA System-on-Chip framework. This talk takes place on June 29 at 2:05 PM UTC-5.
  • Bootlin engineer and CTO Thomas Petazzoni will give his usual Buildroot: what’s new? talk, giving an update on the latest developments and improvements of the Buildroot project. This talk takes place on July 1 at 11:15 AM UTC-5.
  • The vast majority of the Bootlin engineering team will be attending many of the talks proposed during the event. Bootlin has been offering all its engineers participation in two conferences a year: with the Embedded Linux Conference going virtual, we’ve simply allowed all our engineers to participate, with no restriction. This is part of Bootlin’s policy to ensure our engineers stay as up to date as possible with embedded Linux technologies.
  • Bootlin CTO Thomas Petazzoni was once again part of the program committee for this edition of the Embedded Linux Conference; as part of this committee, he reviewed and selected the talks that were submitted.

We are interested in seeing how this virtual version of the Embedded Linux Conference will compare to the traditional physical event. For many old-timers at these conferences, the most useful part of a conference is the hallway track and all the side discussions, meetings and dinners with members of the embedded Linux community, and a virtual version makes such interactions more challenging.

In any case, we hope you’ll enjoy the conference! Don’t hesitate to join us in the Q&A session after our talks, or on the 2-track-embedded-linux room of the Slack workspace set up for the event by the Linux Foundation.

New feature highlights in Elixir Cross Referencer v2.0 and v2.1

The 2.1 release of the Elixir Cross Referencer is now live on https://elixir.bootlin.com/.

Development of new features has accelerated in recent months, thanks to contributions from Tamir Carmeli (Github), Chris White (Github) and Maxime Chrétien (Github), who was hired at Bootlin as an intern. I am going to describe the most important new features from these contributors, but all three of them actually made many smaller contributions to many aspects of Elixir.

So, here are the important new features you can now find in Elixir…

Support for symbol documentation

Thanks to Chris White, when you search for a function, you can now see where it is documented, at least when it is documented in the Linux kernel way, with documentation extracted from comments in the sources.

Symbol documentation in Elixir

This way, when documentation is available, you can immediately know the meaning and expected values of the parameters of a given function and its return value.

Support for Kconfig symbols

Maxime Chrétien has extended Elixir to support kernel configuration parameters. Actually, he contributed a new parser to the universal-ctags project to do so. This way, you can explore C sources and Kconfig files and find the declarations and uses of kernel parameters:

Elixir Kconfig symbols

Now, every time we mention a kernel configuration parameter in our free training materials, we can provide an Elixir link to them. Here is an example for CONFIG_SQUASHFS. Don’t hesitate to use such links in your documents and e-mails about the Linux kernel!

Note that you also have Kconfig symbol links in defconfig files, allowing you to understand non-default kernel configuration settings for a given SoC family or board. See this example.

Support for Device Tree aliases

Maxime Chrétien also extended Elixir to support Device Tree labels. This way, when you explore a Device Tree source file and see a reference (phandle) to such a label, you can easily find where it’s defined and what the default properties of the corresponding node are.

Elixir Device Tree Source symbols

Following these extensions to Elixir to support new scopes for symbols, we extended the interface to allow searching for symbols either in specific contexts (C, Kconfig or DT), or in all contexts. In most cases, a single context will suffice, but we nevertheless offer a mode to perform searches in all contexts at the same time:

Elixir support for multiple symbol contexts

Support for Device Tree compatible strings

v2.1 of the Elixir Cross Referencer also adds support for Device Tree compatible strings, also contributed by Maxime Chrétien. When browsing Device Tree files, you can instantly find which drivers can be bound to the corresponding devices, which properties such drivers require from such devices (as specified in the Device Tree bindings), and other Device Tree files using the same compatible string.

Elixir device tree compatible links
Elixir device tree compatible string search results

Symbol auto-completion in the search dialog

Elixir Cross Referencer v2.1 also features symbol search autocompletion, another capability implemented by Maxime Chrétien. This makes it easy to find Linux kernel function names while programming!

Elixir symbol autocompletion feature

Pygments support for Device Tree source files

In addition to this improvement for Device Tree indexing, Maxime has also contributed a new lexer to the Pygments project, which is used by Elixir for HTML syntax highlighting for all types of files.

REST API

Thanks to Tamir Carmeli, it’s now possible to access the Elixir database through a new REST API, instead of going through its web interface. This way, you can make Elixir queries from data processing scripts, for example.
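
As a quick sketch of how this can be scripted (the /api/ident route and its parameters below are assumptions based on the v2.1 release; check the project documentation for the authoritative description):

$ curl -s 'https://elixir.bootlin.com/api/ident/linux/clk_prepare_enable?version=v5.6'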

Testing infrastructure

Chris White has implemented an extensive testing infrastructure to quickly detect regressions before the corresponding changes are applied to production servers. Tamir Carmeli also contributed a test system for the REST API. Thanks to this, each new commit is tested on Travis CI.

Parallel build for the Elixir database

Maxime Chrétien has managed to multithread the indexing work. While Maxime is still exploring further options, this has already roughly halved the indexing time.

Limitations

The main limitation of the Elixir Cross Referencer is that it doesn’t try to match any context. For example, the actual implementation of a symbol may depend on the value of a configuration option. When browsing a source file, Elixir also always links to all possibilities for each symbol (there can be multiple unrelated instances of the same symbol across the kernel sources) instead of narrowing the search to the definition corresponding to the currently browsed file. Elixir leaves it up to the human user to find out which result matches the context of origin.

This is particularly true for Device Tree symbols that have unrelated occurrences everywhere in the source tree, such as i2c0. In a distant future, we may be able to restrict the search to the context of an originating file.

Contribute

If you have new ideas for extending the Elixir Cross Referencer to support more features and use cases, please share them on the project’s bug tracker. If they are feasible without compromising the relative simplicity and scalability of our engine, we will be happy to implement them!

Practical usage of timer counters in Linux, illustrated on Microchip platforms

Virtually all micro-controllers and micro-processors provide some form of timer counters. In the context of Linux, they are always used for kernel timers, but they can also sometimes be used for PWMs, or for input capture devices able to measure external signals such as rotary encoders. In this blog post, we would like to illustrate how Linux can take advantage of such timer counters, by taking the example of the Microchip Timer Counter Block, and describe how its various features fit into existing Linux kernel subsystems.

Hardware overview

On Microchip ARM processors, the TCB (Timer Counter Block) module is a set of three independent 16-bit or 32-bit channels, as illustrated in this simplified block diagram:

Microchip TCB

The exact number of TCB modules depends on which Microchip processor you’re using; this Microchip brochure gives the details. Most products have 6 or 9 timer counter channels available, grouped into two or three TCB modules of 3 channels each.

Each TC channel can independently select a clock source for its counter:

  • Internal Clock: sourced from either the system bus clock (often the highest-rated one, with pre-defined divisors) or the slow clock (crystal oscillator); on the Microchip SAMA5D2 and SAM9X60 SoC series, there is even a programmable generic clock source (GCLK) specific to each peripheral.
  • External Clock: based on the three available external input pins: TCLK0, TCLK1 or TCLK2.

The clock source choice should obviously be made depending on the accuracy required by the application.

The module offers many functions, organized in three different modes:

  • The input capture mode is useful to capture input signals (e.g. measuring a signal period) through one of the six input pins (TIOAx/TIOBx) connected to each TC module. Each pin can act as a trigger source for the counter, and two latch registers RA/RB can be loaded and compared with a third RC register. This mode is highly configurable, with lots of features to fine-tune the capture (subsampling, clock inverting, interrupts, etc.).
  • The waveform mode provides the core function of TCs, as all channels can be used as three independent free-running counters. It is also the mode used to generate PWM signals, which gives an extra pool of PWMs.
  • The quadrature mode is only supported on the first TC module, TCB0, and requires two (or three) channels: channel 0 decodes the speed or position on TIOA0/TIOB0, channel 1 (with the TIOB1 input) can be configured to store the revolutions or number of rotations, and finally, if speed measurement is configured, channel 2 defines a speed time base. Something important to note is that this mode is only available on the Microchip SAMA5 and SAM9X60 SoC families.

Software overview

On the software side in the Linux kernel, the different functionalities offered by the Microchip TCBs will be handled by three different subsystems, which we cover in the following sections.

Clocksource subsystem

This subsystem is the core target of any TC module as it allows the kernel to keep track of the time passing (clocksource) and program timer interrupts (clockevents). The Microchip TCB has its upstream implementation in drivers/clocksource/timer-atmel-tcb.c that uses the waveform mode to provide both clock source and clock events. The older Microchip platforms have only 16-bit timer counters, in which case two channels are needed to implement the clocksource support. Newer Microchip platforms have 32-bit timer counters, and in this case only one channel is needed to implement clocksource. In both cases, only one channel is necessary to implement clock events.

In the timer-atmel-tcb driver:

  • The clocksource is registered using a struct clocksource structure, which mainly provides a ->read() callback to read the current cycle count.
  • The clockevents device is registered using a struct tc_clkevt_device structure, which provides callbacks to set the date of the next timer event (->set_next_event()) and to change the mode of the timer (->set_state_shutdown(), ->set_state_periodic(), ->set_state_oneshot()).

From a user-space point of view, the clocksource and clockevents subsystems are not directly visible, but they are of course used whenever one uses time or timer related functions. The available clockevents are visible in /sys/bus/clockevents and the available clocksources are visible in /sys/bus/clocksource. The file /proc/timer_list also gives a lot of information about the timers that are pending, and the available timer devices on the platform.
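
For example, on a platform where the TCB provides the clocksource, one would expect something like this (tcb_clksrc is the name registered by the timer-atmel-tcb driver):

# cat /sys/bus/clocksource/devices/clocksource0/current_clocksource
tcb_clksrc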

PWM subsystem

This subsystem is useful for many applications (fan control, LEDs, beepers, etc.), and provides both an in-kernel API for other kernel drivers to use, and a user-space API in /sys/class/pwm, documented at https://www.kernel.org/doc/html/latest/driver-api/pwm.html.

As far as PWM functionality is concerned, the Microchip TCB module is supported by the driver at drivers/pwm/pwm-atmel-tcb.c, which also uses the waveform mode. In this mode, both channel pins TIOAx/TIOBx can be used to output PWM signals, which allows providing up to 6 PWM outputs per TCB. At a high level, this PWM driver registers a struct pwm_ops structure that provides pointers to the important callbacks to set up and configure PWM outputs.

The current driver implementation has the drawback of using an entire TCB module as a PWM chip: it is not possible to use one channel of a TCB module for PWM, and the other channels of the same TCB module for other functionality. On platforms that have only two TCB modules, this means that the first TCB module is typically used for the clockevents/clocksource functionality described previously, and therefore only the second TCB module can be used for PWM.

We are however working on lifting this limitation: Bootlin engineer Alexandre Belloni has a patch series at https://github.com/alexandrebelloni/linux/commits/at91-tcb to address this. We aim at submitting this patch series in the near future.

Thanks to the changes of this patch series, we will be able to use PWM channels as follows:

  • Configuring a 100 kHz PWM signal on TIOAx:
    # echo 0 > /sys/class/pwm/pwmchip0/export
    # echo 10000 > /sys/class/pwm/pwmchip0/pwm0/period
    # echo 1000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle
    # echo 1 > /sys/class/pwm/pwmchip0/pwm0/enable
    
  • Configuring a 100 kHz PWM signal on TIOBx:
    # echo 1 > /sys/class/pwm/pwmchip0/export
    # echo 10000 > /sys/class/pwm/pwmchip0/pwm1/period
    # echo 1000 > /sys/class/pwm/pwmchip0/pwm1/duty_cycle
    # echo 1 > /sys/class/pwm/pwmchip0/pwm1/enable
    
  • One must note that both PWM signals of the same channel share the same period, even though we set it twice here, as required by the PWM framework. The Microchip TCB takes the period from the RC register, and the RA/RB registers respectively set the TIOAx/TIOBx duty cycles.

Counter subsystem

The Linux kernel counter subsystem, located in drivers/counter/, is much newer than the clocksource, clockevents and PWM subsystems described previously. Indeed, it was only added to the Linux kernel in 2019, and so far it contains only 5 drivers. This subsystem abstracts a timer counter as three entities: a Count that stores the value incremented or decremented from a measured input Signal, and a Synapse that provides the edge-based trigger source.

This subsystem was therefore very relevant to expose the input capture and quadrature decoder modes of the Microchip TCB module, and we recently submitted a patch series that implements a counter driver for the Microchip TCB module. The driver instantiates and registers a struct counter_device structure, with a variety of sub-structures and callbacks that allow the core counter subsystem to use the Microchip TCB module and expose its input capture and quadrature decoder features to user-space.

The current user-space interface of the counter subsystem works over sysfs and is documented at https://www.kernel.org/doc/html/latest/driver-api/generic-counter.html. For example, to read the position of a rotary encoder connected to a TCB module configured as a quadrature decoder, one would do:

# cd /sys/bus/counter/devices/counter0/count0/
# echo "quadrature x4" > function
# cat count
0

However, when the device connected to the TCB is a rotary encoder, it would be much more useful to have it exposed to user-space as a standard input device, so that all existing graphical libraries and frameworks can automatically make use of it. Rotary encoders connected to GPIOs can already be exposed to user-space as input devices using the rotary_encoder driver. Our goal was to achieve the same, but with a rotary encoder connected to a quadrature decoder handled by the counter subsystem. To this end, we submitted a second patch series, which:

  1. Extends the counter subsystem with an in-kernel API, so that counter devices can not only be used from user-space using sysfs, but also from other kernel subsystems. This is very much like the IIO in-kernel API, which is used in a variety of other kernel subsystems that need access to IIO devices.
  2. Adds a new rotary-encoder-counter driver, which implements an input device based on a counter device configured in quadrature decoder mode.

Thanks to this driver, we get an input device for our rotary encoder, which can for example be tested using evtest to decode the input events that occur when rotating the rotary encoder:

# evtest /dev/input/event1
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x0 product 0x0 version 0x0
Input device name: "rotary@0"
Supported events:
  Event type 0 (EV_SYN)
  Event type 2 (EV_REL)
    Event code 0 (REL_X)
Properties:
Testing ... (interrupt to exit)
Event: time 1325392910.906948, type 2 (EV_REL), code 0 (REL_X), value 2
Event: time 1325392910.906948, -------------- SYN_REPORT ------------
Event: time 1325392911.416973, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1325392911.416973, -------------- SYN_REPORT ------------
Event: time 1325392913.456956, type 2 (EV_REL), code 0 (REL_X), value 2
Event: time 1325392913.456956, -------------- SYN_REPORT ------------
Event: time 1325392916.006937, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1325392916.006937, -------------- SYN_REPORT ------------
Event: time 1325392919.066977, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1325392919.066977, -------------- SYN_REPORT ------------
Event: time 1325392919.576988, type 2 (EV_REL), code 0 (REL_X), value 2
Event: time 1325392919.576988, -------------- SYN_REPORT ------------

Device Tree

From a Device Tree point of view, the representation is a bit more complicated than for many other hardware blocks, due to the multiple features offered by timer counters. First of all, in the .dtsi file describing the system-on-chip, we have a node that describes each TCB module. For example, for the Microchip SAMA5D2 system-on-chip, which has two TCB modules, we have in arch/arm/boot/dts/sama5d2.dtsi:

tcb0: timer@f800c000 {
	compatible = "atmel,at91sam9x5-tcb", "simple-mfd", "syscon";
	#address-cells = <1>;
	#size-cells = <0>;
	reg = <0xf800c000 0x100>;
	interrupts = <35 IRQ_TYPE_LEVEL_HIGH 0>;
	clocks = <&pmc PMC_TYPE_PERIPHERAL 35>, <&clk32k>;
	clock-names = "t0_clk", "slow_clk";
};

tcb1: timer@f8010000 {
	compatible = "atmel,at91sam9x5-tcb", "simple-mfd", "syscon";
	#address-cells = <1>;
	#size-cells = <0>;
	reg = <0xf8010000 0x100>;
	interrupts = <36 IRQ_TYPE_LEVEL_HIGH 0>;
	clocks = <&pmc PMC_TYPE_PERIPHERAL 36>, <&clk32k>;
	clock-names = "t0_clk", "slow_clk";
};

This however does not define how each TCB module and each channel is going to be used. This happens at the board level, by adding sub-nodes to the appropriate TCB module node.

First, each board needs to at least define which TCB module and channels should be used for the clocksource/clockevents. For example, arch/arm/boot/dts/at91-sama5d2_xplained.dts has:

tcb0: timer@f800c000 {
	timer0: timer@0 {
		compatible = "atmel,tcb-timer";
		reg = <0>;
	};

	timer1: timer@1 {
		compatible = "atmel,tcb-timer";
		reg = <1>;
	};
};

As can be seen in this example, the timer@0 and timer@1 nodes are sub-nodes of the timer@f800c000 node. The SAMA5D2 has 32-bit timer counters, so only one channel is needed for the clocksource, and another channel is needed for clock events. Older platforms such as the AT91SAM9260 would need:

tcb0: timer@fffa0000 {
	timer@0 {
		compatible = "atmel,tcb-timer";
		reg = <0>, <1>;
	};

	timer@2 {
		compatible = "atmel,tcb-timer";
		reg = <2>;
	};
};

Here, the first instance of atmel,tcb-timer uses two channels: on the AT91SAM9260, each channel is only 16-bit, so we need two channels for the clocksource. This is why we have reg = <0>, <1> in the first sub-node.

Now, to use some TCB channels as PWMs with the new patch series proposed by Alexandre, one would for example use:

&tcb1 {
	tcb1_pwm0: pwm@0 {
		compatible = "atmel,tcb-pwm";
		#pwm-cells = <3>;
		reg = <0>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_tcb1_tioa0 &pinctrl_tcb1_tiob0>;
	};

	tcb1_pwm1: pwm@1 {
		compatible = "atmel,tcb-pwm";
		#pwm-cells = <3>;
		reg = <1>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_tcb1_tioa1>;
	};
};

This uses the first two channels of TCB1 as PWMs, and provides two separate PWM devices, visible to user-space and to other kernel drivers.

Otherwise, to use a TCB as a quadrature decoder, one would use the following piece of Device Tree. Note that we must use the TCB0 module, as it is the only one that supports quadrature decoding. This means that the atmel,tcb-timer nodes for clocksource/clockevents support have to use TCB1.

&tcb0 {
	qdec: counter@0 {
		compatible = "atmel,tcb-capture";
		reg = <0>, <1>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_qdec_default>;
	};
};

A quadrature decoder needs two channels, hence the reg = <0>, <1>.

And if, in addition, you would like to set up an input device for the rotary encoder connected to the quadrature decoder, you can add:

rotary@0 {
	compatible = "rotary-encoder-counter";
	counter = <&qdec>;
	qdec-mode = <7>;
	poll-interval = <50>;
};

Note that this is not a sub-node of the TCB node: the rotary encoder needs to be described at the top level of the Device Tree, and it references the TCB channels used as a quadrature decoder by means of the counter = <&qdec>; phandle.

Of course, these different capabilities can be combined. For example, you could use the first two channels of TCB0 to implement a quadrature decoder using the counter subsystem, and the third channel of the same TCB module for a PWM, with TCB1 used for clocksource/clockevents. In this case, the Device Tree would look like this:

&tcb0 {
	counter@0 {
		compatible = "atmel,tcb-capture";
		reg = <0>, <1>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_qdec_default>;
	};

	pwm@2 {
		compatible = "atmel,tcb-pwm";
		#pwm-cells = <3>;
		reg = <2>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_tcb1_tioa1>;
	};
};

&tcb1 {
	timer@0 {
		compatible = "atmel,tcb-timer";
		reg = <0>, <1>;
	};

	timer@2 {
		compatible = "atmel,tcb-timer";
		reg = <2>;
	};
};

Conclusion

We hope that this blog post was useful to understand how Linux handles timer counters, and which Linux kernel subsystems are involved. Even though we used the Microchip TCB to illustrate our discussion, the concepts all apply to the timer counters of other platforms offering similar features.

Audio multi-channel routing and mixing using alsalib

Recently, one of our customers designing an embedded Linux system with specific audio needs had a use case where they had a sound card with more than one audio channel, and they needed to separate individual channels so that they could be used by different applications. This is a fairly common use case, and we would like to share in this blog post how we achieved this, for both input and output audio channels.

The most common use case is splitting a 4- or 8-channel sound card into multiple stereo PCM devices. For this, alsa-lib, the userspace API interface to the ALSA drivers, provides PCM plugins. Those plugins are configured through configuration files, usually /etc/asound.conf or $(HOME)/.asoundrc. However, through the configuration of /usr/share/alsa/alsa.conf, it is also possible, and in fact recommended, to use a card-specific configuration, named /usr/share/alsa/cards/<card_name>.conf.

The syntax of this configuration is documented in the alsa-lib configuration documentation, and the most interesting part of the documentation for our purpose is the pcm plugin documentation.

Audio inputs

For example, let’s say we have a 4-channel input sound card, which we want to split in 2 mono inputs and one stereo input, as follows:

Audio input example

In the ALSA configuration file, we start by defining the input pcm:

pcm_slave.ins {
	pcm "hw:0,1"
	rate 44100
	channels 4
}

pcm "hw:0,1" refers to the the second subdevice of the first sound card present in the system. In our case, this is the capture device. rate and channels specify the parameters of the stream we want to set up for the device. It is not strictly necessary but this allows to enable automatic sample rate or size conversion if this is desired.

Then we can split the inputs:

pcm.mic0 {
	type dsnoop
	ipc_key 12342
	slave ins
	bindings.0 0
}

pcm.mic1 {
	type plug
	slave.pcm {
		type dsnoop
		ipc_key 12342
		slave ins
		bindings.0 1
	}
}

pcm.mic2 {
	type dsnoop
	ipc_key 12342
	slave ins
	bindings.0 2
	bindings.1 3
}

mic0 is of type dsnoop, the plugin that splits capture PCMs. The ipc_key is an integer that has to be unique: it is used internally to share buffers. slave indicates the underlying PCM that will be split; it refers to the PCM device we defined before, under the name ins. Finally, bindings is an array mapping the PCM channels to the slave channels. This is why mic0 and mic1, which are mono inputs, both only use bindings.0, while mic2, being stereo, has both bindings.0 and bindings.1. Overall, mic0 will have channel 0 of our input PCM, mic1 will have channel 1, and mic2 will have channels 2 and 3.

The final interesting thing in this example is the difference between mic0 and mic1. While mic0 and mic2 will not do any conversion on their stream and pass it as-is to the slave PCM, mic1 uses the automatic conversion plugin, plug. So whatever type of stream is requested by the application, what is provided by the sound card will be converted to the correct format and rate. This conversion is done in software and so runs on the CPU, which is usually something that should be avoided on an embedded system.
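
With this configuration in place, the new PCMs can be exercised with the standard ALSA utilities, for example:

# arecord -D mic0 -f S16_LE -r 44100 capture-mono.wav
# arecord -D mic2 -f S16_LE -r 44100 -c 2 capture-stereo.wav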

Also, note that the channel splitting happens at the dsnoop level. Doing it at an upper level would mean that the 4 channels would be copied before being split. For example the following configuration would be a mistake:

pcm.dsnoop {
    type dsnoop
    ipc_key 512
    slave {
        pcm "hw:0,0"
        rate 44100
    }
}

pcm.mic0 {
    type plug
    slave dsnoop
    ttable.0.0 1
}

pcm.mic1 {
    type plug
    slave dsnoop
    ttable.0.1 1
}

Audio outputs

For this example, let’s say we have a 6-channel output that we want to split in 2 mono outputs and 2 stereo outputs:

Audio output example

As before, let’s define the slave PCM for convenience:

pcm_slave.outs {
	pcm "hw:0,0"
	rate 44100
	channels 6
}

Now, for the split:

pcm.out0 {
	type dshare
	ipc_key 4242
	slave outs
	bindings.0 0
}

pcm.out1 {
	type plug
	slave.pcm {
		type dshare
		ipc_key 4242
		slave outs
		bindings.0 1
	}
}

pcm.out2 {
	type dshare
	ipc_key 4242
	slave outs
	bindings.0 2
	bindings.1 3
}

pcm.out3 {
	type dmix
	ipc_key 4242
	slave outs
	bindings.0 4
	bindings.1 5
}

out0 is of type dshare. While dmix is usually presented as the reverse of dsnoop, dshare is more efficient, as it simply gives exclusive access to channels instead of potentially software-mixing multiple streams into one. Again, the difference can be significant in terms of CPU utilization in the embedded space. Then, there is nothing new compared to the audio input example before:

  • out1 allows sample format and rate conversion
  • out2 is stereo
  • out3 is stereo and allows multiple concurrent users that will be mixed together as it is of type dmix

A common mistake here would be to use the route plugin on top of dmix to split the streams: this would first transform the mono or stereo streams into 6-channel streams and then mix them all together. All these operations would be costly in CPU utilization, while dshare is basically free.
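
Again, the resulting PCMs can be checked with the standard ALSA utilities; since out3 is a dmix device, two players can even share it (the .wav file names are just examples):

# speaker-test -D out0 -c 1 -t wav
# aplay -D out3 first.wav &
# aplay -D out3 second.wav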

Duplicating streams

Another common use case is trying to copy the same PCM stream to multiple outputs. For example, we have a mono stream, which we want to duplicate into a stereo stream, and then feed this stereo stream to specific channels of a hardware device. This can be achieved using the following configuration snippet:

pcm.out4 {
	type route
	slave.pcm {
		type dshare
		ipc_key 4242
		slave outs
		bindings.0 0
		bindings.1 5
	}
	ttable.0.0 1
	ttable.0.1 1
}

The route plugin allows duplicating the mono stream into a stereo stream, using the ttable property. Then, the dshare plugin is used to take the first channel of this stereo stream and send it to the first hardware channel (bindings.0 0), while sending the second channel of the stereo stream to the sixth hardware channel (bindings.1 5).

Conclusion

When properly used, the dsnoop, dshare and dmix plugins can be very efficient. In our case, simply rewriting the alsalib configuration on an i.MX6 based system with a 16-channel sound card dropped the CPU utilization from 97% to 1-3%, leaving plenty of CPU time to run further audio processing and other applications.

Bootlin toolchains updated, edition 2020.02

Bootlin provides a large number of ready-to-use pre-built cross-compilation toolchains at toolchains.bootlin.com. We announced the service in June 2017, and released multiple versions of the toolchains up to 2018.11.

After a long pause, we are happy to announce that we have released a new set of toolchains, built using Buildroot 2020.02 and therefore labelled 2020.02, even though they were published in April. They are available for 38 CPU architectures or architecture variants, supporting the glibc, uclibc-ng and musl C libraries whenever possible.

For each toolchain, we offer two variants: one called stable which uses “proven” versions of gcc, binutils and gdb, and one called bleeding edge which uses the latest version of gcc, binutils and gdb.

Overall, these 2020.02 toolchains use:

  • gcc 8.4.0 for stable, 9.3.0 for bleeding edge
  • binutils 2.32 for stable, 2.33.1 for bleeding edge
  • gdb 8.2.1 for stable, 8.3 for bleeding edge
  • linux headers 4.4.215 for stable, 4.19.107 for bleeding edge
  • glibc 2.30
  • uclibc-ng 1.0.32
  • musl 1.1.24

2020.02 toolchains

In total, that’s 154 different toolchains that we are providing! If you are using these toolchains and face any issue, or want to request some additional change or feature, do not hesitate to contact us through the corresponding Github project. Also, I’d like to thank Romain Naour, from Smile, for his contributions to this project.
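
Using one of these toolchains boils down to downloading the tarball for your architecture and C library, extracting it, and adding its bin/ directory to your PATH. As an illustration (the exact tarball and cross-compiler prefix names below are assumptions; check toolchains.bootlin.com for the actual ones):

$ tar xf armv7-eabihf--glibc--stable-2020.02-2.tar.bz2
$ export PATH=$PWD/armv7-eabihf--glibc--stable-2020.02-2/bin:$PATH
$ arm-buildroot-linux-gnueabihf-gcc -o hello hello.c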

SFP modules on a board running Linux

We recently worked on Linux support for a custom hardware platform based on the Texas Instruments AM335x system-on-chip, with a somewhat special networking setup: each of the two ports of the AM335x Ethernet MAC was connected to a Microchip VSC8572 Ethernet PHY, which itself allowed to access an SFP cage. In addition, the I2C buses connected to the SFP cages, which are used at runtime to communicate with the inserted SFP modules, instead of being connected to an I2C controller of the system-on-chip as they usually are, were connected to the I2C controller embedded in the VSC8572 PHYs.

The diagram below depicts the overall hardware layout:

Our goal was to use Linux and to offer runtime dynamic reconfiguration of the networking links based on the SFP module plugged in. To achieve this, we used, and extended, a combination of Linux kernel internal frameworks, such as Phylink and the SFP bus support, and of networking device drivers. In this blog post, we’ll share some background information about these technologies, the challenges we faced and our current status.

Introduction to the SFP interface

The small form-factor pluggable (SFP) is a hot-pluggable network interface module. Its electrical interface and its form-factor are well specified, which allows industry players to build platforms that can host SFP modules, and to be sure that they will be able to use any available SFP module on the market. It is commonly used in the networking industry, as it allows connecting various types of transceivers to a fixed interface.

In addition to the data signals, an SFP cage provides a number of control signals:

  • a Tx_Fault pin, for transmitter fault indication
  • a Tx_Disable pin, for disabling optical output
  • a MOD_Abs pin, to detect the absence of a module
  • an Rx_LOS pin, to denote a receiver loss of signal
  • 2-wire data and clock lines, used to communicate with the modules

Modules plugged into SFP cages can be direct attached cables, in which case they do not have any built-in transceiver, or they can include a transceiver (i.e. an embedded PHY) which transforms the signal into another format. This means that in our setup, there can be two PHYs between the Ethernet MAC and the physical medium: the Microchip VSC8572 PHY and the PHY embedded in the SFP module that is plugged in.

All SFP modules embed an EEPROM, accessible at a standardized I2C address and in a standardized format, which allows the host system to discover which SFP module is connected and what its capabilities are. In addition, if the SFP module contains an embedded PHY, it is also accessible through the same I2C bus.
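
Once the kernel properly handles the module, this EEPROM is also how user-space identifies it: ethtool can dump and decode it, for example (assuming the corresponding network interface is eth0):

# ethtool -m eth0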

Challenges

We had to overcome a few challenges to get this setup working, using a mainline Linux kernel.

As we discussed earlier, having SFP modules means the whole MAC-PHY-SFP link has to be reconfigured at runtime, as the PHY in the SFP module is hot-pluggable. To solve this issue, a framework called Phylink was introduced in mid-2017 to represent networking links, allowing their components to share state and to be reconfigured at runtime. For us, this meant we first had to convert the CPSW MAC driver to use this Phylink framework. For a detailed explanation of what composes Ethernet links and why Phylink is needed, we gave a talk at the Embedded Linux Conference Europe in 2018. While we were working on this, and after we first moved the CPSW MAC driver to use Phylink, this driver was rewritten and a new CPSW MAC driver was sent upstream (CONFIG_TI_CPSW vs CONFIG_TI_CPSW_SWITCHDEV). We are still using the old driver for now, and this is why we did not send our patches upstream: we think it does not make sense to convert a driver which is now deprecated.

A second challenge was to integrate the 2-wire capability of the VSC8572 PHY into the networking PHY and SFP common code, as our SFP modules I2C bus is connected to the PHY and not an I2C controller from the system-on-chip. We decided to expose this PHY 2-wire capability as an SMBus controller, as the functionality offered by the PHY does not make it a fully I2C compliant controller.

Outcome

The challenges described above made the project quite complex overall, but we were able to get SFP modules working, and to dynamically switch modes depending on the capabilities of the one currently plugged-in. We tested with both direct attached cables and a wide variety of SFP modules of different speeds and functionality. At the moment only a few patches were sent upstream, but we’ll contribute more over time.

For an overview of some of the patches we made and used, we pushed a branch on Github (be aware those patches aren’t upstream yet and will need some further work to be acceptable upstream). Here are the details of the patches:

In terms of Device Tree representation, we first have a description of the two SFP cages. They describe the different GPIOs used for the control signals, as well as the I2C bus that goes to each SFP cage. Note that the gpio_sfp is a GPIO expander, itself on I2C, rather than directly GPIOs of the system-on-chip.

/ {
       sfp_eth0: sfp-eth0 {
               compatible = "sff,sfp";
               i2c-bus = <&phy0>;
               los-gpios = <&gpio_sfp 3 GPIO_ACTIVE_HIGH>;
               mod-def0-gpios = <&gpio_sfp 4 GPIO_ACTIVE_LOW>;
               tx-disable-gpios = <&gpio_sfp 5 GPIO_ACTIVE_HIGH>;
               tx-fault-gpios = <&gpio_sfp 6 GPIO_ACTIVE_HIGH>;
       };

       sfp_eth1: sfp-eth1 {
               compatible = "sff,sfp";
               i2c-bus = <&phy1>;
               los-gpios = <&gpio_sfp 10 GPIO_ACTIVE_HIGH>;
               mod-def0-gpios = <&gpio_sfp 11 GPIO_ACTIVE_LOW>;
               tx-disable-gpios = <&gpio_sfp 13 GPIO_ACTIVE_HIGH>;
               tx-fault-gpios  = <&gpio_sfp 12 GPIO_ACTIVE_HIGH>;
       };
};

Then the MAC is described as follows:

&mac {
       pinctrl-names = "default";
       pinctrl-0 = <&cpsw_default>;
       status = "okay";
       dual_emac;
};

&cpsw_emac0 {
       status = "okay";
       phy = <&phy0>;
       phy-mode = "rgmii-id";
       dual_emac_res_vlan = <1>;
};

&cpsw_emac1 {
       status = "okay";
       phy = <&phy1>;
       phy-mode = "rgmii-id";
       dual_emac_res_vlan = <2>;
};

So we have both ports of the MAC enabled, with an RGMII interface to the PHY. And finally, the MDIO bus of the system-on-chip is described as follows. We have two sub-nodes, one for each VSC8572 PHY, respectively at address 0x0 and 0x1 on the CPSW MDIO bus. Each PHY is connected to its respective SFP cage node (sfp_eth0 and sfp_eth1) and provides access to the SFP EEPROMs as regular EEPROMs.

&davinci_mdio {
       pinctrl-names = "default";
       pinctrl-0 = <&davinci_mdio_default>;
       status = "okay";

       phy0: ethernet-phy@0 {
               #address-cells = <1>;
               #size-cells = <0>;

               reg = <0>;
               fiber-mode;
               vsc8584,los-active-low;
               sfp = <&sfp_eth0>;

               sfp0_eeprom: eeprom@50 {
                       compatible = "atmel,24c02";
                       reg = <0x50>;
                       read-only;
               };

               sfp0_eeprom_ext: eeprom@51 {
                       compatible = "atmel,24c02";
                       reg = <0x51>;
                       read-only;
               };
       };

       phy1: ethernet-phy@1 {
               #address-cells = <1>;
               #size-cells = <0>;

               reg = <1>;
               fiber-mode;
               vsc8584,los-active-low;
               sfp = <&sfp_eth1>;

               sfp1_eeprom: eeprom@50 {
                       compatible = "atmel,24c02";
                       reg = <0x50>;
                       read-only;
               };

               sfp1_eeprom_ext: eeprom@51 {
                       compatible = "atmel,24c02";
                       reg = <0x51>;
                       read-only;
               };
       };
};

Conclusion

While we are still working on pushing all of this work upstream, we’re happy to have been able to work on these topics. Do not hesitate to reach out to us if you have projects that involve Linux and SFP modules!

Linux 5.6, Bootlin contributions inside

Linux 5.6 was released last Sunday. As usual, LWN has the best coverage of the new features merged in this release: part 1 and part 2. Sadly, the corresponding KernelNewbies page has not yet been updated with the usual very interesting summary of the important changes.

Bootlin contributed a total of 95 patches to this release, which makes us the 27th contributing company by number of commits, according to the statistics. The main highlights of our contributions are:

  • Our work on supporting hardware-offloading of MACsec encryption/decryption in the networking subsystem and support for this offloading for some Microchip/Vitesse PHYs has been merged. See our previous blog post for more details about this work done by Bootlin engineer Antoine Ténart
  • As part of our work on the Rockchip PX30 system-on-chip, we contributed support for LVDS display on Rockchip PX30, and support for the Satoz SAT050AT40H12R2 panel. This work was done by Miquèl Raynal
  • Alexandre Belloni as the RTC maintainer did his usual number of cleanup and improvements to existing RTC drivers
  • We made a number of small contributions to the Microchip AT91/SAMA5 platform: support for the Smartkiz platform from Overkiz, phylink improvements in the macb driver, etc.
  • Paul Kocialkowski improved the Intel GMA 500 DRM driver to support page flip.
  • Paul Kocialkowski contributed support for the Xylon LogiCVC GPIO controller, which is a preliminary step to contributing the Xylon LogiCVC display controller support. See our blog post on this topic.

In addition to being contributors, a number of Bootlin engineers are also maintainers of various parts of the Linux kernel, and as such:

  • Alexandre Belloni, as the RTC subsystem maintainer and Microchip platforms co-maintainer, has reviewed and merged 55 patches from other contributors
  • Miquèl Raynal, as the MTD co-maintainer, has reviewed and merged 21 patches from other contributors
  • Grégory Clement, as the Marvell EBU platform co-maintainer, has reviewed and merged 12 patches from other contributors

Here is the detail of all our contributions:

Covid-19: Bootlin proposes online sessions for all its courses

Like most of us, due to the Covid-19 epidemic, you may be forced to work from home. To take advantage of this time confined at home, we are now proposing all our training courses as online seminars. You can then benefit from the content and quality of Bootlin training sessions, without leaving the comfort and safety of your home. During our online seminars, our instructors will alternate between presentations and practical demonstrations, executing the instructions of our practical labs.

At any time, participants will be able to ask questions.

We can propose such remote training both through public online sessions, open to individual registration, as well as dedicated online sessions, for participants from the same company.

Public online sessions

We’re trying to propose time slots that should be manageable from Europe, the Middle East, Africa and at least the East Coast of North America. All such sessions will be taught in English. As usual with all our sessions, all our training materials (lectures and lab instructions) are freely available from the pages describing our courses.

Our Embedded Linux and Linux kernel courses are delivered over 7 half days of 4 hours each, while our Yocto Project, Buildroot and Linux Graphics courses are delivered over 4 half days. For our embedded Linux and Yocto Project courses, we propose an additional date in case some extra time is needed to complete the agenda.

Here are all the available sessions. If the situation lasts longer, we will create new sessions as needed:

Here are all the available sessions, with their dates, times, durations, expected trainers, and cost:

  • Embedded Linux (agenda): Sep. 28, 29, 30, Oct. 1, 2, 5, 6, 2020; 17:00 – 21:00 (Paris), 8:00 – 12:00 (San Francisco); 28 h; expected trainer: Michael Opdenacker; 829 EUR + VAT* (register)
  • Embedded Linux (agenda): Nov. 2, 3, 4, 5, 6, 9, 10, 12, 2020; 14:00 – 18:00 (Paris), 8:00 – 12:00 (New York); 28 h; expected trainer: Michael Opdenacker; 829 EUR + VAT* (register)
  • Linux kernel (agenda): Nov. 16, 17, 18, 19, 23, 24, 25, 26, 2020; 14:00 – 18:00 (Paris time); 28 h; expected trainer: Alexandre Belloni; 829 EUR + VAT* (register)
  • Yocto Project (agenda): Nov. 30, Dec. 1, 2, 3, 4, 2020; 14:00 – 18:00 (Paris time); 16 h; expected trainer: Maxime Chevallier; 519 EUR + VAT* (register)
  • Buildroot (agenda): Dec. 7, 8, 9, 10, 11, 2020; 14:00 – 18:00 (Paris time); 16 h; expected trainer: Thomas Petazzoni; 519 EUR + VAT* (register)
  • Linux Graphics (agenda): Dec. 1, 2, 3, 4, 2020; 14:00 – 18:00 (Paris time); 16 h; expected trainer: Paul Kocialkowski; 519 EUR + VAT* (register)

* VAT: applies to businesses in France and to individuals from all countries. Businesses in the European Union won’t be charged VAT only if they provide valid company information and a VAT number to Eventbrite at registration time. For businesses in other countries, we should be able to grant a VAT refund, provided they send us a proof of company incorporation before the end of the session.

Each public session will be confirmed once there are at least 6 participants. If the minimum number of participants is not reached, Bootlin will propose new dates or a full refund (including Eventbrite fees) if no new date works for the participant.

We guarantee that the maximum number of participants will be 12.

Dedicated online sessions

If you have enough people to train, such dedicated sessions can be a worthy alternative to public ones:

  • Flexible dates and daily durations, corresponding to the availability of your teams.
  • Confidentiality: freedom to ask questions that are related to your company’s projects and plans.
  • If time is left, possibility to have knowledge-sharing time with the instructor, which could go beyond the scope of the training course.
  • Language: possibility to have a session in French instead of in English.

Online seminar details

Each session will be given through Jitsi Meet, a Free Software solution that we are trying to promote. As a backup solution, we will also be able to use Google Hangouts Meet. Each participant should have her or his own connection and computer (with webcam and microphone), and if possible a headset, to avoid echo issues between audio input and output. This is probably the best solution to allow each participant to ask questions and write comments in the chat window. We also support people connecting from the same conference room with suitable equipment.

Each participant is asked to connect 15 minutes before the session starts, to make sure her or his setup works (instructions will be sent before the event).

How to register

For online public sessions, use the EventBrite links in the above list of sessions to register one or several individuals.

To register an entire group (for dedicated sessions), please contact training@bootlin.com and tell us the type of session you are interested in. We will then send you a registration form to collect all the details we need to send you a quote.

You can also ask all your questions by calling +33 484 258 097.

Questions and answers

Q: Should I order hardware in advance, or is hardware included in the training cost?
A: No, practical labs are replaced by technical demonstrations, so you will be able to follow the course without any hardware. However, you can still order the hardware by checking the “Shopping list” pages of the presentation materials for each session. This way, between sessions, you will be able to replay by yourself the labs demonstrated by your trainer, ask all your questions, and get help through our dedicated Matrix channel to reach your goals.

Q: Why just demos instead of practicing with real hardware?
A: We are not ready to support a sufficient number of participants doing practical labs remotely with real hardware. This is more complicated and time-consuming than in real life. Hence, what we’re proposing is to replace practical labs with practical demonstrations shown by the instructor. The instructor will go through the normal practical labs with the standard hardware that we’re using.

Q: Would it be possible to run practical labs on the QEMU emulator?
A: Yes, it’s coming. In the embedded Linux course, we are already offering instructions to run most practical labs on QEMU between the sessions, before the practical demos performed by the trainer. We should also be able to propose such instructions for our Yocto Project and Buildroot training courses in the next months. Such work is likely to take more time for our Linux kernel course, whose practical labs are closer to the hardware that we use.

Q: Why proposing half days instead of full days?
A: From our experience, it’s very difficult to stay focused on a new technical topic for an entire day without having periodic moments when you are active (which happens in our public and on-site sessions, in which we interleave lectures and practical labs). Hence, we believe that daily slots of 4 hours (with a small break in the middle) is a good solution, also leaving extra time for following up your normal work.

Building a Linux system for the STM32MP1: remote firmware updates

After another long break, here is our new article in the series of blog posts about building a Linux system for the STM32MP1 platform. After showing how to build a minimal Linux system for the STM32MP157 platform, how to connect and use an I2C based pressure/temperature/humidity sensor and how to integrate Qt5 in our system, how to set up a development environment to write our own Qt5 application, how to develop a Qt5 application, and how to setup factory flashing, we are now going to discuss the topic of in-field firmware update.

List of articles in this series:

  1. Building a Linux system for the STM32MP1: basic system
  2. Building a Linux system for the STM32MP1: connecting an I2C sensor
  3. Building a Linux system for the STM32MP1: enabling Qt5 for graphical applications
  4. Building a Linux system for the STM32MP1: setting up a Qt5 application development environment
  5. Building a Linux system for the STM32MP1: developing a Qt5 graphical application
  6. Building a Linux system for the STM32MP1: implementing factory flashing
  7. Building a Linux system for the STM32MP1: remote firmware updates

Why remote firmware updates?

The days when it was possible to build and flash an embedded system firmware, ship the device and forget it are long behind us. Systems have gotten more complicated, so we have to fix bugs and security issues after the device has been shipped, and we often want to deploy new features in the field into existing devices. For all those reasons, the ability to remotely update the firmware of embedded devices is now a must-have.

Open-source firmware update tools

There are different possibilities to update your system:

  • If you’re using a binary distribution, use the package manager of this distribution to update individual components
  • Do complete system image updates, at the block-level, replacing the entire system image with an updated one. Three main open-source solutions are available: swupdate, Mender.io and RAUC.
  • Do file-based updates, with solutions such as OSTree.

In this blog post, we are going to show how to set up the swupdate solution.

swupdate is a tool installed on the target that can receive an update image (.swu file), either from local media or from a remote server, and use it to update various parts of the system. Typically, it will be used to update the Linux kernel and the root filesystem, but it can also be used to update additional partitions, FPGA bitstreams, etc.

swupdate implements two possible update strategies:

  • A dual copy strategy, where the storage has enough space to store two copies of the entire filesystem. This makes it possible to run the system from copy A, update copy B, and reboot into copy B. The next update will of course update copy A.
  • A single copy strategy, where the upgrade process consists of rebooting into a minimal system that runs entirely from RAM, and that is responsible for updating the system on storage.

For this blog post, we will implement the dual copy strategy, but the single copy strategy is also supported for systems with tighter storage restrictions.

We are going to set up swupdate step by step: first triggering updates locally, and then seeing how to trigger updates remotely.

Local usage of swupdate

Add USB storage support

As a first step, in order to transfer the update image to the target, we will use a USB stick. This requires having USB mass storage support in the Linux kernel. So let’s adjust our Linux kernel configuration by running make linux-menuconfig. Within the Linux kernel configuration:

  • Enable the CONFIG_SCSI option. This is a requirement for USB mass storage support
  • Enable the CONFIG_BLK_DEV_SD option, needed for SCSI disk support, which is another requirement for USB mass storage.
  • Enable the CONFIG_USB_STORAGE option.
  • The CONFIG_VFAT_FS option, to support the FAT filesystem, is already enabled.
  • Enable the CONFIG_NLS_CODEPAGE_437 and CONFIG_NLS_ISO8859_1 options, to have the necessary support to decode filenames in the FAT filesystem.

Then, run make linux-update-defconfig to preserve these kernel configuration changes in your kernel configuration file at board/stmicroelectronics/stm32mp157-dk/linux.config.
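
For reference, after this step, the fragment saved in board/stmicroelectronics/stm32mp157-dk/linux.config should contain lines similar to the following (a sketch; the exact contents of the file may differ):

CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_USB_STORAGE=y
CONFIG_VFAT_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y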

swupdate setup

In Target packages, System tools, enable swupdate. You can disable the install default website setting since we are not going to use the internal swupdate web server.

Take this opportunity to also enable the gptfdisk tool and its sgdisk sub-option in the Hardware handling submenu. We will need this tool later to update the partition table at the end of the update process.
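
In defconfig form, these selections correspond to a fragment like the following (a sketch based on the standard Buildroot option names):

BR2_PACKAGE_SWUPDATE=y
# BR2_PACKAGE_SWUPDATE_INSTALL_WEBSITE is not set
BR2_PACKAGE_GPTFDISK=y
BR2_PACKAGE_GPTFDISK_SGDISK=y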

Now that we have both USB storage support and the swupdate package enabled, let’s build a new version of our system by running make. Flash the resulting image on your SD card, and boot your target. You should have swupdate available:

# swupdate -h
Swupdate v2018.11.0

Licensed under GPLv2. See source distribution for detailed copyright notices.

swupdate (compiled Mar  4 2020)
Usage swupdate [OPTION]
 -f, --file <filename>          : configuration file to use
 -p, --postupdate               : execute post-update command
 -e, --select <software>,<mode> : Select software images set and source
                                  Ex.: stable,main
 -i, --image <filename>         : Software to be installed
 -l, --loglevel <level>         : logging level
 -L, --syslog                   : enable syslog logger
 -n, --dry-run                  : run SWUpdate without installing the software
 -N, --no-downgrading <version> : not install a release older as <version>
 -o, --output <filename>        : saves the incoming stream
 -v, --verbose                  : be verbose, set maximum loglevel
     --version                  : print SWUpdate version and exit
 -c, --check                    : check image and exit, use with -i <filename>
 -h, --help                     : print this help and exit
 -w, --webserver [OPTIONS]      : Parameters to be passed to webserver
	mongoose arguments:
	  -l, --listing                  : enable directory listing
	  -p, --port <port>              : server port number  (default: 8080)
	  -r, --document-root <path>     : path to document root directory (default: .)
	  -a, --api-version [1|2]        : set Web protocol API to v1 (legacy) or v2 (default v2)
	  --auth-domain                  : set authentication domain if any (default: none)
	  --global-auth-file             : set authentication file if any (default: none)

Take a USB stick with a FAT filesystem on it, which you can mount:

# mount /dev/sda1 /mnt

If that works, we’re now ready to move on to the next step: generating a firmware update image.

Generate the swupdate image

swupdate has its own update image format, and you need to generate an image that complies with this format so that swupdate can use it to upgrade your system. The format is simple: it’s a CPIO archive, which contains one file named sw-description describing the contents of the update image, and one or several additional files that are the images to update.

First, let’s create our sw-description file in board/stmicroelectronics/stm32mp157-dk/sw-description. The tags and properties available are described in the swupdate documentation.

software = {
	version = "0.1.0";
	rootfs = {
		rootfs-1: {
			images: (
			{
				filename = "rootfs.ext4.gz";
				compressed = true;
				device = "/dev/mmcblk0p4";
			});
		}
		rootfs-2: {
			images: (
			{
				filename = "rootfs.ext4.gz";
				compressed = true;
				device = "/dev/mmcblk0p5";
			});
		}
	}
}

This describes a single software component rootfs, which is available as two software collections, to implement the dual copy mechanism. The root filesystem will have one copy in /dev/mmcblk0p4 and another copy in /dev/mmcblk0p5. They will be updated from a compressed image called rootfs.ext4.gz.

Once this sw-description file is written, we can write a small script that generates the swupdate image. We’ll put this script in board/stmicroelectronics/stm32mp157-dk/gen-swupdate-image.sh:

#!/bin/bash

BOARD_DIR=$(dirname $0)

# Copy the update description next to the images generated by Buildroot
cp ${BOARD_DIR}/sw-description ${BINARIES_DIR}

IMG_FILES="sw-description rootfs.ext4.gz"

# Create the swupdate CPIO archive, with CRC checksums for each file
pushd ${BINARIES_DIR}
for f in ${IMG_FILES} ; do
	echo ${f}
done | cpio -ovL -H crc > buildroot.swu
popd

It simply copies the sw-description file to BINARIES_DIR (which is output/images), and then creates a buildroot.swu CPIO archive that contains the sw-description and rootfs.ext4.gz files.

Of course, make sure this script has executable permissions.

Then, we need to slightly adjust our Buildroot configuration, so run make menuconfig, and:

  • In System configuration, in the option Custom scripts to run after creating filesystem images, add board/stmicroelectronics/stm32mp157-dk/gen-swupdate-image.sh after the existing value support/scripts/genimage.sh. This will make sure our new script generating the swupdate image is executed as a post-image script, at the end of the build.
  • In Filesystem images, enable the gzip compression method for the ext2/3/4 root filesystem, so that a rootfs.ext4.gz image is generated.

With that in place, we are now able to generate our firmware image, by simply running make in Buildroot. At the end of the build, the output/images/ folder should contain the sw-description and rootfs.ext4.gz files. You can look at the contents of buildroot.swu:

$ cat output/images/buildroot.swu | cpio -it
sw-description
rootfs.ext4.gz
58225 blocks

Partitioning scheme and booting logic

We now need to adjust the partitioning scheme of our SD card so that it has two partitions for the root filesystem, one for each copy. This partitioning scheme is defined in board/stmicroelectronics/stm32mp157-dk/genimage.cfg, which we change to:

image sdcard.img {
        hdimage {
                gpt = "true"
        }

        partition fsbl1 {
                image = "tf-a-stm32mp157c-dk2.stm32"
        }

        partition fsbl2 {
                image = "tf-a-stm32mp157c-dk2.stm32"
        }

        partition ssbl {
                image = "u-boot.stm32"
        }

        partition rootfs1 {
                image = "rootfs.ext4"
                partition-type = 0x83
                bootable = "yes"
                size = 256M
        }

        partition rootfs2 {
                partition-type = 0x83
                size = 256M
        }
}

As explained in the first blog post of this series, the /boot/extlinux/extlinux.conf file is read by the bootloader to know how to boot the system. Among other things, this file defines the Linux kernel command line, which contains root=/dev/mmcblk0p4 to tell the kernel where the root filesystem is. But with our dual copy upgrade scheme, the root filesystem will sometimes be on /dev/mmcblk0p4, sometimes on /dev/mmcblk0p5. To achieve that without constantly updating the extlinux.conf file, we will use /dev/mmcblk0p${devplist} instead. devplist is a U-Boot variable that indicates from which partition the extlinux.conf file was read, which turns out to be the partition of our root filesystem. So, your board/stmicroelectronics/stm32mp157-dk/overlay/boot/extlinux/extlinux.conf file should look like this:

label stm32mp15-buildroot
  kernel /boot/zImage
  devicetree /boot/stm32mp157c-dk2.dtb
  append root=/dev/mmcblk0p${devplist} rootwait console=ttySTM0,115200 vt.global_cursor_default=0

For the dual copy strategy to work, we need to tell the bootloader to boot either from the root filesystem in the rootfs1 partition or from the one in the rootfs2 partition. This will be done using the bootable flag of each GPT partition: a post-update script will toggle the bootable flag of the 4th and 5th partitions of the SD card, so that the partition currently carrying the flag loses it and the other one gains it. Thanks to this, at the next reboot, U-Boot will consider the system located in the other SD card partition. This work is done by a /etc/swupdate/postupdate.sh script, which you will store in board/stmicroelectronics/stm32mp157-dk/overlay/etc/swupdate/postupdate.sh, and which contains:

#!/bin/sh
sgdisk -A 4:toggle:2 -A 5:toggle:2 /dev/mmcblk0
reboot

Make sure this script is executable.
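
You can verify the effect of this script: attribute 2 on a GPT partition is the “legacy BIOS bootable” flag, and sgdisk can query it. For example, when partition 4 currently carries the flag:

# sgdisk -A 4:get:2 /dev/mmcblk0
4:2:1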

With all these changes in place, let’s restart the Buildroot build by running make. The sdcard.img should contain the new partitioning scheme:

$ sgdisk -p output/images/sdcard.img
[...]
Number  Start (sector)    End (sector)  Size       Code  Name
   1              34             497   232.0 KiB   8300  fsbl1
   2             498             961   232.0 KiB   8300  fsbl2
   3             962            2423   731.0 KiB   8300  ssbl
   4            2424          526711   256.0 MiB   8300  rootfs1
   5          526712         1050999   256.0 MiB   8300  rootfs2

Reflash your SD card with the new sdcard.img, and boot this new system. Transfer the buildroot.swu update image to your USB stick.

Testing the firmware update locally

After booting the system, mount the USB stick, which contains the buildroot.swu file:

# mount /dev/sda1 /mnt/
# ls /mnt/
buildroot.swu

Let’s trigger the system upgrade with swupdate:

# swupdate -i /mnt/buildroot.swu -e rootfs,rootfs-2 -p /etc/swupdate/postupdate.sh

Swupdate v2018.11.0

Licensed under GPLv2. See source distribution for detailed copyright notices.

Registered handlers:
	dummy
	raw
	rawfile
software set: rootfs mode: rootfs-2
Software updated successfully
Please reboot the device to start the new software
[INFO ] : SWUPDATE successful ! 
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
# Stopping qt-sensor-demo: OK
Stopping dropbear sshd: OK
Stopping network: OK
Saving random seed... done.
Stopping klogd: OK
Stopping syslogd: OK
umount: devtmpfs busy - remounted read-only
[  761.949576] EXT4-fs (mmcblk0p4): re-mounted. Opts: (null)
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system reboot
[  763.965243] reboot: ResNOTICE:  CPU: STM32MP157CAC Rev.B
NOTICE:  Model: STMicroelectronics STM32MP157C-DK2 Discovery Board

The -i option indicates the firmware update file, while the -e option indicates which software component should be updated. Here we update the rootfs in its slot 2, rootfs-2, which is in /dev/mmcblk0p5. The -p option tells swupdate to run our post-update script when the update is successful. In the above log, we see that the system reboots right after the update.
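
Note that swupdate also provides the -c and -n options listed in the help output earlier, which can be handy to validate an update image before applying it for real (a sketch, to be adapted to your image path):

# swupdate -c -i /mnt/buildroot.swu
# swupdate -n -i /mnt/buildroot.swu -e rootfs,rootfs-2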

At the next boot, you should see:

U-Boot 2018.11-stm32mp-r2.1 (Mar 04 2020 - 15:28:34 +0100)
[...]
mmc0 is current device
Scanning mmc 0:5...
Found /boot/extlinux/extlinux.conf
[...]
append: root=/dev/mmcblk0p5 rootwait console=ttySTM0,115200 vt.global_cursor_default=0

during the U-Boot part. U-Boot is now loading extlinux.conf from MMC partition 5 and has properly set root=/dev/mmcblk0p5, so the kernel and Device Tree will be loaded from MMC partition 5, and this partition will also be used by Linux as the root filesystem.

With all this logic, we could now have a script that gets triggered when a USB stick is inserted, mounts it, checks if an update image is available on the stick, and if so, launches swupdate and reboots; a minimal sketch of such a script is shown below. This would be perfectly fine for local updates, for example with an operator in charge of doing the update of the device.
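
Such a script is not shipped in this series; the following is only a sketch. It reuses the same sgdisk bootable-flag test that our swupdate init script will use later in this post to pick the root filesystem copy to update, and assumes the USB stick shows up as /dev/sda1:

#!/bin/sh

# Hypothetical hotplug-triggered update script (a sketch, not part of
# the series). Update the rootfs copy that is NOT currently bootable.
PART_STATUS=$(sgdisk -A 4:get:2 /dev/mmcblk0)
if test "${PART_STATUS}" = "4:2:1" ; then
	ROOTFS=rootfs-2
else
	ROOTFS=rootfs-1
fi

# Assumption: the USB stick appears as /dev/sda1
mount /dev/sda1 /mnt || exit 1
if [ -f /mnt/buildroot.swu ]; then
	# On success, the post-update script toggles the bootable
	# flags and reboots the system
	swupdate -i /mnt/buildroot.swu -e rootfs,${ROOTFS} \
		-p /etc/swupdate/postupdate.sh
fi
umount /mnt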

However, we can do better, and support over-the-air updates, a topic that we will discuss in the next section.

Over-the-air updates

To support over-the-air updates with swupdate, we will have to:

  1. Install on a server a web interface from which the swupdate program can retrieve firmware update files, and from which the user can trigger updates.
  2. Run swupdate in daemon mode on the target.

Set up the web server: hawkBit

swupdate is capable of interfacing with a management interface provided by the Eclipse hawkBit project. Using this web interface, one can manage a fleet of embedded devices and roll out updates to these devices remotely.

hawkBit has plenty of capabilities, and we are here going to set it up in a very minimal way, with no authentication and a very simple configuration.

As suggested on the project’s getting started page, we’ll use a pre-existing Docker container image to run hawkBit:

sudo docker run -p 8080:8080 hawkbit/hawkbit-update-server:latest \
     --hawkbit.dmf.rabbitmq.enabled=false \
     --hawkbit.server.ddi.security.authentication.anonymous.enabled=true

After a short while, it should show:

2020-03-06 09:15:46.492  ... Started ServerConnector@3728a578{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2020-03-06 09:15:46.507  ... Jetty started on port(s) 8080 (http/1.1) with context path '/'
2020-03-06 09:15:46.514  ... Started Start in 21.312 seconds (JVM running for 22.108)

From this point, you can connect with your web browser to http://localhost:8080 to access the hawkBit interface. Login with the admin login and admin password.

hawkBit login

Once in the main hawkBit interface, go to the System Config tab, and enable the option Allow targets to download artifacts without security credentials. Of course, for a real deployment, you will want to set up proper credentials and authentication.

hawkBit System Config

In the Distribution tab, create a new Distribution by clicking on the plus sign in the Distributions panel:

hawkBit New Distribution

Then in the same tab, but in the Software Modules panel, create a new software module:

hawkBit New Software Module

Once done, assign the newly added software module to the Buildroot distribution by dragging and dropping it onto the Buildroot distribution. Things should then look like this:

hawkBit Distribution

Things are now pretty much ready on the hawkBit side. Let’s move on with the embedded device side.

Configure swupdate

We need to adjust the configuration of swupdate to enable its Suricatta functionality, which is what allows it to connect to a hawkBit server.

In Buildroot’s menuconfig, enable the libcurl (BR2_PACKAGE_LIBCURL) and json-c (BR2_PACKAGE_JSON_C) packages, both of which are needed for swupdate’s Suricatta. While we’re at it, since we will adjust the swupdate configuration and want to preserve our custom configuration, change the BR2_PACKAGE_SWUPDATE_CONFIG option to point to board/stmicroelectronics/stm32mp157-dk/swupdate.config.
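
Again in defconfig form, these changes correspond to something like (a sketch):

BR2_PACKAGE_LIBCURL=y
BR2_PACKAGE_JSON_C=y
BR2_PACKAGE_SWUPDATE_CONFIG="board/stmicroelectronics/stm32mp157-dk/swupdate.config"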

Then, run:

$ make swupdate-menuconfig

to enter the swupdate configuration interface. Enable the Suricatta option, and inside this menu, in the Server submenu, verify that the Server Type is hawkBit support. You can now exit the swupdate menuconfig.

Save our custom swupdate configuration permanently:

$ make swupdate-update-defconfig

With this proper swupdate configuration in place, we now need to create a runtime configuration file for swupdate, and an init script to start swupdate at boot time. Let’s start with the runtime configuration file, which we’ll store in board/stmicroelectronics/stm32mp157-dk/overlay/etc/swupdate/swupdate.cfg, containing:

globals :
{
	postupdatecmd = "/etc/swupdate/postupdate.sh";
};

suricatta :
{
	tenant = "default";
	id = "DEV001";
	url = "http://192.168.42.1:8080";
};

We specify the path to our post-update script so that it doesn’t have to be specified on the command line, and then we provide the Suricatta configuration details: id is the unique identifier of our board, and url is the URL of the hawkBit instance (make sure to replace it with the IP address of the machine where you’re running hawkBit). tenant should be default, unless you’re using your hawkBit instance in a more complex setup, for example to serve multiple customers.
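
Before wiring Suricatta into an init script, you can check that the board reaches the hawkBit server by launching swupdate manually in daemon mode (a sketch; the empty -u argument simply enables Suricatta with no extra options, which is also what our init script below passes on a normal boot):

# swupdate -f /etc/swupdate/swupdate.cfg -L -e rootfs,rootfs-2 -u ""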

Our post-update script also needs to be slightly adjusted. Indeed, we will need a marker that tells us upon reboot that an update has been done, in order to confirm to the server that the update has been successfully applied. So we change board/stmicroelectronics/stm32mp157-dk/overlay/etc/swupdate/postupdate.sh to:

#!/bin/sh

PART_STATUS=$(sgdisk -A 4:get:2 /dev/mmcblk0)
if test "${PART_STATUS}" = "4:2:1" ; then
        NEXT_ROOTFS=/dev/mmcblk0p5
else
        NEXT_ROOTFS=/dev/mmcblk0p4
fi

# Add update marker
mount ${NEXT_ROOTFS} /mnt
touch /mnt/update-ok
umount /mnt

sgdisk -A 4:toggle:2 -A 5:toggle:2 /dev/mmcblk0
reboot

We simply mount the next root filesystem and create a file /update-ok at its root. This file will be checked by our swupdate init script, described below.

Then, our init script will be in board/stmicroelectronics/stm32mp157-dk/overlay/etc/init.d/S98swupdate, with executable permissions, and contain:

#!/bin/sh

DAEMON="swupdate"
PIDFILE="/var/run/$DAEMON.pid"

PART_STATUS=$(sgdisk -A 4:get:2 /dev/mmcblk0)
if test "${PART_STATUS}" = "4:2:1" ; then
	ROOTFS=rootfs-2
else
	ROOTFS=rootfs-1
fi

if test -f /update-ok ; then
	SURICATTA_ARGS="-c 2"
	rm -f /update-ok
fi

start() {
	printf 'Starting %s: ' "$DAEMON"
	# shellcheck disable=SC2086 # we need the word splitting
	start-stop-daemon -b -q -m -S -p "$PIDFILE" -x "/usr/bin/$DAEMON" \
		-- -f /etc/swupdate/swupdate.cfg -L -e rootfs,${ROOTFS} -u "${SURICATTA_ARGS}"
	status=$?
	if [ "$status" -eq 0 ]; then
		echo "OK"
	else
		echo "FAIL"
	fi
	return "$status"
}

stop() {
	printf 'Stopping %s: ' "$DAEMON"
	start-stop-daemon -K -q -p "$PIDFILE"
	status=$?
	if [ "$status" -eq 0 ]; then
		rm -f "$PIDFILE"
		echo "OK"
	else
		echo "FAIL"
	fi
	return "$status"
}

restart() {
	stop
	sleep 1
	start
}

case "$1" in
        start|stop|restart)
		"$1";;
	reload)
		# Restart, since there is no true "reload" feature.
		restart;;
        *)
                echo "Usage: $0 {start|stop|restart|reload}"
                exit 1
esac

This is modeled after typical Buildroot init scripts. A few points worth mentioning:

  • At the beginning of the script, we determine which copy of the root filesystem needs to be updated by looking at which partition is currently marked “bootable”. This is used to fill in the ROOTFS variable.
  • We also determine if we are just finishing an update, by looking at the presence of a /update-ok file.
  • When starting swupdate, we pass a few options: -f with the path to the swupdate configuration file, -L to enable syslog logging, -e to indicate which copy of the root filesystem should be updated, and -u "${SURICATTA_ARGS}" to run in Suricatta mode, with SURICATTA_ARGS containing -c 2 to confirm the completion of an update.

Generate a new image with the updated swupdate, its configuration file and init script, and reboot your system.

Deploying an update

When booting, your system starts swupdate automatically:

Starting swupdate: OK
[...]
# ps aux | grep swupdate
  125 root     /usr/bin/swupdate -f /etc/swupdate/swupdate.cfg -L -e rootfs,rootfs-1 -u
  132 root     /usr/bin/swupdate -f /etc/swupdate/swupdate.cfg -L -e rootfs,rootfs-1 -u

Back to the hawkBit administration interface, the Deployment tab should show one notification:

hawkBit new device notification

and when clicking on it, you should see our DEV001 device:

hawkBit new device

Now, go to the Upload tab, select the Buildroot software module, and click on Upload File. Upload the buildroot.swu file here:

hawkBit Upload

Back into the Deployment tab, drag and drop the Buildroot distribution into the DEV001 device. A pending update should appear in the Action history for DEV001:

hawkBit upgrade pending

The swupdate daemon on your target will regularly poll the server (every 300 seconds by default; this can be customized in the System Config tab of the hawkBit interface) to know if an update is available. When that happens, the update will be downloaded and applied, the system will reboot, and at the next boot the update will be confirmed as successful, showing this status in the hawkBit interface:

hawkBit upgrade confirmed

If you’ve reached this step, your system has been successfully updated, congratulations! Of course, there are many more things to do to get a proper swupdate/hawkBit deployment: assign unique device IDs (for example based on MAC addresses or SoC serial number), implement proper authentication between the swupdate client and the server, implement image encryption if necessary, improve the upgrade validation mechanism to make sure it detects if the new image doesn’t boot properly, etc.
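
As an illustration of the first point, a hypothetical way to assign unique device IDs would be to derive the Suricatta id from the board’s MAC address at boot, before starting swupdate (a sketch; assumes eth0 exists and the root filesystem is writable):

#!/bin/sh
# Hypothetical sketch: use the eth0 MAC address as the swupdate/hawkBit id
MAC=$(tr -d ':' < /sys/class/net/eth0/address)
sed -i "s/id = \".*\";/id = \"${MAC}\";/" /etc/swupdate/swupdate.cfg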

Conclusion

In this blog post, we have learned about firmware upgrade solutions, and specifically about swupdate. We’ve seen how to set up swupdate in the context of Buildroot, first for local updates, and then for remote updates using the hawkBit management interface. Hopefully this will be useful for your future embedded projects!

As usual, the complete Buildroot code to reproduce the same setup is available in our branch 2019.02/stm32mp157-dk-blog-7, in two commits: one for the first step implementing support just for local updates, and another one for remote update support.