Day 2 of the Embedded Linux Conference started with a keynote titled The Internet of Things, given by Mike Anderson. With such a title, one could have feared a fuzzy, marketing-style keynote, but with Mike Anderson as the speaker, that clearly couldn’t be the case. Mike is well-known at ELC and ELCE for his highly technical presentations on kernel debugging, JTAG, OpenOCD and more. This keynote was not directly related to embedded Linux, but rather to all the potential applications of modern technologies such as RFID, nano-robots and wireless communications. As Mike pointed out, there are lots of opportunities to optimize energy usage and make our lives easier, but there are also lots of dangers: surveillance, manipulation of information, loss of privacy, etc.
Right after Mike’s keynote, it was time for me to give the presentation Buildroot: A Nice, Simple, and Efficient Embedded Linux Build System. As the presenter, I am obviously not objective, but I think the presentation went well. I filled the entire time slot, leaving time for about five questions at the end. Around 60-70 people were in the room, quite a good number considering that a talk from the excellent Steven Rostedt was taking place in another room at the same time. I will put the slides online very soon. The talk was a general presentation of Buildroot, emphasizing all the cleanups and quality improvements we have made over the last three years, and highlighting the fact that Buildroot is really easy to understand: it is not a magic black box, contrary to some other embedded Linux build systems. That’s why I gave some details about how our package infrastructure works internally, to show that it is really simple. There were several questions about why we do not support binary packages, and of course I replied that it was a design decision in order to remain simple. At the end of the presentation, someone from Mentor Graphics came to tell me that saying no is an excellent thing, and that too many projects fail to say no to new features and therefore get more and more complicated.
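To give an idea of how simple that package infrastructure is, here is roughly what a Buildroot package recipe looks like. The package name, URL and commands are made up for the example, and the exact macro names have varied a bit across Buildroot releases, so take this as a sketch rather than copy-paste material:

```makefile
# package/foo/foo.mk — hypothetical package "foo"
FOO_VERSION = 1.0
FOO_SOURCE = foo-$(FOO_VERSION).tar.gz
FOO_SITE = http://example.com/downloads

define FOO_BUILD_CMDS
	$(MAKE) CC="$(TARGET_CC)" -C $(@D)
endef

define FOO_INSTALL_TARGET_CMDS
	$(INSTALL) -m 0755 $(@D)/foo $(TARGET_DIR)/usr/bin/foo
endef

$(eval $(call GENTARGETS))
```

The infrastructure takes care of downloading, extracting, patching and sequencing; the package author only fills in the variables and commands that are specific to the package.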
At the same time as my Buildroot talk, Steven Rostedt from Red Hat was presenting Automated Testing with ktest.pl (Embedded Edition), and Grégory attended it. Grégory reports: “As indicated in the title, it is the “embedded” version of an earlier talk. I don’t know if Steven is really new to the embedded field or if he just pretends to be, but the result is that for a newcomer to embedded Linux, this talk is really well detailed. He shows how to set up the board step by step, pointing out the problems you usually run into. But the real topic is the ktest.pl script and how to use it. After two hours of presentation I was totally convinced of the usefulness of this script. It will help a lot in automating the tasks we usually do by hand: running a git bisect, checking that our stack of patches doesn’t break anything, checking that we don’t have any regression at runtime or at build time. All these tasks can be done with ktest.pl, and in a very simple way!”
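For the curious, a ktest.pl configuration is just a plain text file of option assignments and test sections. The sketch below uses option names from ktest’s sample.conf, but the machine name, paths, console command and kernel versions are all made up for the example:

```
# Hypothetical ktest.pl configuration sketch
MACHINE = mybox
BUILD_DIR = /home/test/linux
OUTPUT_DIR = /home/test/build/${MACHINE}
BUILD_TARGET = arch/arm/boot/uImage
CONSOLE = nc -d localhost 3001

# First test: just make sure the tree builds with the existing config
TEST_START
TEST_TYPE = build
BUILD_TYPE = oldconfig

# Second test: bisect a boot regression between two known kernels
TEST_START
TEST_TYPE = bisect
BISECT_TYPE = boot
BISECT_GOOD = v3.1
BISECT_BAD = v3.2
```

ktest.pl then drives the build, the reboot of the target and the console monitoring automatically, which is exactly what makes the git-bisect-by-hand workflow go away.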
Then, I went to Tim Bird’s talk about Embedded-Appropriate Crash Handling in Linux. The initial problem Tim wanted to solve is how to get and store information about applications that have crashed on devices in the field. The major issue is that to debug and understand a crash you theoretically need to keep a lot of information, but in practice you cannot, due to space constraints. Typically, post-mortem analysis of a crashed application is done by taking the core file that the kernel generates after the crash and loading it into gdb. Unfortunately, a core file is typically very large. Tim looked at the crash report mechanism of Android, and discovered that it directly registers a handler for the SIGSEGV signal (and other related signals indicating an application crash) in the dynamic library loader in Bionic. This signal handler communicates with a daemon called debuggerd over a socket, and this daemon then uses ptrace to get details about the state of the application at the moment of the crash (register values, stack contents, etc.). Tim didn’t want to require modifications at the application level or in the dynamic library loader, so instead he used the core pattern mechanism provided by the Linux kernel: by writing to some file in
/proc, you can tell the kernel to start a userspace program when an application crashes, and the kernel dumps the core file contents as the standard input of this new process. Based on debuggerd, Tim implemented such a program that also uses ptrace and
/proc to get details about the crashed application. Tim also discussed the various ways of getting a backtrace: using the frame pointer (but this is often not available, as many people use the
-fomit-frame-pointer compiler option), using the unwind tables, using a best-guess method (you just go through the stack, and everything that looks like a valid function address is assumed to be part of the call stack, so this method produces some false positives) or using some kind of ARM emulation (but I don’t recall the name of this solution at the moment). All in all, Tim’s talk was great: a good report on his experiments and good technical information about this topic.
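Two of the mechanisms described above, the piped core handler and the best-guess backtrace, can be sketched in a few lines of Python. This is only an illustration of the idea, not Tim’s actual implementation: the handler path, file names and size limit below are all made up.

```python
# Sketch of a "piped" core handler. After doing (as root):
#   echo '|/usr/local/bin/corecatch %p %s' > /proc/sys/kernel/core_pattern
# the kernel runs the given program on every crash, passing the expanded
# %-specifiers (%p = pid, %s = signal) as arguments and streaming the core
# image on the program's standard input.
import os
import time

MAX_BYTES = 64 * 1024  # keep only a slice of the core image: flash is scarce

def handle_core(core_stream, pid, sig, outdir="/var/crash"):
    """Save a truncated core image under a name recording pid and signal."""
    data = core_stream.read(MAX_BYTES)
    name = "crash-%s-sig%s-%d.core" % (pid, sig, int(time.time()))
    path = os.path.join(outdir, name)
    with open(path, "wb") as f:
        f.write(data)
    return path, len(data)

def guess_backtrace(stack_words, text_start, text_end):
    """Best-guess unwinding: every stack word that falls inside the text
    segment is assumed to be a return address, so false positives occur."""
    return [w for w in stack_words if text_start <= w < text_end]
```

The real handler would of course also capture register values and /proc details via ptrace, as debuggerd does, but the overall shape (read the core from stdin, store a compact report) is the one described in the talk.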
Everybody at Bootlin wanted to attend the “ARM Subarchitecture Status” presentation given by Arnd Bergmann, but we couldn’t, since we were responsible for recording videos of all the talks. This time, Grégory had the privilege of attending what looked like the most interesting talk of the slot. In fact, as we closely follow the ARM Linux community through the mailing lists and LWN.net, nothing in Arnd’s presentation was really new to Grégory. Nevertheless, it was good to take the time to get a status update. The interesting part for Grégory was to see how Arnd works with all the git trees coming from SoC vendors or from the community, and how he merges them together and resolves the conflicts. It is more manual than we imagined, and it is certainly a very hard job to do.
Later in the day, I went to David Anders’ talk about Board Bringup: LCD and Display Interfaces, and it was really a great talk. David explained very well the hardware signals between the LCD controller in your SoC and the LCD panel you’re using, and how those signals translate into the timing configuration that you have to set in your kernel code. He clearly explained things like the pixel clock and the vertical and horizontal sync, but also more complex things like the front porch and the back porch. He then went on to describe LVDS, which in fact is a serial protocol that uses two wires per color in differential mode to transmit the picture contents, and also talked about EDID, the data that can be read from the display device over an I2C bus to find out which display modes are available and what their timings are. He also described some of the test methods he used, from a logic analyzer up to a program called fb-test. David’s talk was really great because it provided the kind of hardware details that a low-level software engineer needs, explained in a way a software engineer can understand. After the talk, I met David and asked some more questions, which he was kind enough to answer very clearly. David’s slides are available at http://elinux.org/Elc-lcd, and you can also check out other things that David is working on at TinCanTools, such as the very nice Flyswatter JTAG debugger for ARM.
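To illustrate how those timing parameters fit together: the pixel clock is simply the total frame size (active area plus front porch, sync width and back porch, in both directions) multiplied by the refresh rate. The 800x480 panel numbers below are invented for the example, not taken from David’s slides:

```python
def pixel_clock_hz(hactive, hfp, hsync, hbp,
                   vactive, vfp, vsync, vbp, refresh_hz):
    """Pixel clock = horizontal total x vertical total x refresh rate."""
    htotal = hactive + hfp + hsync + hbp  # active + front porch + sync + back porch
    vtotal = vactive + vfp + vsync + vbp
    return htotal * vtotal * refresh_hz

# Example: a hypothetical 800x480 panel at 60 Hz
clk = pixel_clock_hz(800, 40, 48, 40, 480, 13, 3, 29, 60)
print(clk)  # 928 * 525 * 60 = 29232000, i.e. about 29.2 MHz
```

This is exactly the computation you do in reverse when bringing up a panel: the datasheet gives you a pixel clock range and porch limits, and you pick timings that land inside them.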
At the end of the day, Grégory attended the Real-Time discussion session, Maxime attended the Yocto Project discussion session and I attended the Common Clock Framework discussion session. This last session was about the work done to consolidate the multiple implementations of clock APIs that exist in the kernel: at the moment, each ARM sub-architecture re-implements its own clock framework, and the goal is to have a common clock framework in
drivers/clk/ that can be shared by all ARM sub-architectures, and potentially by other architectures as well. The discussion, led by Mike Turquette from Texas Instruments/Linaro, showed that a great deal of work has already been done, but many questions remain open. Each ARM sub-architecture has different constraints, and finding the right solution that satisfies everybody’s constraints isn’t easy.
And finally, there was the usual Technical Showcase, with demonstrations of the Pandaboard, but also of the newer BeagleBone platform, which looks really exciting. David Anders was demonstrating his LCD bring-up setup, another person was demonstrating an open-source GSM access point based on USRP, etc. Lots of interesting things to see, and lots of nice people to talk to.