Working on 3.19 – the kernel column

Jon Masters takes you through a summary of the features inside Linux 3.18 and the ongoing work for the next 3.19 series kernel

Linus Torvalds announced the release of Linux kernel version 3.18 in time for the holidays. In his mail, Linus noted that the previous RC, release candidate 7, had been “tiny” (in terms of changes and bugfixes), so it was time to get the final release out. The latest kernel includes support for storing AMD Radeon GPU buffers in regular application memory (building upon similar work done by Intel for kernel 3.16), and overlayfs (which we have covered previously), amongst a number of other less interesting new features. A full summary is provided at Kernel Newbies.

GPU buffers in userspace

Over the past few years, microprocessors have evolved into complex System-on-Chip designs in which processor cores are combined with other logic onto a single chip. We’ve seen the demise of the conventional PC “North Bridge” as it was replaced by on-chip functions, and we are now seeing the demise of South Bridges and Platform Controller Hubs (PCHs). We have also seen the integration of such features as the GPU (Graphical Processing Unit) on-chip. This has been true in the mobile space for many years and is becoming true for regular PCs, as graphics moves from being part of the external chipset to being integrated onto the same SoC as the microprocessor cores themselves.

All of this integration brings with it many benefits, including very high bandwidth access to main memory and the ability to achieve true UMA (Uniform Memory Architecture) between the microprocessors and graphics processors. In other words, the latency (time to access) and speed of access to memory is uniform for memory that can be shared between the CPU and GPU. We have already had designs for many years now in which the GPU uses regular system RAM rather than its own special memory, but those designs typically rely on the GPU having a reserved region of system RAM that is dedicated to graphics. Data is often copied to/from memory owned by the CPU or the GPU, leading to extraneous copies and potentially unnecessary overhead. Enter userptrs.

Daniel Vetter posted patches (back in 3.16) adding support for “userptrs”, which is a way of describing pointers to userspace (application) memory regions of paged (virtually addressable) memory that can be shared between the CPU and the GPU. This allows developers to “exploit UMA… in order to utilise normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset)”. As Daniel noted in his original patches, this brings many potential benefits, including “zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient”. Linux 3.18 levels the playing field between Intel- and AMD-based PCs with support added by Christian König for AMD Radeon, building upon the userptr infrastructure that previously supported only Intel i915 graphics.

Lockups in 3.18

Linus wound up releasing 3.18 in spite of a known lockup issue affecting a few developers. The issue was first reported by Dave Jones of Fedora kernel fame – he was the original Fedora kernel maintainer and is still very much involved in that and many other communities. It was then confirmed by a number of others, including Jürgen Gross. Originally, the lockups were happening in 3.18-rc4, but it was later realised that all kernels since 3.16 eventually exhibited the problem on the affected machines (though potentially only after many days). Dave began a “git bisect”: a painstaking process in which one builds a test kernel, boots it, records the good or bad result, and repeats, letting git binary-search through the intervening changes to isolate the single offending commit.
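The binary search that git bisect automates can be sketched in a few lines. This is a toy model, with commits represented as a simple ordered list and a hypothetical is_bad() test standing in for the build-boot-test cycle:

```python
# Toy model of the binary search behind "git bisect". Commits are
# ordered oldest to newest; once a bug is introduced, every later
# commit is also "bad", so we search for the first bad commit.

def find_first_bad(commits, is_bad):
    """Return the first commit for which is_bad() reports failure."""
    lo, hi = 0, len(commits) - 1    # lo side known good, hi side known bad
    while lo < hi:
        mid = (lo + hi) // 2        # "build and boot" the middle commit
        if is_bad(commits[mid]):
            hi = mid                # bug was introduced at mid or earlier
        else:
            lo = mid + 1            # bug was introduced after mid
    return commits[lo]

# Example: 16 commits, with the (hypothetical) bug introduced at commit 11.
commits = list(range(16))
print(find_first_bad(commits, lambda c: c >= 11))   # -> 11
```

Each iteration halves the suspect range, which is why even a window of thousands of commits can be narrowed down in a dozen or so test boots – the pain in Dave’s case being that each “test boot” could take days to show the bug.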

The “lockup” in this case manifests as an NMI watchdog warning with a backtrace printed by the kernel. The NMI watchdog is a feature of modern microprocessors in which the kernel arranges for special hardware to generate a periodic “Non-Maskable Interrupt”, which causes a special NMI “watchdog” routine to run that checks whether the kernel is making progress. If the kernel is stuck in some code path and not making progress, the system will appear to be “hung” or “locked up” because it will not be servicing user applications or performing the maintenance operations needed for normal system activity. The mouse will appear frozen and the system will appear unresponsive. On the console, the NMI watchdog prints a backtrace showing exactly what parts of the kernel were running, and where it was stuck.
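The progress check at the heart of the watchdog can be illustrated with a loose userspace analogy – this is not the kernel’s mechanism (the real watchdog is driven by a non-maskable hardware interrupt that fires even when a CPU has interrupts disabled, which no userspace thread can emulate), just the idea of a heartbeat being sampled:

```python
import threading
import time

# Loose userspace analogy to the NMI watchdog: a worker bumps a
# "heartbeat" counter as it makes progress; a watchdog wakes up
# periodically and checks that the counter has moved since last time.
heartbeat = 0

def worker(iterations_before_hang):
    """Make 'progress' for a while, then get stuck in a loop."""
    global heartbeat
    for _ in range(iterations_before_hang):
        heartbeat += 1
        time.sleep(0.01)
    while True:                       # simulated lockup: no more progress
        time.sleep(0.01)

def watchdog(interval, checks):
    """Sample the heartbeat periodically; flag it if it stops moving."""
    results, last = [], -1
    for _ in range(checks):
        time.sleep(interval)
        results.append("ok" if heartbeat != last else "LOCKUP")
        last = heartbeat
    return results

threading.Thread(target=worker, args=(5,), daemon=True).start()
results = watchdog(interval=0.1, checks=4)
print(results)   # first check sees progress; later checks see a stuck heartbeat
```

Where this analogy reports “LOCKUP”, the real watchdog instead prints the backtrace of whatever kernel code the stuck CPU was executing.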

Unfortunately, the particular backtraces Dave was seeing were complex. They were generated by Dave’s “trinity” code-fuzzing tool, which generates random calls into the kernel precisely to trigger such hangs and other nasties from bad and buggy kernel code, and usually manifested as a single core being stuck waiting for an IPI (Inter-Processor Interrupt) that never comes. After a while, all of the cores in the machine become “wedged”, as Thomas Gleixner described it. The problem is not fully resolved as of this writing, but Linus believes it to be a bug in the kernel’s hrtimer (High Resolution Timer) code causing one processor to enter an infinite loop under a rare and bizarre situation in which the processor’s built-in TSC (Time Stamp Counter) – known to be non-monotonic, or in other words not incrementing reliably, under certain circumstances – checks out as reliable, but the HPET (High Precision Event Timer) is actually not fully monotonic.
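“Monotonic” here means simply that successive reads of a clocksource must never go backwards – the property the timer code relies upon. The kernel performs far more elaborate sanity checks on the TSC and HPET than this, but the property itself can be sketched as a simple sampling test:

```python
import time

# What "monotonic" means for a clocksource: successive reads must never
# go backwards. A clock that fails this check can send timer code that
# assumes forward progress into pathological behaviour.

def is_monotonic(read_clock, samples=100_000):
    """Sample a clock repeatedly and verify it never runs backwards."""
    last = read_clock()
    for _ in range(samples):
        now = read_clock()
        if now < last:      # time went backwards: clock is not monotonic
            return False
        last = now
    return True

# time.monotonic() is guaranteed non-decreasing. The wall clock,
# time.time(), carries no such guarantee: it can jump backwards if the
# system time is adjusted (by NTP, for example).
print(is_monotonic(time.monotonic))   # -> True
```

Note that a clock passing this test proves nothing – non-monotonicity of the kind suspected in the 3.18 bug shows up only rarely and on specific hardware, which is exactly why the lockup took days to reproduce.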

The problem is still under investigation by some of the leading developers because it is both a rare bug, taking days to reliably test fixes, and one that affects only certain hardware. Thus far, many guesses and hypotheses have been tested, and the latest suggestions derive from Linus’ intuition after inspecting the code and asking “What could theoretically trigger this scenario?”. It will be interesting to learn what the actual cause was. In the meantime, the vast majority of users are unaffected by this rare bug and have a shiny new Linux 3.18 kernel to play with over the holidays.

Ongoing development

Alan Tull (Altera) posted a patch series entitled “FPGA Manager Framework” that aims to provide a generic, abstracted mechanism for the management and configuration of FPGA (Field Programmable Gate Array) programmable logic devices that are increasingly a part of modern system designs. FPGAs allow for field upgrades of certain hardware logic designs and are often used in high value, complex, fast-to-market solutions. A generic framework is always the way to go when faced with a zoo of different vendor options.

Joe Perches began an interesting conversation entitled “Side-effect free printk” in which he proposed a cleanup of those cases throughout the kernel where code will occasionally provide a printk-driving macro that outputs diagnostic information while also performing some other (potentially unnoticed) side effect, such as modifying the system global state. The suggestion was that Coccinelle, which is a tool capable of performing semantic analysis to determine what code is actually doing, could be modified to catch such cases. The discussion even ended in a proposal that “printk” go away and be replaced in the kernel one day with a standard “printf”.
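The hazard is easiest to see in miniature. The kernel’s diagnostic macros are C, but the following Python sketch (with a hypothetical debug_log() helper) captures the same trap: when a side effect is buried inside a diagnostic call, disabling the diagnostic – as pr_debug() is disabled in non-debug kernel builds – silently removes the side effect too, and the program behaves differently:

```python
# Analogy to a printk-style macro with a hidden side effect: if the
# diagnostic is compiled out or disabled, any side effect buried in its
# argument vanishes along with it.

DEBUG = False

def debug_log(make_message):
    if DEBUG:
        print(make_message())    # argument only evaluated when enabled

errors = 0

def record_error():
    global errors
    errors += 1                  # side effect hidden inside the "message"
    return f"error #{errors}"

debug_log(record_error)          # looks like pure logging...
print(errors)                    # ...but with DEBUG off, errors is still 0
```

A semantic-analysis tool such as Coccinelle can be taught to flag exactly this pattern – an expression with side effects passed where only a pure diagnostic was expected.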

Jike Song (Intel) announced the latest version of “XenGT”, Intel’s “Mediated Graphics Passthrough Solution”. It enables hardware-optimised/assisted virtual GPU instances to be provided to virtualised guest machines. Currently, support is only available for the Xen Hypervisor, and is presumably something that will be used by Amazon EC2.

Finally this month, Sasha Levin posted a patch enabling support for the in-development GCC 5 (GNU Compiler Collection 5) toolchain. It feels like only yesterday that we were building kernels with GCC 2.95, but that turns out to be more than a decade ago. Time flies when you’re having fun in Linux land – don’t miss our 2014 Kernel Year in Review in next month’s issue!