James Bottomley speaks

The Novell software engineer, Linux Foundation board member and kernel maintainer talks technical about Linux and more…

Just who are you?
James Bottomley, and I’m a distinguished engineer at Novell.
My primary role for the Linux kernel is to be SCSI subsystem maintainer, which means I have to run a Git tree for Linux and manage a mailing-list-based community (SCSI is also a fairly enterprise-oriented community, so if there’s tension between desktop needs and enterprise needs, it tends to be the flash point). I’m also one of the maintainers of PA-RISC (HP’s old RISC system) in the kernel, and I’ve written and maintained a few SCSI drivers.

Then, as a sideline, I do other stuff like the NCR Voyager port (a Micro Channel, non-APIC-based SMP system whose last production year was 1999). At Plumbers last year, Qualcomm gave me an Android developer phone, so I’ve also been hacking on that, although mainly user-space stuff to make it work the way I want it to.
I’m also quite heavily involved on the ‘governance’ side of Linux (if our system of management can be called that). I’m chair of the Linux Foundation technical advisory board (its main job is to act as a resource for the Linux Foundation and its members on matters of open source, and also to raise issues we have which we’d like the LF to help with). As part of that job, I’m also on the board of the Linux Foundation itself.

What are the exciting changes for .33?
I think the most exciting change is that after years of arguing, DRBD (the Distributed Replicated Block Device) finally got accepted into the mainline. It’s the foundation of a lot of high availability and disaster recovery solutions (including our own SLE HA extensions).
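
For a flavour of what that looks like in practice, here is a minimal sketch of a DRBD resource definition in drbd.conf. The hostnames, addresses and device paths are hypothetical placeholders, not details from the interview:

    resource r0 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sda7;
        meta-disk internal;

        on alpha {
            address 10.0.0.1:7789;
        }
        on beta {
            address 10.0.0.2:7789;
        }
    }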

The next new piece of technology is I/O bandwidth controllers. There were two competing ideas about how this should be done, and it took two rounds of discussion at the Linux file system and storage summit, and finally a troubleshooting session at the kernel summit, to get final agreement. (I/O controllers allow much finer-grained control over how I/O bandwidth is allocated to virtual machines.)
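
As a rough illustration of the user-space side, a management tool can bias the share of disk bandwidth a group of processes receives by writing to the cgroup filesystem. The mount point, group name and weight below are assumptions for the sketch, since they depend on how the controller is set up on a given system:

    /* Sketch: lower the proportional disk-bandwidth weight of a cgroup.
     * Assumes the blkio controller is mounted at /sys/fs/cgroup/blkio
     * and a group "vm1" has already been created with mkdir. */
    #include <stdio.h>
    #include <stdlib.h>

    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        fprintf(f, "%s\n", value);
        fclose(f);
    }

    int main(void)
    {
        /* Weights range from 100 to 1000; 250 is roughly a quarter share. */
        write_file("/sys/fs/cgroup/blkio/vm1/blkio.weight", "250");
        /* Writing "0" to tasks moves the calling process into the group. */
        write_file("/sys/fs/cgroup/blkio/vm1/tasks", "0");
        return 0;
    }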

Finally, the ftrace subsystem acquired dynamic tracing in 2.6.33. That takes it a lot closer to rivalling the functionality of Sun’s DTrace.
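
To give an idea of what dynamic tracing looks like from user space, here is a hedged sketch that registers a kprobe event through the ftrace debugfs files. The probe name is made up, and it assumes debugfs is mounted at /sys/kernel/debug on a kernel built with kprobe event support:

    /* Sketch: create and enable a dynamic kprobe event on do_sys_open. */
    #include <stdio.h>
    #include <stdlib.h>

    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        fprintf(f, "%s\n", value);
        fclose(f);
    }

    int main(void)
    {
        const char *t = "/sys/kernel/debug/tracing";
        char path[256];

        /* "p:myopen do_sys_open" places a probe at the entry of
         * do_sys_open and names the resulting event "myopen". */
        snprintf(path, sizeof(path), "%s/kprobe_events", t);
        write_file(path, "p:myopen do_sys_open");

        /* Enable the event; hits then appear in the trace buffer,
         * readable from /sys/kernel/debug/tracing/trace. */
        snprintf(path, sizeof(path), "%s/events/kprobes/myopen/enable", t);
        write_file(path, "1");
        return 0;
    }
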
I’m afraid these three are all pretty much enterprise features. The most visible change for users is probably the merging of the Nouveau driver, a reverse-engineered driver for Nvidia graphics chipsets. Hopefully it will finally allow us to end the pain a significant number of desktop users have with the Nvidia binary graphics driver.

Was there some sort of kerfuffle about the Completely Fair Scheduler slowing down certain video processes?
I assume you’re referring to the Kasper Sandberg benchmarks?

I wasn’t aware of that specific one, but I am aware that we’ve been getting a lot of latency problems with the CFS scheduler, which seem to be at the root of the benchmark problems the graph shows. Arjan van de Ven has been doing the most work on this with his LatencyTOP tool (the performance issues show up on the desktop, which is why Intel is concerned). LatencyTOP found some significant bugs in the CFS implementation in .32, which have hopefully been fixed in .33, although there will surely be more to uncover and fix.

The process scheduler is about the most complex heuristic system we have in the kernel, so completely rewriting it was bound to bring a few teething troubles (although it’s fair to say that the scheduler has always been a sore spot for a variety of people). It’s probably going to take another few kernel releases to get the rough edges smoothed off. The problem with this type of smoothing is that although the scheduler people try to test with a variety of workloads, they can’t match the diversity of real-life situations. So scheduler tuning ends up being a very public affair, because you have to do it iteratively as the results come in.

It becomes even more fraught because, once you have an optimally implemented system (as in, no bugs sucking performance), tuning is a zero-sum game: improve performance under workload X and you can end up penalising workload Y. So not everyone agrees where the optimal point in the tuning space lies. However, I know the scheduler people quite well, and I’m confident they have the ability, the tenacity and the commitment to get the tuning as close to optimal as is humanly possible.
