APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

Is there always a better way?

© September 2007 Anthony Lawrence


I happened upon a recent Linux-Watch article about Dell's Dynamic Kernel Module Support. I actually downloaded Dell's PDF paper, but the LW article is an excellent overview, as is Linux Journal's article on the same subject.
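For readers who haven't met it: DKMS keeps out-of-tree module source registered on the system and rebuilds it when the kernel changes. A minimal sketch of the register/build/install cycle, with the module name and version purely illustrative, and the dkms binary parameterised so the function can be dry-run without DKMS installed:

```shell
# Sketch of the DKMS lifecycle for an out-of-tree module. DKMS expects the
# source tree under /usr/src/<module>-<version>. ${DKMS:-dkms} lets a stub
# (e.g. echo) stand in for the real binary when experimenting.
dkms_cycle() {
    mod="$1"; ver="$2"
    "${DKMS:-dkms}" add     -m "$mod" -v "$ver" &&   # register the source tree
    "${DKMS:-dkms}" build   -m "$mod" -v "$ver" &&   # compile against the running kernel's headers
    "${DKMS:-dkms}" install -m "$mod" -v "$ver"      # install into /lib/modules/$(uname -r)
}
```

On a real system you'd run the three dkms commands directly as root; the point is that the rebuild happens per-kernel, which is exactly the gap a stable driver ABI would otherwise have to fill.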

Around the same time I noticed this article about the stable userspace driver API, which sent me around the net looking for current thinking on the whole binary vs. open source driver debate. I remembered reading Linux in a binary world... a doomsday scenario and the even older The Linux Kernel Driver Interface and was interested to revisit them. And as long as we are traveling down memory lane, let's not forget Linus Torvalds' ancient words (from https://lwn.net/1999/0211/a/lt-binary.html):

Basically, I want people to know that when they use binary-only modules, it's THEIR problem. I want people to know that in their bones, and I want it shouted out from the rooftops. I want people to wake up in a cold sweat every once in a while if they use binary-only modules.

Why? Because I'm a prick, and I want people to suffer? No.

Because I _know_ that I will eventually make changes that break modules. And I want people to expect them, and I never EVER want to see an email in my mailbox that says "Damn you, Linus, I used this binary module for over two years, and it worked perfectly across 150 kernel releases, and Linux-5.6.71 broke it, and you had better fix your kernel".

But will that always be true? There's an unstated assumption in those words that change will always be necessary. But is it?

What causes kernel change? One part is certainly changing hardware, and the other is clever programmers figuring out better ways to do things. Well, sooner or later improvements in both areas settle down: Moore's law finally runs out of room, and generations of clever programmers wring the last ounce of performance out of hardware that can't change because the laws of physics won't let it. No, of course we aren't there yet, but that day has to come, doesn't it? Sooner or later stability arrives because the envelope just can't be pushed any farther.

Well, there'll be plenty of flame wars to come before that happens, right?

Got something to add? Send me email.


1 comment


Sun Sep 30 09:24:32 2007: 3172   drag

Well, right now a lot of what causes the kernel to change is the push for increased performance, functionality, and ease of use.

One big example I like to point out is power management.

Up until _very_ recently, if you wanted to knock a good hour and a half of battery life off of your new x86 laptop, a perfect way to accomplish this was to install Linux over Windows XP. It was also a great way to get rid of any sleep or suspend capabilities and to increase the operating temperature and noise levels. Linux effectively reduced the power management capabilities of a sophisticated ACPI-enabled laptop to those of a mobile PC with a UPS.

Now Linux has very close to 'just working' suspend-to-disk and suspend-to-RAM support. On my laptop I frequently use suspend-to-disk. I just discovered a setting for a hybrid suspend, so that I can suspend to RAM but have the memory contents saved to disk, just in case I lose power completely. I don't have to worry about it much anymore.
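On kernels that support it, that hybrid mode corresponds to selecting "suspend" in /sys/power/disk before triggering a disk suspend: the image is written out, but the machine then suspends to RAM instead of powering off. A hedged sketch (the sysfs root is parameterised so the function can be exercised against a fake tree instead of real hardware, where it would need root):

```shell
# Hybrid-suspend sketch: write the memory image to disk, then suspend to RAM,
# so a total power loss still leaves a resumable image. Only works on kernels
# whose /sys/power/disk advertises the "suspend" mode.
hybrid_suspend() {
    root="${1:-/sys}"                        # parameterised for safe testing
    if grep -qw suspend "$root/power/disk" 2>/dev/null; then
        echo suspend > "$root/power/disk"    # image to disk, then suspend-to-RAM...
        echo disk    > "$root/power/state"   # ...kick off the suspend sequence
    else
        echo "hybrid suspend not supported by this kernel" >&2
        return 1
    fi
}
```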

Then Linux has dynamic tickless kernels, much lower latencies, a more efficient scheduler, transparent SMP support, and all sorts of desktop-related items.

Tickless kernels have allowed the creation of things like Intel's PowerTOP application. It's a top-like application that reports on the relative power consumption of different userland applications, based on how often they wake the processor from its deepest idle state (the lowest energy usage possible while still running; most of the CPU is shut down). This has given end users an easy way to find bugs and bad polling behavior in many applications, most of which were very quickly fixed. These improvements have literally added hours to many people's mobile devices.
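The kind of bad polling behavior PowerTOP exposes, and its fix, can be sketched even in shell (names are illustrative): the first function wakes up ten times a second whether or not anything has happened; the second sleeps in the kernel and isn't scheduled at all until the event actually arrives.

```shell
# wait_polling: the anti-pattern PowerTOP flags -- check a flag file every
# 100 ms, burning one CPU wakeup per check even when nothing has changed.
wait_polling() {
    checks=0
    while [ ! -e "$1" ]; do
        checks=$((checks + 1))
        sleep 0.1
    done
    echo "$checks needless wakeups"
}

# wait_blocking: the fix -- block on a fifo read; the kernel keeps the
# process off the CPU entirely until another process writes the event.
wait_blocking() {
    read event < "$1"
    echo "woke once, for: $event"
}
```

Multiply the polling pattern across a desktop full of applications and the processor never gets to stay in its deep idle state, which is exactly the waste PowerTOP made visible.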

A lot of this, especially suspend, has touched many Linux subsystems and has required patches and changes to many drivers.

And work is still going on. If your hardware is well supported in Linux you can use suspend most of the time. Sometimes the sound may flake out, or the module for your wifi may have to be reloaded, but this is usually easily taken care of with scripts. Not perfect, not user-friendly, but it mostly works. Much better than it did a year ago.
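The wifi-reload trick is typically just a small resume hook. A sketch, with the module name (iwl3945 here) purely an example for the hardware of the day, and the modprobe binary substitutable so the function can be dry-run without root:

```shell
# Resume-hook sketch: some wifi drivers came back from suspend confused, and
# the simplest workaround was to unload and reload the module afterwards.
# ${MODPROBE:-modprobe} defaults to the real modprobe but can be overridden
# (e.g. with echo) for a dry run.
reload_wifi() {
    mod="${1:?usage: reload_wifi <module>}"
    "${MODPROBE:-modprobe}" -r "$mod" &&    # unload the driver...
    "${MODPROBE:-modprobe}" "$mod"          # ...and load it fresh
}
```

Hooked into whatever runs after resume (pm-utils scripts, or a line in an acpid handler), this is the "not perfect, but it works" style of fix being described.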

In another year or so they should have that worked out, thanks to things like rewriting drivers to use the more capable mac80211 protocol stack, rather than each driver using its own magic mojo to control the wifi.

Right now they already have, for some hardware combos, reliable suspend that will work a hundred times between reboots. That's relatively easy.

But how about hardware that reliably suspends thousands of times between reboots? How about hardware that suspends a thousand times in a day? We will start seeing hardware that suspends between long keystrokes and just keeps the screen refreshed, with some sort of feedback loop to give the illusion that our devices are not passing out completely and continuously due to boredom.

This stuff is a continuous evolution for Linux. Every release tries new approaches and different improvements. Linux can't wait 4-5 years between major releases for old hardware and drivers to go obsolete so that it can add dramatic improvements, the way Microsoft can with Windows. Nor can it dictate hardware design and support only a select set of hardware and specific combinations, the way Apple does with OS X.

So something has to give, and the Linux devs chose to toss a stable driver ABI/API out the window. Oh well.

Things like Dell's DKMS can go a long way toward reducing the negative effects of this choice, but those effects are still there, and they are always going to have a negative impact on users' experiences.

But could you imagine, in Linux, trying to design a laptop that suspends a dozen times a day without user intervention when your SATA driver is closed source and was developed for the 2.6.2 kernel, when none of the current power management support even existed in Linux? Not good.

Maybe in the future hardware design will get 'good enough' and people will stop making these sorts of changes and improvements and concentrate on more productive things. But I don't think that's going to happen any time in the next 30 years or so.

