Process scheduling is obviously important on multi-user systems. It's also important on systems we use as individuals. For example, contrast the startup of my Mac OS X box and my wife's XP system. Both have certain programs set to start up at login, but they clearly handle the scheduling very differently. On the Mac, I can click on any program that has started and use it while the others are still starting. On XP, I can't: the other programs will command the CPU's attention until everything is running.
Process scheduling is an extremely complex subject. Algorithms can favor one type of process over another: a batch system would use a different method than one designed for interactive use, and a real-time system necessarily works very differently from one that doesn't need real-time capability. This article barely scratches the surface of all that, but will attempt to give an overview of scheduling.
On a typical OS, a process runs until it either finishes, waits for I/O or some other event (sleeping for a specific period, for example), or is interrupted by some external event. If nothing else, a timer has been set (by the kernel) giving the process a maximum time slice, and an interrupt will occur when that time slice expires (the concept of a timer is what differentiates a preemptive multi-tasking system from a "cooperative" system like Mac OS 9 or Windows 98 where a runaway process can hang the entire system).
When one of these things occurs, the kernel scheduler selects a new process to run. Exactly how it determines which process will get CPU time is where all the complexity comes in. Obviously the first thing to look at is whether the process is ready to run: if it is still voluntarily sleeping or waiting for some I/O event that hasn't completed, there is no point in scheduling it. But given two or more processes that are ready to run, how does the scheduler decide which is next?
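On Linux, ps can show which processes are runnable and which are waiting; in the STAT column, R means runnable, S means sleeping, and D means uninterruptible I/O wait:
ps -eo pid,stat,ni,comm | head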
On all Unix-like systems, including Linux, every process has a priority that varies from -20 to +19 (the upper limit is +20 on some systems). Lower numbers mean higher priority. This is also called the "nice" value because the "nice" command (and "renice" on some systems) sets the priority. If you want to decrease the priority of the Linux process "myjob" with PID 43211, you could:
renice +19 -p 43211
Or, you could have started "myjob" with low priority to begin with:
nice -n 19 myjob &
(there are related system calls, such as nice(2) and setpriority(2), that let a process set its own priority)
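To verify that a priority change took effect, ps can display the nice value in its NI column:
ps -o pid,ni,comm -p 43211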
On Windows XP, the Processes tab in the Task Manager (Ctrl-Alt-Del) lists processes. Right-click any process and you can change its priority. You can start a Windows program at a specific priority by using the START command in a batch file - see "start /?" at a command prompt for details.
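For example, to launch Notepad at low priority from a command prompt (START also accepts /belownormal, /abovenormal, /high and /realtime):
start /low notepad.exe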
Priority is seldom the only factor an OS uses for scheduling, however. Much more complexity goes into deciding which process will run next. How the scheduler makes these decisions is often referred to as the scheduling policy. For example, interactive processes (you at the keyboard) are usually given more attention than background processes. Within otherwise equal choices, a process that didn't use up its entire time slice the last time it ran will have a better chance of running sooner than one that did. The scheduler may also set the time slice for a process that it does decide to run, thereby capping the time it gets to use the CPU, and therefore being more (or less) fair to other processes still waiting to run.
Some operating systems only let you adjust priority, while others let you adjust much more of the scheduler's behavior. The dispadmin command in Solaris and UnixWare is a good example of that type of flexibility. Modern Linux versions have similar capabilities.
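For instance, dispadmin -l lists the scheduling classes configured on a Solaris system, and on Linux the chrt utility (from util-linux) can set a scheduling policy as well as a priority ("myjob" is just a placeholder here):
dispadmin -l
chrt -f 50 myjob
chrt -p 43211
The first chrt line starts myjob under the real-time SCHED_FIFO policy at priority 50; the second shows the current policy and priority of an existing process.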
---August 1, 2004
I remember that with the introduction of kernel 2.6 (and some Red Hat backports to 2.4), the scheduler gained a preemptive option. This was very nice because it makes the desktop much more responsive - makes you feel like you have a much faster computer than you did before. Applications and processes, if you average them out, probably don't get done any faster.
Now I can do stuff like renice an MPEG-4 movie being encoded, something that can take several hours of 100% CPU usage, and have it make very little noticeable impact on my desktop.
Windows has enjoyed this for a long time, which led to many people thinking that the Linux desktop was quite a bit more bloated than MS's. Simply because a lot of the snappiness wasn't there...
Not so good to run a preemptive kernel on a server, though. I'd turn that off in the kernel options when you recompile, and it will use a different scheduling setup.
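If you aren't sure whether your kernel was built with preemption, the build configuration usually shows it (assuming your distribution installs the config file under /boot):
grep PREEMPT /boot/config-$(uname -r)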
One curious thing for Linux nowadays is the Bossa project.
It's basically a custom language for creating new OS scheduling policies. An OS developer can now create custom scheduling setups with this language/framework without needing the knowledge of a full-blown kernel developer. It gets translated to C, which gets compiled with the rest of the kernel.
Overview and several sample policies can be found at:
https://www.emn.fr/x-info/bossa/
--Drag
---August 1, 2004
I think you are referring to kernel preemption - allowing a kernel process to be preempted. Supposedly Linus has been opposed to kernel preemption, which is why it wasn't in earlier releases. It is more difficult to have a preemptive kernel - deadlocks are easier, etc.
Some links:
https://kerneltrap.org/node/view/3440
https://www.linuxjournal.com/article.php?sid=5600
User preemption would never be turned off - we'd be back in Win 98 :-)
--TonyLawrence
"Not so good to run preemptive kernel in a server..."
Depends on what it is that is being preempted. Much of what is done in the kernel is not really time-sensitive, so delaying execution a few milliseconds here and there usually won't matter. However, if the current task is I/O with a device producing a sustained data stream (like a SCSI device operating in synchronous mode -- the default nowadays), you wouldn't want to preempt that too often. Otherwise, bytes might suddenly start disappearing because the kernel can't process the data as fast as the external device delivers it.
As Tony alluded to, a preemptive kernel can trip over deadlock issues (back when I first got acquainted with UNIX we called that a Mexican standoff -- not so PC nowadays) and grind to a halt. This would typically occur in some kind of I/O bound scenario, or in a shared memory access situation.
Fortunately, the kernel typically gives much higher run priority to I/O bound processes than to kernel activity where I/O is not the main event. The theory behind this has to do with the fact that there could be many processes competing for access to a particular piece of hardware, such as the hard drive. Therefore, you would want to preempt another task running in kernel mode that is not asking for hardware access, so as to complete the I/O bound process ASAP.
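Later Linux kernels even grew a separate knob for I/O priority, distinct from the CPU nice value; for example, ionice (from util-linux) can put a process into the idle I/O class so it only gets disk time when nobody else wants it:
ionice -c 3 -p 43211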
In looking at the trade-offs that exist within preemptive multitasking, it becomes quite apparent that a workstation needs very different scheduling than a server. In general, a server should be configured to favor I/O bound processing as much as possible. In fact, if a large number of users (not processes) need access to the server, reducing the maximum process time slice may produce better overall performance -- up to a point (in SCO, see the MAXSLICE tunable kernel parameter -- its default is 100 * 10 milliseconds, or one second, 10 ms being the synchronous interrupt rate that affects scheduling). Of course, if the maximum time slice is set too short, kernel overhead will increase due to the more frequent context switches that occur when a running process is preempted.
A workstation, on the other hand, would run better with scheduling favoring compute bound processes, which would make the apparent realtime performance better. There's just no one setup that can work well in all cases.
Then there's Windows... <Grin>
--BigDumbDinosaur
---August 1, 2004
The "good" side of this is that a lot of the work necessary for a good SMP kernel is the same sort of thing you need to do for a preemptive kernel.
--TonyLawrence
---August 1, 2004
BTW, speaking of reentrant: these Wiki comments are NOT. Nor are they protected by locking. If two or more people happen to be editing comments on the same page at the same time, the last person to hit "Save" wins.
I've never felt it was important enough to redo - if you are really saying something important, you should use the https://aplawrence.com/cgi-bin/auth.pl Posting tool.
I suppose someday when I have some spare time I should fix this..
--TonyLawrence
Ah so! Just as I suspected. No user conflict resolution on this site. <Grin>
Is the Wiki code written in Perl?
--BigDumbDinosaur
---August 1, 2004
It is, but the problem is the very nature of Wikis.
Since any user CAN remove or edit any previous content, how can you easily determine whether part of the comments was removed on purpose? That requires looking at what you had when you started editing vs. what the state is now vs. what you are about to submit. It's tricky - all of that information is easily available, but what to do with it is the problem.
Suppose you were adding a comment while I am typing this. You remove a large section of your comments that I am replying to. When I hit save, the code can of course see the text I added and what I originally saw, and the current state with your subtractions - but what should it do? If
A is the original state when I started
B is the state after my edits
C is the current state from various edits that completed after I started editing
What should the new state be? Diff of A and B added to C?
Maybe, but maybe not.
No easy answers..
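For what it's worth, the standard diff3 tool implements exactly that kind of three-way merge: given the common ancestor and the two divergent copies, it folds both sets of changes together and marks overlapping edits as conflicts (A, B and C here would be files holding the three states described above):
diff3 -m B A C
With -m, diff3 merges B's and C's changes relative to A and writes the result with conflict markers wherever the two sets of edits collide.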
--TonyLawrence
---August 3, 2004
Easy answer: tell people to write their long stuff out in a nice editor, then copy and paste it to the wiki.
Idea: when people hit the edit comment button, have it put a time stamp in a cookie or something in their browser, or hidden in the comment. Something somewhere that indicates what time they hit the add comment button.
So when they hit the "save" button, the wiki will check to see if the wiki entries have been changed since the user started adding their comments. If the wiki comments haven't changed since they started editing, then you accept the changes without question. However, if the wiki has been edited since then, you get an error, a warning, or some special formatting magic or something like that.
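Sketched in shell, that check might look something like this (page.txt and the saved timestamp are hypothetical stand-ins for however the wiki actually stores its pages):
stamp=$(stat -c %Y page.txt)    # record the page's modification time when editing begins
# ... the user composes their comment ...
if [ "$(stat -c %Y page.txt)" -ne "$stamp" ]; then
    echo "conflict: the page changed while you were editing" >&2
fi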
Just an idea.
--Drag
---August 3, 2004
That's what I'll do "when I get around to it", but it still doesn't answer all concerns..
--TonyLawrence
Sat Jan 7 09:34:49 2006: 1483 saadia
Hi all. Do you have any idea about installing the Bossa framework on Linux?