Blog 9

Task Scheduling in OS

When there are multiple things to do, how do you choose which one to do first? At any point in time, some threads are running on the processor, others are waiting their turn for the processor, and still other threads are blocked waiting for I/O to complete or for a condition variable to be signaled. When there is more than one runnable thread, the processor scheduling policy determines which thread to run next. You might think the answer to this question is easy: do the work in the order it arrives. After all, that seems to be the only fair thing to do. But obvious fairness is not enough, and that answer is not right! Server operating systems in particular are often overloaded. Parallel applications can create more work than there are processors, and if care isn't taken in the design of the scheduling policy, performance can badly degrade. So we need to find suitable algorithms to improve scheduling.

Different types of schedulers:

Perhaps the simplest scheduling algorithm possible is first-in-first-out (FIFO): do each task in the order in which it arrives. When we start working on a task, we keep running it until it is done. FIFO minimizes overhead, switching between tasks only when each one completes. Because it minimizes overhead, if we have a fixed number of tasks and those tasks only need the processor, FIFO will have the best throughput: it will complete the most tasks the most quickly. And as we mentioned, FIFO appears to be the definition of fairness: every task patiently waits its turn. Unfortunately, FIFO has a weakness. If a task with very little work to do happens to land in line behind a task that takes a very long time, the system will seem very inefficient. If the first task in the queue takes one second, and the next five arrive an instant later but each needs only a millisecond of the processor, then they will all need to wait until the first one finishes. The average response time will be over a second, but the optimal average response time is much less than that. In fact, if we ignore switching overhead, there are some workloads where FIFO is literally the worst possible policy for average response time.
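We can check that arithmetic with a minimal sketch. The function name and the millisecond workload are hypothetical, chosen to match the example above: one 1000 ms task arrives first, followed an instant later by five 1 ms tasks.

```python
def fifo_avg_response(burst_times_ms):
    """Run tasks in the order given; all tasks arrive at (roughly) time 0.
    Response time for each task is simply the clock when it finishes."""
    clock = 0
    total = 0
    for burst in burst_times_ms:
        clock += burst        # the task runs to completion, uninterrupted
        total += clock        # it finishes at the current clock
    return total / len(burst_times_ms)

workload = [1000, 1, 1, 1, 1, 1]            # FIFO order: the long task first
print(fifo_avg_response(workload))          # 1002.5 ms, just as the text says
print(fifo_avg_response(sorted(workload)))  # 170.0 ms with the short tasks first
```

Simply reordering the same work cuts the average response time from over a second to well under a fifth of one, which motivates the next policy.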

If FIFO can be a poor choice for average response time, is there an optimal policy for minimizing average response time? The answer is yes: schedule the shortest job first (SJF). Suppose we could know how much time each task needed at the processor (in general, we won't know). If we always schedule the task that has the least remaining work to do, that will minimize the average response time. To see that SJF is optimal, consider a hypothetical alternative policy that is not SJF, but that we think might be optimal. At some point this alternative will run a task that is longer than something else in the queue; after all, it is not SJF! If we now switch the order of those two tasks, keeping everything else the same but doing the shorter task first, we reduce the average response time. Thus, SJF must be optimal.

If a long task is the first to arrive, it will be scheduled. When a short task arrives a bit later, the scheduler will preempt the current task and start the shorter one. The remaining short tasks are processed in order of arrival, followed by finishing the long task. What counts as "shortest" is the remaining time left on the task, not its original length. If we are one nanosecond away from finishing an hour-long task, we minimize average response time by staying with that task, rather than preempting it for a minute-long task that just arrived on the ready queue.
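The preemptive variant just described is often called shortest-remaining-time-first. Here is a minimal tick-by-tick sketch of it; the function name and the `(arrival, burst)` task format are assumptions for illustration, not a real OS interface.

```python
import heapq

def srtf_avg_response(tasks):
    """Preemptive shortest-remaining-time-first.
    tasks: list of (arrival_time, burst) pairs in time units.
    Returns the average response time (completion - arrival)."""
    tasks = sorted(tasks)            # order by arrival time
    ready = []                       # min-heap keyed by remaining work
    clock = i = done = 0
    total = 0
    while done < len(tasks):
        # Admit everything that has arrived by now.
        while i < len(tasks) and tasks[i][0] <= clock:
            heapq.heappush(ready, (tasks[i][1], tasks[i][0]))
            i += 1
        if not ready:                # idle until the next arrival
            clock = tasks[i][0]
            continue
        remaining, arrival = heapq.heappop(ready)
        clock += 1                   # run the shortest task for one tick
        if remaining > 1:
            heapq.heappush(ready, (remaining - 1, arrival))
        else:
            total += clock - arrival
            done += 1
    return total / len(tasks)

# A 5-unit task arrives at t=0; a 1-unit task arrives at t=1 and preempts it.
print(srtf_avg_response([(0, 5), (1, 1)]))  # 3.5
```

Because preemption is re-evaluated every tick, a short arrival immediately displaces a long running task, exactly as the paragraph above describes.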

A policy that addresses starvation is to schedule tasks in a round-robin fashion. With Round Robin, tasks take turns running on the processor for a limited period of time. The scheduler assigns the processor to the first task and sets a timer interrupt for some delay, called the time quantum. At the end of the quantum, if the task hasn't completed, it is preempted and the processor is given to the next task in the ready queue. The preempted task is put back on the ready queue, where it waits for its next turn. With Round Robin, there is no possibility that a task will starve: it will eventually reach the front of the queue and get its time quantum.

Of course, we need to pick the time quantum carefully. One consideration is overhead: if the time quantum is too short, the processor will spend all of its time switching and get very little useful work done. But if we pick too long a time quantum, tasks will have to wait a long time until they get their turn. A good analogy for Round Robin is a particularly hyperkinetic student studying for multiple finals simultaneously. You won't get much done if you read a paragraph from one textbook, then switch to reading a paragraph from the next textbook, and then switch to yet a third textbook. But if you never switch, you may never get around to studying for some of your courses.
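The turn-taking mechanics can be sketched in a few lines. This toy version assumes all tasks arrive at time 0 and ignores context-switch overhead, which a real quantum must be long enough to amortize; the function name is made up for illustration.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts[i] is how much processor time task i needs.
    Returns each task's completion time under Round Robin."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))    # ready queue of task indices
    clock = 0
    completion = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i]) # run until the quantum or the task ends
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)              # preempted: back of the line
        else:
            completion[i] = clock
    return completion

print(round_robin([3, 1], quantum=2))    # [4, 3]
```

Notice that no index can sit in the queue forever: every pass through the loop either finishes a task or moves it one turn closer to the front, which is exactly the starvation-freedom argument above.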

In many settings, a fair allocation of resources is as important to the design of a scheduler as responsiveness and low overhead. On a multi-user machine or on a server, we do not want a single user to be able to monopolize the resources of the machine, degrading service for other users. While it might seem that fairness has little value on single-user machines, individual applications are often written by different companies, each with an interest in making its application's performance look good, even if that comes at the cost of degrading responsiveness for other applications. Further, some applications may run inside a single process, while others may create many processes, and each process may involve multiple threads. Round robin among threads can lead to starvation if applications with only a single thread are competing with applications with hundreds of threads. We can be concerned with fair allocation at any of these levels of granularity: threads within a process, processes for a particular user, and users sharing a physical machine. For simplicity, our discussion will assume we are interested in providing fairness among processes; the same principles apply if the unit receiving resources is the user, application, or thread.

Fairness is easy if all processes are compute-bound: Round Robin will give each process an equal portion of the processor. In practice, however, different processes consume resources at different rates. An I/O-bound process may need only a small portion of the processor, while a compute-bound process is willing to consume all available processor time. What is a fair allocation when there is a diversity of needs?

One possible answer is to say that whatever Round Robin does is fair; after all, each process gets an equal chance at the processor. As we saw above, however, Round Robin can result in I/O-bound processes running at a much slower rate than they would if they had the processor to themselves, while compute-bound processes are barely affected at all. That hardly seems fair!
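A better answer is max-min fairness, which the MFQ goals below refer to: fully satisfy the smallest demands first, then split whatever remains evenly among those still asking for more. A minimal sketch, with a made-up function name and an illustrative workload of one I/O-bound and two compute-bound processes:

```python
def max_min_fair(demands, capacity):
    """Max-min fair share of a resource.
    demands: {name: fraction of the processor the process would use}.
    Satisfy the smallest demands fully; split the rest evenly."""
    allocation = {}
    pending = sorted(demands.items(), key=lambda kv: kv[1])
    for k, (name, demand) in enumerate(pending):
        share = capacity / (len(pending) - k)  # equal split of what's left
        give = min(demand, share)              # never give more than asked
        allocation[name] = give
        capacity -= give
    return allocation

# An I/O-bound process wants 20% of the CPU; two compute-bound ones want it all.
print(max_min_fair({"io": 0.2, "cpu1": 1.0, "cpu2": 1.0}, capacity=1.0))
# io gets its full 0.2; cpu1 and cpu2 split the remaining 0.8 evenly, 0.4 each
```

Under this policy the I/O-bound process runs as if it had the processor to itself, while the compute-bound processes share the slack equally, which matches the intuition that Round Robin alone fails to deliver.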

Most commercial operating systems, like Windows, macOS, and Linux, use a scheduling algorithm called Multi-level Feedback Queue (MFQ). MFQ is designed to achieve several simultaneous goals:

Responsiveness. Run short tasks quickly, as in SJF.

Low Overhead. Minimize the number of preemptions, as in FIFO, and minimize the time spent making scheduling decisions.

Starvation-Freedom. All tasks should make progress, as in Round Robin.

Background Tasks. Defer system maintenance tasks, such as disk defragmentation, so they do not interfere with user work.

Fairness. Assign (non-background) processes approximately their max-min fair share of the processor.
MFQ is an extension of Round Robin. Instead of only a single queue, MFQ has multiple Round Robin queues, each with a different priority level and time quantum. Tasks at a higher priority level preempt lower-priority tasks, while tasks at the same level are scheduled in Round Robin fashion. Further, higher priority levels have shorter time quanta than lower levels. Tasks are moved between priority levels to favor short tasks over long ones. A new task enters at the top priority level. Every time the task uses up its time quantum, it drops a level; every time the task yields the processor because it is waiting on I/O, it stays at the same level (or is bumped up a level); and if the task completes, it leaves the system. A new compute-bound task will start at a high priority, but it will quickly exhaust its time quantum and fall to the next lower priority, and then the next. Thus, an I/O-bound task needing only a modest amount of computing will almost always be scheduled quickly, keeping the disk busy, while compute-bound tasks run with a long time quantum to minimize switching overhead while still sharing the processor.
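Those level-movement rules can be captured in a small sketch. This is an illustrative toy, not any real kernel's scheduler: the class name, the three levels, and the quantum values are all assumptions.

```python
class MFQScheduler:
    """Toy multi-level feedback queue with three levels.
    Level 0 is the highest priority and has the shortest quantum."""
    QUANTA_MS = [10, 40, 160]            # hypothetical per-level time quanta

    def __init__(self):
        self.queues = [[], [], []]       # one Round Robin queue per level

    def admit(self, task):
        self.queues[0].append(task)      # new tasks enter the top level

    def pick_next(self):
        # Scan from highest priority down; Round Robin within a level.
        for level, queue in enumerate(self.queues):
            if queue:
                return level, queue.pop(0)
        return None, None                # nothing runnable

    def on_quantum_expired(self, level, task):
        # Used its whole quantum: likely compute-bound, so demote it.
        self.queues[min(level + 1, len(self.queues) - 1)].append(task)

    def on_io_block(self, level, task):
        # Yielded early for I/O: keep it at the same level (or promote it).
        self.queues[max(level - 1, 0)].append(task)
```

For example, a task admitted at level 0 that burns through its quantum twice ends up at level 2, where it runs with the long 160 ms quantum, while a task that keeps blocking on I/O stays near level 0 and is scheduled almost immediately.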
So far, the algorithm we've described does not achieve starvation freedom or max-min fairness when running a mixture of I/O-bound and compute-bound tasks. If there are too many I/O-bound tasks, the compute-bound tasks may receive no time on the processor. To combat this, the MFQ scheduler monitors every process to ensure it is receiving its fair share of the resources. At each level, Linux actually maintains two queues: tasks whose processes have already received their fair share are only scheduled if all other processes at that level have also received their fair share. Periodically, any process receiving less than its fair share has its tasks increased in priority; equally, tasks that receive more than their fair share can be reduced in priority. Adjusting priority also addresses strategic behavior. From a purely selfish point of view, a task could attempt to keep its priority high by issuing a short I/O request immediately before its time quantum expires. Eventually, the system will detect this and reduce its priority to its fair-share level.

Scheduling is a big topic with a lot of concepts behind it, but at the end of this article I will put some resources that will help you get more knowledge, and we can talk about further resources another day.
