Operating System Concepts, 7th Edition: Answers
B-trees: This is a technique for organizing indexes into files and databases that is commonly used in OS file systems, including those supported by Mac OS X, Windows, and several Linux file systems. B-trees are now covered in the text. Student study aids: Each chapter now begins with a list of learning objectives. In addition, a chapter-by-chapter set of review outlines highlights key concepts that the student should concentrate on in each chapter.
See later in this Preface for details. Sample Syllabus: The text contains more material than can be conveniently covered in one semester. Accordingly, instructors are provided with several sample syllabi that guide the use of the text within a limited amount of time. These samples are based on the real-world experience of professors with the sixth edition.
Unfortunately, it is difficult to debug problems within the kernel due to the fluidity of the code. Also, such compilation is system specific, making Synthesis difficult to port: a new compiler must be written for each architecture. This program works by first prompting the user for the names of the source and destination files.
Be sure to include all necessary error checking, including ensuring that the source file exists. Once you have correctly designed and tested the program, if you are using a system that supports it, run the program using a utility that traces system calls. Linux systems provide the strace utility, and Solaris systems use the truss or dtrace command. On Mac OS X, the ktrace facility provides similar functionality. Answer: Please refer to the supporting Web site for the solution.
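A minimal sketch of such a copy program, assuming POSIX open/read/write; the buffer size and the 0644 file mode are illustrative choices, not from the text:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char src[256], dst[256];
    char buf[4096];
    ssize_t n;

    printf("Source file: ");
    if (scanf("%255s", src) != 1) exit(1);
    printf("Destination file: ");
    if (scanf("%255s", dst) != 1) exit(1);

    int in = open(src, O_RDONLY);
    if (in < 0) { perror("source"); exit(1); }   /* source must exist */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("destination"); exit(1); }

    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, n) != n) { perror("write"); exit(1); }

    close(in);
    close(out);
    return 0;
}

Running the compiled program under one of the tracing utilities mentioned above shows the sequence of open, read, write, and close system calls issued on each file.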
A process is a program in execution and is the unit of work in a modern time-sharing system. Such a system consists of a collection of processes: Operating-system processes executing system code and user processes executing user code.
By switching the CPU between processes, the operating system can make the computer more productive. We also introduce the notion of a thread (lightweight process) and interprocess communication (IPC).
Threads are discussed in more detail in Chapter 4. Exercises 3. A swapping scheme is implemented to remove partially run programs from memory and reinstate them later to continue where they left off. The primary difference between the two schedulers lies in the frequency of their execution. The short-term scheduler must select a new process quite often. The long-term scheduler is used much less often, since it handles placing jobs in the system and may wait a while for a job to finish before it admits another one.
Answer: In general, the operating system must save the state of the currently running process and restore the state of the process scheduled to be run next. Saving the state of a process typically includes the values of all the CPU registers in addition to memory allocation. Context switches must also perform many architecture-specific operations, including flushing data and instruction caches.
Describe the undesirable circumstances that could arise from not enforcing either the "at most once" or "exactly once" semantics. Describe possible uses for a mechanism that had neither of these guarantees. Consider a remote procedure that withdraws money from a bank account on a system that does not support these semantics. It is possible that a single invocation of the remote procedure might lead to multiple withdrawals on the server.
For a system to support either of these semantics generally requires that the server maintain some form of client state, such as the timestamp described in the text. If a system were unable to support either of these semantics, then such a system could safely provide only remote procedures that do not alter data or provide time-sensitive results. An inquiry into an account balance or other account information, such as name, address, etc., could be safely supported under these circumstances. Answer: Please refer to the supporting Web site for the source code solution.
a. Symmetric and asymmetric communication b. Automatic and explicit buffering c. Send by copy and send by reference d. Fixed-sized and variable-sized messages Answer: a. Symmetric and asymmetric communication - A benefit of symmetric communication is that it allows a rendezvous between the sender and receiver.
As a result, message-passing systems often provide both forms of synchronization. b. Automatic and explicit buffering - Automatic buffering provides a queue with indefinite length, thus ensuring the sender will never have to block while waiting to copy a message.
There are no specifications of how automatic buffering will be provided; one scheme may reserve sufficiently large memory, where much of the memory is wasted.
Explicit buffering specifies how large the buffer is. In this situation, the sender may be blocked while waiting for available space in the queue. However, it is less likely that memory will be wasted with explicit buffering. c. Send by copy and send by reference - Send by copy does not allow the receiver to alter the state of the parameter; send by reference does allow it.
A benefit of send by reference is that it allows the programmer to write a distributed version of a centralized application.
d. Fixed-sized and variable-sized messages - The implications of this are mostly related to buffering issues; with fixed-size messages, a buffer with a specific size can hold a known number of messages. The number of variable-sized messages that can be held by such a buffer is unknown.
The length of the sequence will be provided on the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process.
Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence. Have the parent invoke the wait call to wait for the child process to complete before exiting the program.
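A minimal sketch of the fork-based design just described, assuming POSIX fork/wait; the iterative Fibonacci computation is an illustrative choice:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2 || atoi(argv[1]) < 0) {      /* non-negative check */
        fprintf(stderr, "usage: %s <non-negative n>\n", argv[0]);
        return 1;
    }
    int n = atoi(argv[1]);

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                            /* child prints the sequence */
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            printf("%ld ", a);
            long next = a + b; a = b; b = next;
        }
        printf("\n");
        exit(0);
    }
    wait(NULL);                                /* parent waits, then exits */
    return 0;
}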
Perform necessary error checking to ensure that a non-negative number is passed on the command line. It is this separate program that will run as a child process, outputting the Fibonacci sequence. Perform necessary error checking to ensure that a non-negative number is passed on the command line. Allow the fortunes to contain multiple lines. The date client shown in Figure 3. For example, if a client sends the server the string Hello there!, the server will respond with Hello there!
This server will wait for a client connection using the accept method. The server will break out of the loop only when it has determined that the client has closed the connection. The date server shown in Figure 3. uses the java.io.BufferedReader class. BufferedReader extends the java.io.Reader class, which is used for reading character streams. However, the echo server cannot guarantee that it will read only characters from clients; it may receive binary data as well. The class java.io.InputStream deals with data at the byte level rather than the character level.
Thus, this echo server must use an object that extends java.io.InputStream. The read method in the java.io.InputStream class returns -1 when the client has closed its end of the socket connection. Another approach to designing this program is to establish a shared-memory segment between the parent and child processes. This technique allows the child to write the contents of the Fibonacci sequence to the shared-memory segment and have the parent output the sequence when the child completes.
Because the memory is shared, any changes the child makes to the shared memory will be reflected in the parent process as well. The program first requires creating the data structure for the shared-memory segment; this is most easily accomplished using a struct. The parent proceeds through the following steps:
1. Create a shared-memory segment the size of the shared data structure.
2. Attach the shared-memory segment to its address space.
3. Set the value of sequence size to the parameter on the command line.
4. Fork the child process and invoke the wait system call to wait for the child to finish.
5. Output the value of the Fibonacci sequence in the shared-memory segment.
6. Detach and remove the shared-memory segment.
The child process will write the Fibonacci sequence to shared memory and finally will detach the segment. One issue of concern with cooperating processes involves synchronization.
In this exercise, the parent and child processes must be synchronized so that the parent does not output the Fibonacci sequence until the child finishes generating the sequence.
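Putting these steps together, a minimal sketch, assuming System V shared memory (shmget/shmat/shmdt); the MAX_SEQUENCE bound is illustrative rather than taken from the text:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAX_SEQUENCE 10              /* illustrative bound */

typedef struct {
    long fib[MAX_SEQUENCE];
    int sequence_size;
} shared_data;

int main(int argc, char *argv[]) {
    if (argc != 2 || atoi(argv[1]) < 0 || atoi(argv[1]) > MAX_SEQUENCE) {
        fprintf(stderr, "usage: %s <0..%d>\n", argv[0], MAX_SEQUENCE);
        return 1;
    }

    /* steps 1-2: create the segment and attach it */
    int id = shmget(IPC_PRIVATE, sizeof(shared_data), IPC_CREAT | 0666);
    shared_data *shm = (shared_data *)shmat(id, NULL, 0);

    /* step 3: record how many numbers to generate */
    shm->sequence_size = atoi(argv[1]);

    /* step 4: fork; the child fills the segment, then detaches */
    if (fork() == 0) {
        long a = 0, b = 1;
        for (int i = 0; i < shm->sequence_size; i++) {
            shm->fib[i] = a;
            long next = a + b; a = b; b = next;
        }
        shmdt(shm);
        exit(0);
    }
    wait(NULL);                      /* parent waits for the child */

    /* steps 5-6: print the sequence, then detach and remove the segment */
    for (int i = 0; i < shm->sequence_size; i++)
        printf("%ld ", shm->fib[i]);
    printf("\n");
    shmdt(shm);
    shmctl(id, IPC_RMID, NULL);
    return 0;
}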
These two processes will be synchronized using the wait system call; the parent process will invoke wait, which will cause it to be suspended until the child process exits. On UNIX systems, the ipcs command lists the status of various POSIX interprocess communication mechanisms, including shared-memory segments.
Permissions are identified according to a mode field (for example, read permission of owner). Shared-memory segments can be identified according to a user-specified key or according to the integer value returned from the shmget system call, which represents the integer identifier of the shared-memory segment created. Write a C program that is passed an identifier for a shared-memory segment. This program will invoke the shmctl function to obtain its shmid_ds structure.
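A minimal sketch of that program, assuming System V shmctl with IPC_STAT; the fields printed are a representative subset of struct shmid_ds:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <segment id>\n", argv[0]);
        return 1;
    }
    int id = atoi(argv[1]);
    struct shmid_ds ds;

    /* IPC_STAT copies the segment's bookkeeping data into ds */
    if (shmctl(id, IPC_STAT, &ds) < 0) { perror("shmctl"); return 1; }

    printf("segment size: %zu bytes\n", (size_t)ds.shm_segsz);
    printf("attach count: %lu\n", (unsigned long)ds.shm_nattch);
    printf("owner uid:    %d\n", (int)ds.shm_perm.uid);
    return 0;
}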
Many modern operating systems now provide features for a process to contain multiple threads of control. This chapter introduces many concepts associated with multithreaded computer systems and covers how to use Java to create and manipulate threads. We have found it especially useful to discuss how a Java thread maps to the thread model of the host operating system.
Exercises 4. An example of this is a program that calculates an individual tax return. Such a program must closely monitor its own working space, such as open files, environment variables, and current working directory.
In general, context switching between user threads involves taking a user thread off its LWP and replacing it with another thread.
This act typically involves saving and restoring the state of the registers. A single-threaded process, on the other hand, will not be capable of performing useful work when a page fault takes place. Therefore, in scenarios where a program might suffer from frequent page faults or has to wait for other system events, a multi-threaded solution would perform better even on a single-processor system.
a. Register values b. Heap memory c. Global variables d. Stack memory Answer: The threads of a multithreaded process share heap memory and global variables. Each thread has its separate set of register values and a separate stack. Answer: A multithreaded system comprising multiple user-level threads cannot make use of the different processors in a multiprocessor system simultaneously. The operating system sees only a single process and will not schedule the different threads of the process on separate processors.
Consequently, there is no performance benefit associated with executing multiple user-level threads on a multiprocessor system. Instead, Linux treats both in the same way, allowing a task to be more akin to a process or a thread depending on the set of flags passed to the clone system call. However, many operating systems—such as Windows XP and Solaris—treat processes and threads differently.
Typically, such systems use a notation wherein the data structure for a process contains pointers to the separate threads belonging to the process. Contrast these two approaches for modeling processes and threads within the kernel. Answer: On one hand, in systems where processes and threads are considered as similar entities, some of the operating system code could be simplified.
A scheduler, for instance, can consider the different processes and threads on an equal footing, without requiring special code to examine the threads associated with a process during every scheduling step.
On the other hand, this uniformity could make it harder to impose process-wide resource constraints in a direct manner.
Instead, some extra complexity is required to identify which threads correspond to which process and perform the relevant accounting tasks. Let the number of user-level threads in the program be more than the number of processors in the system.
Discuss the performance implications of the following scenarios. a. The number of kernel threads allocated to the program is less than the number of processors.
b. The number of kernel threads allocated to the program is equal to the number of processors. c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads. Answer: When the number of kernel threads is less than the number of processors, then some of the processors would remain idle, since the scheduler maps only kernel threads to processors and not user-level threads to processors.
When the number of kernel threads is exactly equal to the number of processors, then it is possible that all of the processors might be utilized simultaneously.
However, when a kernel thread blocks inside the kernel (due to a page fault or while invoking system calls), the corresponding processor would remain idle. When there are more kernel threads than processors, a blocked kernel thread could be swapped out in favor of another kernel thread that is ready to execute, thereby increasing the utilization of the multiprocessor system.
This program should work as follows: The user will run the program and will enter a number on the command line. The program will then create a separate thread that outputs all the prime numbers less than or equal to the number entered by the user. This program should work as follows: The user will enter on the command line the number of Fibonacci numbers that the program is to generate.
When the thread finishes execution, the parent thread will output the sequence generated by the child thread. Because the parent thread cannot begin outputting the Fibonacci sequence until the child thread finishes, this will require having the parent thread wait for the child thread to finish, using the techniques described in Section 4. Answer: Please refer to the supporting Web site for the source code solution. However, this server is single-threaded, meaning the server cannot respond to concurrent echo clients until the current client exits.
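As an illustration of the create-then-join structure these exercises call for, here is a hedged sketch of the prime-number program using POSIX threads; the Fibonacci variant has the same shape, with the child writing into a shared array before the join:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *print_primes(void *arg) {
    int limit = *(int *)arg;
    for (int n = 2; n <= limit; n++) {
        int prime = 1;
        for (int d = 2; d * d <= n; d++)       /* trial division */
            if (n % d == 0) { prime = 0; break; }
        if (prime) printf("%d ", n);
    }
    printf("\n");
    return NULL;
}

int main(int argc, char *argv[]) {
    if (argc != 2 || atoi(argv[1]) < 0) {
        fprintf(stderr, "usage: %s <non-negative n>\n", argv[0]);
        return 1;
    }
    int limit = atoi(argv[1]);

    pthread_t tid;
    pthread_create(&tid, NULL, print_primes, &limit);
    pthread_join(tid, NULL);          /* parent waits for the worker */
    return 0;
}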
Modify the solution to Exercise 3. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce the basic scheduling concepts and discuss CPU scheduling at great length. This is their first exposure to the idea of resource allocation and scheduling, so it is important that they understand how it is done. Gantt charts, simulations, and play acting are valuable ways to get the ideas across.
Show how the ideas are used in other situations, like waiting in line at a post office, a waiter time-sharing between customers, even classes being an interleaved Round-Robin scheduling of professors. A simple project is to write several different CPU schedulers and compare their performance by simulation. The instructor can make the trace tape up in advance to provide the same data for all students.
The first line of a job was the word JOB and the job number. The job was terminated by an END line with the job number again. Round-Robin is more difficult, since it requires putting unfinished requests back in the ready queue. Exercises 5. Such programs typically do not use up their entire CPU quantum. a. CPU utilization and response time: context-switching overheads could be lowered by performing context switches infrequently. This could, however, result in increased response time for processes. b. Average turnaround time and maximum waiting time: average turnaround time is minimized by executing the shortest tasks first.
Such a scheduling policy could, however, starve long-running tasks and thereby increase their waiting time. What are the implications of assigning the following values to the parameters used by the exponential-average algorithm?
Consequently, the scheduling algorithm is almost memory-less, and simply predicts the length of the previous burst for the next quantum of CPU execution. What is the turnaround time of each process for each of the scheduling algorithms in part a? What is the waiting time of each process for each of the scheduling algorithms in part a? Which of the schedules in part a results in the minimal average waiting time over all processes?
a. The four Gantt charts are omitted here. b. Shortest Job First. 5. a. First-come, first-served b. Shortest job first c. Priority Answer: Shortest job first and priority-based scheduling algorithms could result in starvation. What would be the effect of putting two pointers to the same process in the ready queue? What would be the major advantages and disadvantages of this scheme? How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers? In effect, that process will have increased its priority, since by getting time more often it is receiving preferential treatment.
The advantage is that more important jobs could be given more time, in other words, higher priority in treatment. The consequence, of course, is that shorter jobs will suffer. Allot a longer amount of time to processes deserving higher priority. In other words, have two or more quanta possible in the Round-Robin scheme. Also assume that the context switching overhead is 0. What is the CPU utilization for a round-robin scheduler when: a. The time quantum is 1 millisecond b.
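As a rough sketch of the arithmetic, assume for illustration a per-switch overhead of 0.1 millisecond: with quantum q and overhead s, each dispatch buys q milliseconds of useful work for a cost of q + s, so CPU utilization is q/(q + s). Part a then gives 1/1.1, or roughly 91 percent, and a 10-millisecond quantum would give 10/10.1, or roughly 99 percent, ignoring tasks that block before exhausting their quantum.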
Answer: The program could maximize the CPU time allocated to it by not fully utilizing its time quanta. It could use a large fraction of its assigned quantum, but relinquish the CPU before the end of the quantum, thereby increasing the priority associated with the process.
Larger priority numbers imply higher priority. All processes are given a priority of 0 when they enter the ready queue. a. FCFS b. LIFO c. Multilevel feedback queues Answer: a. FCFS discriminates against short jobs, since any short jobs arriving after long jobs will have a longer waiting time.
b. RR treats all jobs equally, giving them equal bursts of CPU time, so short jobs will be able to leave the system faster since they will finish first. c. Multilevel feedback queues work similarly to the RR algorithm; they discriminate favorably toward short jobs.
What is the time quantum in milliseconds for a thread with priority 10? With priority 55? Assume a thread with priority 35 has used its entire time quantum without blocking. What new priority will the scheduler assign this thread? What will be the new priorities for these three processes when priorities are recalculated? Answer: The priorities assigned to the processes are 80, 69, and 65 respectively.
The scheduler lowers the relative priority of CPU-bound processes. An understanding of these problems and their solutions is part of current operating-system theory and development.
We first use semaphores and monitors to introduce synchronization techniques. Next, Java synchronization is introduced to further demonstrate a language-based synchronization technique.
We conclude with a discussion of how contemporary operating systems provide features for process synchro- nization and thread safety. Exercises 6. Prove that the algorithm satisfies all three requirements for the critical-section problem. Answer: This algorithm satisfies the three conditions of mutual ex- clusion. If both processes set their flag to true, only one will succeed. Namely, the process whose turn it is.
The waiting process can only enter its critical section when the other process updates the value of turn. This algorithm does not provide strict alternation: it sets turn to the value of the other process only upon exiting its critical section. If this process wishes to enter its critical section again, before the other process does, it repeats the process of entering its critical section and setting turn to the other process upon exiting.
Bounded waiting is also preserved. Assume two processes wish to enter their respective critical sections. They both set their value of flag to true; however, only the thread whose turn it is can proceed, and the other thread waits. If bounded waiting were not preserved, it would be possible for the waiting process to wait indefinitely while the first process repeatedly entered and exited its critical section.
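The exercise's figure is not reproduced above, but the behavior described (two flags plus a turn variable that is handed to the other process only on exit, with no strict alternation) matches Dekker's algorithm; a sketch under that assumption, with the memory barriers a real implementation would need omitted:

volatile int flag[2] = {0, 0};   /* flag[i]: process i wants to enter */
volatile int turn = 0;           /* which process yields when both want in */

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = 1;
    while (flag[j]) {            /* the other process also wants in */
        if (turn == j) {         /* and it has the turn: back off */
            flag[i] = 0;
            while (turn == j)
                ;                /* wait until j exits and hands over turn */
            flag[i] = 1;
        }
    }
}

void exit_critical_section(int i) {
    turn = 1 - i;                /* hand the turn to the other process */
    flag[i] = 0;
}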
The structure of process Pi is shown in Figure 6. Answer: This algorithm satisfies the three conditions. Before we show that the three conditions are satisfied, we give a brief explanation of what the algorithm does to ensure mutual exclusion. When a process i requires access to its critical section, it first sets its flag variable to want_in to indicate its desire.
It then performs the following steps: (1) it ensures that all processes whose index lies between turn and i are idle. Since the process sets its own flag variable to in_cs before checking the status of other processes, we are guaranteed that no two processes will enter the critical section simultaneously. When this happens, all processes realize that there are competing processes, enter the next iteration of the outer while (1) loop, and reset their flag variables to want_in.
Now the only process that will set its flag variable to in_cs is the process whose index is closest to turn. It is, however, possible that new processes whose index values are even closer to turn might decide to enter the critical section at this point, and might therefore be able to simultaneously set their flag to in_cs.
These processes would then realize there are competing processes and might restart the process of entering the critical section. However, at each iteration, the index values of processes that set their flag variables to in_cs become closer to turn, and eventually we reach the following condition: only one process (say k) sets its flag to in_cs, and no other process whose index lies between turn and k has set its flag to in_cs.
This process then gets to enter the critical section. Therefore, any process whose index does not lie between turn and k cannot enter the critical section. In the meantime, all processes whose index falls between turn and k and desire to enter the critical section would indeed enter the critical section due to the fact that the system always makes progress and the turn value monotonically becomes closer to k. Eventually, either turn becomes k or there are no processes whose index values lie between turn and k, and therefore process k gets to enter the critical section.
What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Answer: Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop without relinquishing the processor.
Alternatively, a process could wait by relinquishing the processor, blocking on a condition, and waiting to be awakened at some appropriate time in the future. Busy waiting can be avoided, but doing so incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached. Answer: Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock could be obtained only by executing a different process.
If the process is not relinquishing the processor, other processes do not get the opportunity to set the program condition required for the first process to make progress. In a multiprocessor system, other processes execute on other processors and thereby modify the program state in order to release the first process from the spinlock. Answer: If a user-level program is given the ability to disable interrupts, then it can disable the timer interrupt and prevent context switching from taking place, thereby allowing it to use the processor without letting other processes execute.
Answer: Interrupts are not sufficient in multiprocessor systems since disabling interrupts only prevents other processes from executing on the processor in which interrupts were disabled; there are no limitations on what processes could be executing on other processors and therefore the process disabling interrupts cannot guarantee mutually exclusive access to program state. For example, a server may wish to have only N socket connections at any point in time.
As soon as N connections are made, the server will not accept another incoming connection until an existing connection is released. Explain how semaphores can be used by a server to limit the number of concurrent connections. Answer: A semaphore is initialized to the number of allowable open socket connections.
When a connection is accepted, the acquire method is called; when a connection is released, the release method is called. Answer: A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore when its value is 1, and the two operations are not performed atomically, then it is possible that both operations might proceed to decrement the semaphore value, thereby violating mutual exclusion.
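A minimal sketch of the connection-limiting scheme described above, assuming POSIX counting semaphores; N and the function names are illustrative:

#include <semaphore.h>

#define N 16                       /* illustrative connection limit */

sem_t connections;

void server_init(void) {
    sem_init(&connections, 0, N);  /* counting semaphore starts at N */
}

void on_accept(void) {
    sem_wait(&connections);        /* blocks when N connections are open */
    /* ... handle the new connection ... */
}

void on_close(void) {
    sem_post(&connections);        /* a slot is free; a waiter may proceed */
}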
The solution should exhibit minimal busy waiting. A barbershop consists of a waiting room with n chairs and a barber room with one barber chair. If there are no customers to be served, the barber goes to sleep.
If a customer enters the barbershop and all chairs are occupied, then the customer leaves the shop. If the barber is busy but chairs are available, then the customer sits in one of the free chairs.
If the barber is asleep, the customer wakes up the barber. Write a program to coordinate the barber and the customers.
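A hedged sketch of the coordination, assuming POSIX threads and semaphores; the chair count and helper names are illustrative:

#include <pthread.h>
#include <semaphore.h>

#define N_CHAIRS 5                 /* illustrative waiting-room size */

sem_t customers;                   /* waiting customers; barber sleeps on 0 */
sem_t barber_ready;                /* barber is ready to cut hair */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int waiting = 0;                   /* customers seated in the waiting room */

void shop_init(void) {
    sem_init(&customers, 0, 0);
    sem_init(&barber_ready, 0, 0);
}

void *barber(void *arg) {
    for (;;) {
        sem_wait(&customers);      /* sleep until a customer arrives */
        pthread_mutex_lock(&lock);
        waiting--;                 /* take one customer from a chair */
        pthread_mutex_unlock(&lock);
        sem_post(&barber_ready);   /* invite the customer to the chair */
        /* ... cut hair ... */
    }
    return NULL;
}

void customer(void) {
    pthread_mutex_lock(&lock);
    if (waiting == N_CHAIRS) {     /* all chairs occupied: leave the shop */
        pthread_mutex_unlock(&lock);
        return;
    }
    waiting++;
    pthread_mutex_unlock(&lock);
    sem_post(&customers);          /* wake the barber if asleep */
    sem_wait(&barber_ready);       /* wait until the barber is free */
    /* ... get haircut ... */
}

The mutex protects the waiting counter, while the two semaphores put the barber to sleep when no customers are present and make arriving customers wait until the barber is free, so busy waiting is avoided.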
Each condition variable is represented by a queue of threads waiting for the condition. Each thread has a semaphore associated with its queue entry. When a thread performs a wait operation, it creates a new semaphore (initialized to zero), appends the semaphore to the queue associated with the condition variable, and performs a blocking semaphore decrement operation on the newly created semaphore.
When a thread performs a signal on a condition variable, the first process in the queue is awakened by performing an increment on the corresponding semaphore. Explain why this is true. Design a new scheme that is suitable for larger portions. These copy operations could be expensive if one were using large extents of memory for each buffer region.
The increased cost of copy operation means that the monitor is held for a longer period of time while a process is in the produce or consume operation. This decreases the overall throughput of the system. This problem could be alleviated by storing pointers to buffer regions within the monitor instead of storing the buffer regions themselves. This operation should be relatively inexpensive and therefore the period of time that the monitor is being held will be much shorter, thereby increasing the throughput of the monitor.
Propose a method for solving the readers-writers problem without causing starvation. Answer: Throughput in the readers-writers problem is increased by favoring multiple readers as opposed to allowing a single writer to exclusively access the shared values. On the other hand, favoring readers could result in starvation for writers. The starvation in the readers-writers problem could be avoided by keeping timestamps associated with waiting processes.
When a writer is finished with its task, it would wake up the process that has been waiting for the longest duration. When a reader arrives and notices that another reader is accessing the database, then it would enter the critical section only if there are no waiting writers.
These restrictions would guarantee fairness. Answer: The signal operations associated with monitors are not persistent in the following sense: if a signal is performed and there are no waiting threads, then the signal is simply ignored and the system does not remember that the signal took place. If a subsequent wait operation is performed, then the corresponding thread simply blocks.
A future wait operation would immediately succeed because of the earlier increment. Suggest how the implementation described in Section 6. Answer: If the signal operation were the last statement, then the lock could be transferred from the signalling process to the process that is the recipient of the signal.
Otherwise, the signalling process would have to explicitly release the lock and the recipient of the signal would have to compete with all other processes to obtain the lock to make progress. Write a monitor that allocates three identical line printers to these processes, using the priority numbers for deciding the order of allocation. Write a monitor to coordinate access to the file.
How would the solution to the preceding exercise differ with the two different ways in which signaling can be performed? Answer: The solution to the previous exercise is correct under both situations. However, it could suffer from the problem that a process might be awakened only to find that it is still not possible for it to make forward progress, either because there was not sufficient slack to begin with when the process was awakened, or because an intervening process got control, obtained the monitor, and started accessing the file.
Also, note that the broadcast operation wakes up all of the waiting processes. If the signal also transfers control and the monitor from the current thread to the target, then one could check whether the target would indeed be able to make forward progress and perform the signal only if it were possible.
Write a monitor using this scheme to implement the readers-writers problem. Explain why, in general, this construct cannot be implemented efficiently.
What restrictions need to be put on the await statement so that it can be implemented efficiently? Hint: Restrict the generality of B; see Kessels []. This requires considerable complexity and might also require some interaction with the compiler to evaluate the conditions at different points in time.
One could restrict the boolean condition to be a disjunction of conjunctions, with each component being a simple check (equality or inequality with respect to a static value) on a program variable. In that case, the boolean condition could be communicated to the runtime system, which could perform the check every time it needs to determine which thread to awaken.
You may assume the existence of a real hardware clock that invokes a procedure tick in your monitor at regular intervals. Answer: Solaris, Linux, and Windows use spinlocks as a synchronization mechanism only on multiprocessor systems.
Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock could be obtained only by executing a different process. In a multiprocessor system, other processes execute on other processors and thereby modify the program state in order to release the first process from the spinlock. Why is this restriction necessary? Answer: If the transaction needs to be aborted, then the values of the updated data values need to be rolled back to the old values.
This requires the old values of the data entries to be logged before the updates are performed. Answer: A schedule refers to the execution sequence of the operations for one or more transactions.
A serial schedule is the situation where each transaction of a schedule is performed atomically. If a schedule consists of two different transactions where consecutive operations from the different transactions access the same data and at least one of the operations is a write, then we have what is known as a conflict.
If a schedule can be transformed into a serial schedule by a series of swaps on nonconflicting operations, we say that such a schedule is conflict serializable. The two-phase locking protocol ensures conflict serializability because exclusive locks (which are used for write operations) must be acquired serially, without releasing any locks during the acquire (growing) phase.
Other transactions that wish to acquire the same locks must wait for the first transaction to begin releasing locks. By requiring that all locks must first be acquired before releasing any locks, we are ensuring that potential conflicts are avoided. How does the system process transactions that were issued after the rolled-back transaction but that have timestamps smaller than the new timestamp of the rolled-back transaction?
Answer: If the transactions that were issued after the rolled-back transaction had accessed variables that were updated by the rolled-back transaction, then these transactions would have to be rolled back as well. If they have not performed such operations (that is, there is no overlap with the rolled-back transaction in terms of the variables accessed), then these operations are free to commit when appropriate.
Processes may ask for a number of these resources and, once finished, will return them. As an example, many commercial software packages provide a given number of licenses, indicating the number of applications that may run concurrently. When the application is started, the license count is decremented. When the application is terminated, the license count is incremented. If all licenses are in use, requests to start the application are denied.
Such requests will only be granted when an existing license holder terminates the application and a license is returned. Do the following: a. Identify the data involved in the race condition. b. Identify the location or locations in the code where the race condition occurs. c. Using a semaphore, fix the race condition. This will allow a process to invoke decrease_count by simply calling decrease_count(count); the process will only return from this function call when sufficient resources are available.
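One hedged way to fix the race, assuming POSIX semaphores (MAX_RESOURCES is illustrative): represent the available resources as a counting semaphore, so the availability check and the decrement become one atomic, blocking operation.

#include <semaphore.h>

#define MAX_RESOURCES 5              /* illustrative resource count */

sem_t resources;

void resources_init(void) {
    sem_init(&resources, 0, MAX_RESOURCES);
}

/* Returns only when 'count' resources have been acquired. */
void decrease_count(int count) {
    for (int i = 0; i < count; i++)
        sem_wait(&resources);        /* blocks if none are available */
}

void increase_count(int count) {
    for (int i = 0; i < count; i++)
        sem_post(&resources);
}

Note that acquiring resources one at a time this way can deadlock if several processes each hold a partial allocation; a mutex plus a condition that waits until the whole request can be granted atomically avoids that complication.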
It is important that the students learn the three basic approaches to deadlock: prevention, avoidance, and detection although the terms prevention and avoidance are easy to confuse.
It can be useful to pose a deadlock problem in human terms and ask why human systems never deadlock. Can the students transfer this understanding of human systems to computer systems? Projects can involve simulation: create a list of jobs consisting of requests and releases of resources single type or multiple types.
Ask the students to allocate the resources to prevent deadlock. The survey paper by Coffman, Elphick, and Shoshani [] is good supplemental reading, but you might also consider having the students go back to the papers by Havender [], Habermann [], and Holt [a]. The last two were published in CACM and so should be readily available. Exercises 7. Show that the four necessary conditions for deadlock indeed hold in this example. State a simple rule for avoiding deadlocks in this system.
The four necessary conditions for a deadlock are (1) mutual exclusion; (2) hold-and-wait; (3) no preemption; and (4) circular wait. The mutual exclusion condition holds, as only one car can occupy a space in the roadway. Hold-and-wait holds, as each car keeps its space while waiting for the space ahead to clear. A car cannot be removed from its position in the roadway, so there is no preemption. Lastly, there is indeed a circular wait, as each car is waiting for a subsequent car to advance.
The circular wait condition is also easily observed from the graphic. A simple rule that would avoid this traffic deadlock is that a car may not advance into an intersection if it is clear that the car will not be able to immediately clear the intersection. Discuss how the four necessary conditions for deadlock indeed hold in this setting. Discuss how deadlocks could be avoided by eliminating any one of the four conditions.
Answer: Deadlock is possible because the four necessary conditions hold in the following manner: (1) mutual exclusion is required for chopsticks; (2) the philosophers tend to hold onto the chopstick in hand while they wait for the other chopstick; (3) there is no preemption of chopsticks, in the sense that a chopstick allocated to a philosopher cannot be forcibly taken away; and (4) there is a possibility of circular wait.
Deadlocks could be avoided by overcoming the conditions in the following manner: (1) allow simultaneous sharing of chopsticks; (2) have the philosophers relinquish the first chopstick if they are unable to obtain the other chopstick; (3) allow for chopsticks to be forcibly taken away if a philosopher has had a chopstick for a long period of time; and (4) enforce a numbering of the chopsticks and always obtain the lower-numbered chopstick before obtaining the higher-numbered one, as sketched below.
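A sketch of remedy (4), assuming POSIX mutexes; N and the function names are illustrative:

#include <pthread.h>

#define N 5                          /* philosophers and chopsticks */

pthread_mutex_t chopstick[N];

void table_init(void) {
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&chopstick[i], NULL);
}

void pick_up(int philosopher) {
    int left = philosopher;
    int right = (philosopher + 1) % N;
    int first = left < right ? left : right;   /* lower-numbered first */
    int second = left < right ? right : left;
    pthread_mutex_lock(&chopstick[first]);
    pthread_mutex_lock(&chopstick[second]);
}

void put_down(int philosopher) {
    int left = philosopher;
    int right = (philosopher + 1) % N;
    pthread_mutex_unlock(&chopstick[left]);
    pthread_mutex_unlock(&chopstick[right]);
}

Because every philosopher acquires the lower-numbered chopstick first, no cycle of waiting can form, so the circular-wait condition is broken.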
Exercises 7. Such synchronization objects may include mutexes, semaphores, condition variables, etc. We can prevent the deadlock by adding a sixth object F. Compare this scheme with the circular-wait scheme of Section 7.
Answer: This is probably not a good solution because it yields too large a scope. It is better to define a locking policy with as narrow a scope as possible.