Saturday 7 September 2013

How does DMA increase system concurrency

How does DMA increase system concurrency? How does it complicate
hardware design?
Answer: DMA increases system concurrency by allowing the CPU
to perform tasks while the DMA system transfers data via the system
and memory buses. Hardware design is complicated because the DMA
controller must be integrated into the system, and the system must
allow the DMA controller to be a bus master. Cycle stealing may also
be necessary to allow the CPU and DMA controller to share use of the
memory bus.
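A rough feel for the division of labour: the CPU only writes a handful of control registers and then moves on, while the controller carries out the transfer on the bus. The register layout below (DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL and the base address) is invented purely for illustration and does not correspond to any real controller; a minimal sketch in C, assuming a memory-mapped device:

/* Sketch of a driver programming a hypothetical memory-mapped DMA
 * controller.  All register names and addresses are made up. */
#include <stdint.h>

#define DMA_BASE  ((uintptr_t)0x40000000u)   /* assumed controller base */
#define DMA_SRC   (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST   (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_LEN   (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL  (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_START 0x1u
#define DMA_DONE  0x2u

void dma_copy(uint32_t src, uint32_t dst, uint32_t len)
{
    DMA_SRC  = src;                    /* physical source address        */
    DMA_DST  = dst;                    /* physical destination address   */
    DMA_LEN  = len;                    /* number of bytes to transfer    */
    DMA_CTRL = DMA_START;              /* controller becomes bus master  */

    /* The CPU is now free to do other work; the transfer proceeds on the
     * memory bus (stealing cycles as needed).  Completion would normally
     * be signalled by an interrupt rather than the polling shown here.  */
    while (!(DMA_CTRL & DMA_DONE))
        ;                              /* placeholder for useful work    */
}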

virtual memory

Virtual memory gives the illusion of an extremely large memory, far larger than the physical memory actually installed.
When virtual memory is implemented in a computing system, there are
certain costs associated with the technique and certain benefits. List the
costs and the benefits. Is it possible for the costs to exceed the benefits?
If it is, what measures can be taken to ensure that this does not happen?
Answer: The costs are additional hardware and slower access time. The
benefits are good utilization of memory and a logical address space that is
larger than the physical address space. The costs can exceed the benefits
when excessive paging (thrashing) occurs; this can be avoided by keeping the
degree of multiprogramming low enough that each process can hold its working
set in memory.

page faults

Under what circumstances do page faults occur? Describe the actions
taken by the operating system when a page fault occurs.
Answer: A page fault occurs when a process accesses a page that has not been
brought into main memory. The operating system verifies that the memory
access is valid, aborting the program if it is not. If it is valid, a
free frame is located and I/O is requested to read the needed page into
the free frame. Upon completion of the I/O, the process table and page table
are updated and the faulting instruction is restarted.
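The sequence above can be sketched in code. Everything below (the page-table layout, frame table, and helper routines) is a hypothetical simplification rather than any real kernel's implementation; page replacement and the actual instruction restart are only hinted at in comments:

/* Simplified, self-contained sketch of the page-fault path described above. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NUM_PAGES  8
#define NUM_FRAMES 4

struct pte {                 /* one page-table entry                    */
    bool present;            /* is the page in a physical frame?        */
    int  frame;              /* frame number, valid only if present     */
    int  disk_block;         /* where the page lives on backing store   */
};

static struct pte page_table[NUM_PAGES];
static bool frame_used[NUM_FRAMES];

static int find_free_frame(void)
{
    for (int f = 0; f < NUM_FRAMES; f++)
        if (!frame_used[f])
            return f;
    return -1;                              /* no free frame available    */
}

static void read_page_from_disk(int block, int frame)
{
    printf("I/O: reading disk block %d into frame %d\n", block, frame);
}

void handle_page_fault(int page)
{
    if (page < 0 || page >= NUM_PAGES) {    /* invalid reference:         */
        fprintf(stderr, "segmentation fault\n");
        exit(EXIT_FAILURE);                 /* abort the program          */
    }

    int frame = find_free_frame();
    if (frame < 0)
        return;                             /* a real handler would run   */
                                            /* page replacement here      */
    read_page_from_disk(page_table[page].disk_block, frame);

    page_table[page].frame   = frame;       /* update the page table      */
    page_table[page].present = true;
    frame_used[frame]        = true;        /* update the frame table     */
    /* ...and finally the faulting instruction would be restarted.        */
}

int main(void)
{
    handle_page_fault(3);                   /* simulate one fault         */
    return 0;
}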

logical and physical addresses

Name two differences between logical and physical addresses.
Answer: A logical address does not refer to an actual existing address;
rather, it refers to an abstract address in an abstract address space. Contrast
this with a physical address, which refers to an actual location
in physical memory. A logical address is generated by the CPU and is translated
into a physical address by the memory-management unit (MMU). Therefore,
physical addresses are generated by the MMU.
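A toy example of the translation the MMU performs, assuming 4 KiB pages and a made-up page table: the logical address is split into a page number and an offset, and the page number is mapped to a frame number.

/* Toy example of logical-to-physical address translation.
 * Page size and page-table contents are invented for illustration. */
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed 4 KiB pages */

int main(void)
{
    unsigned frame_of_page[] = { 7, 2, 9, 4 };      /* hypothetical page table */
    unsigned logical = 2 * PAGE_SIZE + 123;         /* page 2, offset 123      */

    unsigned page     = logical / PAGE_SIZE;
    unsigned offset   = logical % PAGE_SIZE;
    unsigned physical = frame_of_page[page] * PAGE_SIZE + offset;

    printf("logical %u -> physical %u (page %u, frame %u, offset %u)\n",
           logical, physical, page, frame_of_page[page], offset);
    return 0;
}

Running it prints logical 8315 -> physical 36987: page 2 is mapped to frame 9 and the offset within the page is unchanged.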

scheduling algorithm

A CPU scheduling algorithm determines an order for the execution of its
scheduled processes. Given n processes to be scheduled on one processor,
how many possible different schedules are there? Give a formula in
terms of n.
Answer: n! (n factorial = n × (n − 1) × (n − 2) × ... × 2 × 1). For example,
with n = 3 processes A, B, and C, there are 3! = 6 possible schedules: ABC,
ACB, BAC, BCA, CAB, and CBA.

Define the difference between preemptive and nonpreemptive scheduling.
Answer: Preemptive scheduling allows a process to be interrupted
in the midst of its execution, taking the CPU away and allocating it
to another process. Nonpreemptive scheduling ensures that a process
relinquishes control of the CPU only when it finishes with its current
CPU burst.

user-level threads and kernel-level threads

What are two differences between user-level threads and kernel-level
threads? Under what circumstances is one type better than the other?
Answer: (1) User-level threads are unknown to the kernel, whereas the
kernel is aware of kernel threads. (2) On systems using either M:1 or M:N
mapping, user threads are scheduled by the thread library, while the kernel
schedules kernel threads. (3) Kernel threads need not be associated with
a process, whereas every user thread belongs to a process. Kernel threads
are generally more expensive to maintain than user threads, as they must
be represented with a kernel data structure. User-level threads are therefore
preferable when an application needs many cheap, short-lived threads that do
not block in the kernel; kernel threads are better when threads must block on
I/O or run in parallel on multiple processors.

five services provided by an operating system

List five services provided by an operating system. Explain how each
provides convenience to the users. Explain also in which cases it would
be impossible for user-level programs to provide these services.
Answer:

a. Program execution. The operating system loads the contents (or
sections) of a file into memory and begins its execution. A user-level
program could not be trusted to properly allocate CPU time.

b. I/O operations. Disks, tapes, serial lines, and other devices must
be communicated with at a very low level. The user need only
specify the device and the operation to perform on it, while the
system converts that request into device- or controller-specific
commands. User-level programs cannot be trusted to access only
devices they should have access to and to access them only when
they are otherwise unused.

c. File-system manipulation. There are many details in file creation,
deletion, allocation, and naming that users should not have to perform.
Blocks of disk space are used by files and must be tracked.
Deleting a file requires removing its name and file information and
freeing the allocated blocks. Protections must also be checked to
assure proper file access. User programs could neither ensure adherence
to protection methods nor be trusted to allocate only free
blocks and deallocate blocks on file deletion.

d. Communications. Message passing between systems requires
messages to be turned into packets of information, sent to the network
controller, transmitted across a communications medium,
and reassembled by the destination system. Packet ordering and
data correction must take place. Again, user programs might not
coordinate access to the network device, or they might receive
packets destined for other processes.

e. Error detection. Error detection occurs at both the hardware and
software levels. At the hardware level, all data transfers must be
inspected to ensure that data have not been corrupted in transit.
All data on media must be checked to be sure they have not
changed since they were written to the media. At the software
level, media must be checked for data consistency; for instance,
whether the number of allocated and unallocated blocks of storage
matches the total number on the device. Such errors are frequently
process-independent (for instance, the corruption of data on a
disk), so there must be a global program (the operating system)
that handles all types of errors. Also, by having errors processed
by the operating system, processes need not contain code to catch
and correct all the errors possible on a system.

system calls

What system calls have to be executed by a command interpreter or shell
in order to start a new process?
Answer: In Unix systems, a fork system call followed by an exec system
call needs to be performed to start a new process. The fork call clones the
currently executing process, while the exec call overlays a new program,
based on a different executable, over the calling process.
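A minimal sketch of that sequence, with "ls -l" standing in for whatever command the user typed; error handling is kept to the bare minimum:

/* fork-then-exec: the child is overlaid with a new program while the
 * parent (the shell) waits for it to finish. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* clone the calling process      */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* overlay child with new program */
        perror("execlp");               /* only reached if exec fails     */
        _exit(EXIT_FAILURE);
    } else {
        wait(NULL);                     /* parent waits for the child     */
    }
    return 0;
}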

command interpreter

What is the purpose of the command interpreter? Why is it usually
separate from the kernel?
Answer: It reads commands from the user or from a file of commands
and executes them, usually by turning them into one or more system
calls. It is usually not part of the kernel because the command interpreter
is subject to frequent change.
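Putting the pieces together, a bare-bones interpreter is little more than a loop around the fork/exec/wait pattern shown above. The sketch below reads one command name per line and runs it; argument parsing, pipes, redirection, and built-in commands are all omitted:

/* Bare-bones command loop: read a command, fork, exec, wait. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char cmd[256];

    for (;;) {
        printf("mysh> ");
        fflush(stdout);
        if (fgets(cmd, sizeof cmd, stdin) == NULL)
            break;                              /* EOF: exit the shell    */
        cmd[strcspn(cmd, "\n")] = '\0';         /* strip trailing newline */
        if (cmd[0] == '\0')
            continue;

        pid_t pid = fork();
        if (pid == 0) {
            execlp(cmd, cmd, (char *)NULL);     /* run the command        */
            perror(cmd);
            _exit(EXIT_FAILURE);
        } else if (pid > 0) {
            wait(NULL);
        } else {
            perror("fork");
        }
    }
    return 0;
}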

Cache memory

Cache memory is random access memory (RAM) that a computer microprocessor
can access more quickly than it can access regular RAM. As the microprocessor
processes data, it looks first in the cache memory; if it finds the data
there (from a previous read of that data), it does not have to do the more
time-consuming read from the larger main memory.
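The "look in the cache first" behaviour can be illustrated with a toy direct-mapped cache in C. The sizes and the structure are made up for illustration, and a real cache is implemented in hardware, not software:

/* Toy direct-mapped cache lookup: hit on the fast path, miss means the
 * caller must fetch the line from slower main memory and install it. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES  64
#define LINE_BYTES 64

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct cache_line cache[NUM_LINES];

bool cache_lookup(uint32_t addr, uint8_t *out)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_LINES;
    uint32_t tag   = addr / (LINE_BYTES * NUM_LINES);

    if (cache[index].valid && cache[index].tag == tag) {
        *out = cache[index].data[addr % LINE_BYTES];   /* hit  */
        return true;
    }
    return false;                                      /* miss */
}

int main(void)
{
    uint8_t  byte;
    uint32_t addr = 0x1234;

    if (!cache_lookup(addr, &byte)) {                  /* first access: miss  */
        uint32_t index = (addr / LINE_BYTES) % NUM_LINES;
        cache[index].valid = true;                     /* install the line    */
        cache[index].tag   = addr / (LINE_BYTES * NUM_LINES);
        /* in a real system the line's data would now be filled from memory */
    }
    return cache_lookup(addr, &byte) ? 0 : 1;          /* second access: hit  */
}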
