OPERATING SYSTEM
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers.
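This intermediary role is visible even from a high-level language: a user program reaches the hardware only through system calls. A minimal sketch in Python, using the standard `os` module (which wraps the underlying system calls on POSIX-like systems):

```python
import os

# High-level code reaches the hardware only through OS services.
# os.getpid() wraps the getpid() system call; os.write() wraps write().
pid = os.getpid()                  # ask the kernel which process we are
message = f"running as process {pid}\n".encode()
os.write(1, message)               # write() directly to file descriptor 1 (stdout)
```

Even a simple `print()` ultimately funnels through the same `write()` system call; the OS mediates every interaction with the device.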
Batch operating system
The users of a batch operating system do not interact with the computer directly. Each user prepares a job on an offline device, such as punched cards, and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. The programmers leave their programs with the operator, and the operator then sorts programs with similar requirements into batches.
The problems with batch systems are as follows −
- Lack of interaction between the user and the job.
- The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
- Difficulty in providing the desired priority.
Time-sharing operating systems
Time-sharing is a technique that enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. Processor time that is shared among multiple users simultaneously is termed time-sharing.
The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum in turn. When a user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of its time. Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.
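The quantum-based switching described above can be sketched as a round-robin scheduler. This is an illustrative simulation, not an OS implementation; the job names and CPU times are invented:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: each job gets `quantum` units of CPU in turn.
    `jobs` maps job name -> remaining CPU time; returns completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # run this job for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
        else:
            finished.append(name)            # done: record completion
    return finished

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))  # → ['B', 'C', 'A']
```

Because the quantum is short, every job makes visible progress on each pass through the queue, which is why each user perceives an immediate response.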
Advantages of time-sharing operating systems are as follows −
- Provides the advantage of quick response.
- Avoids duplication of software.
- Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
- Problem of reliability.
- Question of security and integrity of user programs and data.
- Problem of data communication.
Distributed operating system
Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
- With resource sharing facility, a user at one site may be able to use the resources available at another.
- Speeds up the exchange of data with one another via electronic mail.
- If one site fails in a distributed system, the remaining sites can potentially continue operating.
- Better service to the customers.
- Reduction of the load on the host computer.
- Reduction of delays in data processing.
Network operating system
A Network Operating System runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of the network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −
- Centralized servers are highly stable.
- Security is server managed.
- Upgrades to new technologies and hardware can be easily integrated into the system.
- Remote access to servers is possible from different locations and types of systems.
The disadvantages of network operating systems are as follows −
- High cost of buying and running a server.
- Dependency on a central location for most operations.
- Regular maintenance and updates are required.
Real-time operating system
A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much shorter than in online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
UTILITY SOFTWARE
Utility software is any software that performs a specific task that is secondary to the main purpose of using the computer (software serving that main purpose would be called application programs) but is not essential to the operation of the computer (system software).
Many utilities could be considered as part of the system software, which can in turn be considered part of the operating system.
OR
Utility software is used to perform basic maintenance tasks on a computer. Examples include disk utilities such as defragmenters, compressors, and cleaners. There are also operating system utilities such as antivirus programs, registry cleaners, and system restoration programs. Internet and network connections are managed by a variety of small software utilities, including firewall programs, while program installation and removal is handled by package managers and installation clients.
COMPUTER INTERPRETER, COMPILER, LOADER, LINKER
Interpreter:
An interpreter is a program that reads and executes code. This includes source code, pre-compiled code, and scripts. Common interpreters include the Perl, Python, and Ruby interpreters, which execute Perl, Python, and Ruby code respectively.
Interpreters and compilers are similar, since they both recognize and process source code. However, a compiler does not execute the code the way an interpreter does. Instead, a compiler simply converts the source code into machine code, which can be run directly by the operating system as an executable program. Interpreters bypass the compilation process and execute the code directly.
Since interpreters read and execute code in a single step, they are useful for running scripts and other small programs. Therefore, interpreters are commonly installed on web servers, which allows developers to run executable scripts within their webpages. These scripts can be easily edited and saved without the need to recompile the code.
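The one-step versus two-step distinction can be demonstrated with Python's own built-ins (here `compile()` produces bytecode for the Python virtual machine rather than native machine code, so this is an analogy for the translation step, not a native compiler):

```python
source = "x = 2 + 3"

# Interpreter-style: read and execute the source in one step.
interpreted = {}
exec(source, interpreted)

# Compiler-style: translate first, run the translated form later.
code_object = compile(source, "<demo>", "exec")   # translation step
compiled = {}
exec(code_object, compiled)                       # execution step

print(interpreted["x"], compiled["x"])            # both yield 5
```

The translated `code_object` can be executed many times without re-reading the source, which is the practical advantage a compiler's output has over repeated interpretation.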
Compiler:
A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses. Typically, a programmer writes language statements in a language such as Pascal or C, one line at a time, using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.
When executing (running), the compiler first parses (or analyzes) all of the language statements syntactically, one after the other, and then, in one or more successive stages or "passes", builds the output code, making sure that statements that refer to other statements are referred to correctly in the final code. Traditionally, the output of the compilation has been called object code or sometimes an object module. (Note that the term "object" here is not related to object-oriented programming.) The object code is machine code that the processor can execute one instruction at a time.
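These two phases, parsing the statements and then building the output code, can be observed with Python's standard `ast` and `dis` modules. The "object code" here is Python bytecode rather than processor machine code, but the shape of the process is the same:

```python
import ast
import dis

source = "total = price * qty"

tree = ast.parse(source)                 # pass 1: parse statements into a syntax tree
assignment = tree.body[0]
print(type(assignment).__name__)         # the statement parsed as an Assign node

code = compile(tree, "<demo>", "exec")   # later pass: build the output (byte)code
dis.dis(code)                            # inspect the generated instructions
```

Note that compilation succeeds even though `price` and `qty` are undefined; checking that names resolve to actual values only happens when the code is executed.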
Linker:
A linker is a computer program that takes one or more object files generated by a compiler and combines them into one executable program.
Computer programs are usually made up of multiple modules that span separate object files, each being a compiled computer program. The program as a whole refers to these separately compiled object files using symbols. The linker combines these separate files into a single, unified program, resolving the symbolic references as it goes along.
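The symbol-resolution step can be sketched with hypothetical "object files" modeled as plain dictionaries; the symbol names and addresses below are invented for illustration:

```python
# Hypothetical object files: each defines symbols at addresses and refers to others.
obj_main = {"defines": {"main": 0x00}, "refers": ["helper", "table"]}
obj_util = {"defines": {"helper": 0x40, "table": 0x80}, "refers": []}

def link(objects):
    """Combine object files into one symbol table, checking every reference."""
    symbols = {}
    for obj in objects:                      # first pass: collect all definitions
        symbols.update(obj["defines"])
    for obj in objects:                      # second pass: resolve references
        for name in obj["refers"]:
            if name not in symbols:
                raise NameError(f"undefined symbol: {name}")
    return symbols

print(link([obj_main, obj_util]))            # every reference resolves to an address
```

A reference with no matching definition in any object file produces the classic "undefined symbol" linker error, which the sketch raises as a `NameError`.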
Dynamic linking is a similar process, available on many operating systems, which postpones the resolution of some symbols until the program is executed. When the program is run, these dynamic link libraries are loaded as well. Dynamic linking does not require those symbols to be resolved in advance at build time.
Loader:
In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, the random access memory, or RAM), and gives that program control of the computer (allows it to execute its instructions).
A program that is loaded may itself contain components that are not initially loaded into main storage, but can be loaded if and when their logic is needed.
In a multitasking operating system, a program that is sometimes called a dispatcher juggles the computer processor's time among different tasks and calls the loader when a program associated with a task is not already in main storage. (By program here, we mean a binary file that is the result of a programming language compilation, linkage editing, or some other program preparation process.)
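As an analogy for the loader's job (locate a program in storage, load it into memory, give it control), the sketch below writes a tiny Python "program" to disk and then loads and runs it with the standard `runpy` module; the file name and contents are invented:

```python
import pathlib
import runpy
import tempfile

with tempfile.TemporaryDirectory() as d:
    prog = pathlib.Path(d) / "prog.py"
    prog.write_text("result = 6 * 7\n")     # the 'program' sitting in storage
    # run_path locates the file, loads it into memory, and transfers control:
    namespace = runpy.run_path(str(prog))

print(namespace["result"])                  # the loaded program ran: 42
```

A real loader works with binary executables and machine addresses rather than source files, but the three steps (locate, load, transfer control) are the same.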
FUNCTION OF OPERATING SYSTEM:
1. Booting: Booting is the process of starting the computer. The operating system checks the computer and makes it ready to work.
2. Memory Management: This is also an important function of the operating system. Memory cannot be managed without an operating system. Different programs and data execute in memory at one time; if there were no operating system, the programs might mix with each other.
3. Loading and Execution: A program is loaded into memory before it can be executed. The operating system provides the facility to load programs into memory easily and then execute them.
4. Data Security: Data is an important part of a computer system. The operating system protects the data stored on the computer from illegal use, modification, or deletion.
5. Disk Management: The operating system manages the disk space. It manages the stored files and folders in a proper way.
6. Process Management: The CPU can perform one task at a time. If there are many tasks, the operating system decides which task should get the CPU.
7. Device Controlling: The operating system also controls all devices attached to the computer. The hardware devices are controlled with the help of small software programs called device drivers.
DOS
OPERATION
The term DOS can refer to any disk operating system, but it is most often used as shorthand for MS-DOS. Originally developed by Microsoft for IBM, MS-DOS was the standard operating system for IBM-compatible personal computers.
The initial versions of DOS were very simple and resembled another operating system called CP/M. Subsequent versions became increasingly sophisticated as they incorporated features of minicomputer operating systems. However, DOS is still a 16-bit operating system and does not support multiple users or multitasking.
Any instruction given to the computer to perform a specific task is called a command. DOS has several commands, each for a particular task, and these are stored in the DOS directory on the disk. The commands are of two types:
(a) Internal commands: These are built-in commands of MS-DOS, i.e. they are stored in the command interpreter file (COMMAND.COM). These commands reside in memory as long as the machine is at the system prompt (C:\>) level. No extra external file is required to use them, e.g. DATE, TIME, DIR, VER.
(b) External commands: These are separate program (.com) files that reside in the DOS directory and, when executed, behave like commands. An external command has a predefined syntax, e.g. HELP, DOSKEY, BACKUP, RESTORE, FORMAT.
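The internal/external split can be sketched as a dispatch table: internal commands are functions built into the interpreter itself, while anything else is looked up as a separate program file. This is an illustrative sketch, not real COMMAND.COM behavior:

```python
import datetime

# Internal commands live inside the interpreter, like DATE or VER in COMMAND.COM.
INTERNAL = {
    "DATE": lambda: str(datetime.date.today()),
    "VER": lambda: "sketch-DOS version 1.0",    # invented version string
}

def run(command):
    cmd = command.upper()                        # DOS commands are case-insensitive
    if cmd in INTERNAL:                          # internal: no external file needed
        return INTERNAL[cmd]()
    # external: would be found as a separate program file in the DOS directory
    return f"{cmd} would be loaded from {cmd}.COM in the DOS directory"

print(run("ver"))
print(run("format"))
```

Because internal commands are already resident in memory with the interpreter, they work even if the disk holding the external command files is unavailable.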
BASIC DOS COMMANDS:
a) Directory commands:
DIR : Lists all or specific files of any directory on a specified disk.
MD : Makes a directory or subdirectory on a specified disk/drive.
CD or CHDIR : Changes the DOS current working directory to a specified directory on a specified disk, or checks the current directory on the specified or default drive.
RMDIR or RD : Removes a specified subdirectory, only when it is empty. This command cannot remove the root directory (C:\) or the current working directory.
TREE : Displays all of the directory paths found on the specified drive.
PATH : Sets a sequential search path for executable files, if they are not available in the current directory.
SUBST : Substitutes a string alias for a pathname and creates a virtual drive.
b) File management commands:
COPY : Copies one or more files from a source disk/drive to the specified disk/drive.
XCOPY : Copies files and directories, including lower-level directories if they exist.
DEL : Removes specified files from a specified disk/drive.
REN : Changes the name of a file (renaming).
ATTRIB : Sets or shows file attributes (read-only, archive, system, hidden).
OPERATING SYSTEM VIRTUALIZATION
Operating system virtualization (OS virtualization) is a server virtualization technology that involves tailoring a standard operating system so that it can run different applications handled by multiple users on a single computer at a time. The operating systems do not interfere with each other even though they are on the same computer.
In OS virtualization, the operating system is altered so that it operates like several different, individual systems. The virtualized environment accepts commands from different users running different applications on the same machine. The users and their requests are handled separately by the virtualized operating system.
Operating system virtualization provides application-transparent virtualization to users by decoupling applications from the OS. The OS virtualization technique offers granular control at the application level by facilitating the transparent migration of individual applications. The finer granularity migration offers greater flexibility, resulting in reduced overhead.
OS virtualization can also be used to migrate critical applications to another running operating system instance. Patches and updates to the underlying operating system are done in a timely way, and have little or no impact on the availability of application services. The processes in the OS virtualized environment are isolated and their interactions with the underlying OS instance are monitored.
TYPES OF VIRTUALIZATION
There are six areas of IT where virtualization is making headway:
Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others and can be assigned -- or reassigned -- to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like a partitioned hard drive makes it easier to manage your files.
Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks.
Server virtualization is the masking of server resources -- including the number and identity of individual physical servers, processors and operating systems -- from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.
Data virtualization is abstracting the traditional technical details of data and data management, such as location, performance or format, in favor of broader access and more resiliency tied to business needs.
Desktop virtualization is virtualizing a workstation load rather than a server. This allows the user to access the desktop remotely, typically using a thin client at the desk. Since the workstation essentially runs in a data center server, access to it can be both more secure and more portable. The operating system license still needs to be accounted for, as well as the infrastructure.
Application virtualization is abstracting the application layer away from the operating system. This way the application can run in an encapsulated form without depending on the operating system underneath. This can allow a Windows application to run on Linux and vice versa, in addition to adding a level of isolation.
Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.
Why Virtualize?
Reasons for Virtualization
1. Hardware economy
2. Versatility
3. Security
4. Environment specialization
5. Safe kernel development