Concepts of Uni-programming, Multi-programming, and Parallel Programming

Filter Course


Published by: Zaya

Published date: 22 Jun 2021


Uni-programming

Uni-programming means that only one user program resides in main memory at a time. It was used in old computers and mobile phones. When the computer starts, the operating system and the application program are loaded into main memory; here we count only the user programs running in RAM, which is also called main memory.

In old operating systems (OS), only one program could run on the computer at a time: the browser, the calculator, or the word processor, but never more than one. Operating systems in which only one program runs at a time are known as uni-programming operating systems.


Multi-programming

In multi-programming, multiple programs reside in main memory (RAM) at the same time. An OS that handles multiple programs at once is known as a multi-programming operating system. A single processor (CPU) can still run only one process at a time, so the OS uses context switching: it switches the CPU between the programs in memory so that each program gets a suitable share of processor time. The OS can handle only a limited number of programs; if we run too many programs at once, the computer or mobile becomes very slow or unresponsive.
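The time-sharing idea behind context switching can be illustrated with a toy round-robin scheduler. This is only a sketch: the program names and work units are made up, and a real OS scheduler switches between live processes rather than entries in a queue, but the switching pattern is the same.

```python
from collections import deque

def round_robin(programs, time_slice):
    """Toy round-robin scheduler.

    programs: dict mapping a program name to the CPU work units it needs.
    Each program runs for one time slice, then the CPU switches to the next,
    so no single program monopolizes the processor.
    Returns the order in which programs received the CPU.
    """
    queue = deque(programs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)            # this program gets the CPU for one slice
        remaining -= time_slice
        if remaining > 0:             # not finished: back to the end of the queue
            queue.append((name, remaining))
    return order

# Three programs share one CPU; the slices are interleaved.
print(round_robin({"browser": 3, "editor": 2, "player": 1}, time_slice=1))
```

Running it shows the interleaving: the browser, editor, and player each get a turn before any of them gets a second one.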


Uni-programming vs Multi-programming

In uni-programming, only one program sits in main memory, so a small memory is enough; in multi-programming, main memory needs more space because several programs reside there at once. A uni-programming system runs smoothly since only one task executes at a time, and even a slow processor works well; multi-programming needs a faster processor and a larger RAM. Uni-programming uses a fixed-size memory partition, while multi-programming systems can use both fixed- and variable-size partitions.

Examples of uni-programming

  • Batch processing on early computers
  • Old single-tasking computer operating systems (e.g., MS-DOS)
  • Old mobile phone operating systems

Examples of multi-programming

  • Modern operating systems such as Windows XP, 7, 8, and 10

Parallel Programming

In very simple terms, parallel programming is the use of multiple resources, in this case processors, to solve a problem. This type of programming takes a problem, breaks it down into a series of smaller steps, distributes the instructions, and has the processors execute their parts of the solution at the same time. It can deliver the same results as concurrent programming but in less time and with greater efficiency. Many computers, such as laptops and personal desktops, use this style of programming in their hardware to ensure that tasks are quickly completed in the background.
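The break-it-down-and-run-simultaneously idea can be sketched with Python's standard process pool. This is a minimal illustration, not a production pattern: the function names and the sum-of-squares task are invented for the example, and the data is split into one chunk per worker process.

```python
from concurrent.futures import ProcessPoolExecutor

def sum_squares(chunk):
    # Worker step: each processor handles one slice of the data independently.
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, workers=4):
    # Break the problem into smaller pieces, one per worker,
    # run the pieces at the same time, then combine the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    data = list(range(1_000))
    # The parallel answer matches the sequential one; only the wall-clock
    # time changes as the work is spread across processors.
    print(parallel_sum_squares(data))
```

For a task this small the process-startup cost outweighs the gain; the speedup appears only when each chunk carries substantial work.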

Advantages
There are two major advantages to using this style of programming over concurrent programming. One is that all processes are sped up when using parallel structures, increasing efficiency and making better use of the available resources to achieve quick results. Another benefit is that parallel computing is more cost-efficient than concurrent programming, simply because it takes less time to get the same results. This matters because parallel processing is essential for turning massive amounts of data into data sets that are easy to process when solving complicated problems.

Disadvantages
There are several disadvantages to parallel processing. The first is that it can be difficult to learn; programming that targets parallel architectures can be overwhelming at first, so it takes time to fully understand. Additionally, tuning the code is not straightforward: it must be modified for each target architecture to properly improve performance. It is also hard to guarantee consistent results, because communication between processors can be problematic on certain architectures. Finally, power consumption is a problem for those deploying large numbers of processors; a variety of cooling technologies are required to cool the parallel clusters.