Terminologies in a Multithreading Environment in Software World

Prateek Nima
3 min read · Mar 3, 2023

The following are a few of the important terms you should be aware of in a multithreaded environment.

Program vs Process vs Thread

  • A program is a set of instructions and associated data that sits on the disk and is loaded by the operating system to perform some tasks.
  • A process belongs to a program. It is a program in execution. A process is an execution environment that consists of a set of instructions, user data, and other resources such as CPU, memory, address space, disk, and network I/O acquired at runtime.
  • A thread is the smallest unit of execution in a process. A thread simply executes instructions serially. A process can have multiple threads running as part of it. Each thread has its own private state, while the global state of the process is accessible to all of its threads, so the programmer should pay extra attention whenever a thread accesses this shared global state.
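This split between per-thread private state and process-wide shared state can be sketched in Python (the `worker` function and the iteration counts are illustrative):

```python
import threading

counter = 0                # global state, shared by every thread in this process
lock = threading.Lock()    # guards access to the shared counter

def worker(n):
    global counter
    local = 0              # private state: each thread has its own copy
    for _ in range(n):
        local += 1
    with lock:             # serialize the update to the shared global state
        counter += local

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — correct only because the lock protects the global
```

Without the lock, four threads updating `counter` directly could interleave their read-modify-write steps and lose updates.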

Serial Execution vs Concurrency vs Parallelism

  • When programs are executed serially, they are scheduled one at a time: each runs to completion before the next begins.
  • Concurrency is when programs appear to execute simultaneously over an overlapping timeframe, but in reality the context switches between them so quickly that, to a human, they seem to run in parallel. One program can be suspended while another executes, which helps improve overall throughput and minimize latency.
  • Parallelism is when the programs really do execute at the same time. This requires hardware support: multicore processor systems, or computing clusters in which several machines execute independent pieces of a program.
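As a rough illustration of serial versus concurrent execution, this Python sketch (the task names are made up) runs the same task first back-to-back and then on two threads whose steps interleave in time; true parallelism would additionally need multiple cores, e.g. via separate processes:

```python
import threading
import time

log = []

def task(name):
    for i in range(3):
        log.append(f"{name}:{i}")  # list.append is thread-safe in CPython
        time.sleep(0.01)           # sleeping yields the CPU so tasks can overlap

# Serial execution: one task runs to completion before the next starts.
task("serial-A")
task("serial-B")

# Concurrent execution: the two tasks' steps interleave over the same timeframe.
a = threading.Thread(target=task, args=("conc-A",))
b = threading.Thread(target=task, args=("conc-B",))
a.start(); b.start()
a.join(); b.join()

print(log)
```

The first six log entries always appear in strict serial order; the order of the last six depends on how the scheduler interleaves the two threads.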

I/O bound vs CPU bound

A program in execution uses system resources such as CPU time, disk storage, network bandwidth, and memory. How these resources are used depends entirely on the type of program, and programs are broadly categorized as CPU-bound or I/O-bound. CPU-bound programs spend most of their time computing and need powerful CPU resources to run quickly. I/O-bound programs spend most of their time waiting on input/output operations, such as reading from and writing to disk or the network. CPU-bound programs can be sped up by adding more or faster CPU cores, while I/O-bound programs can be made more efficient by giving up the CPU while I/O operations are in progress.
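A minimal sketch of the two categories (the function names are illustrative, and `time.sleep` stands in for a real disk or network call):

```python
import time

def cpu_bound(n):
    # Spends its time computing; its speed is limited by the processor.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound():
    # Spends its time waiting, not computing; while blocked here, the CPU
    # is free to run other threads or processes.
    time.sleep(0.1)
    return "done"

print(cpu_bound(10))   # 285
print(io_bound())      # done
```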

Synchronous vs Asynchronous execution

  • Synchronous execution means the code runs line by line: the next statement cannot execute until the previous one has completed.
  • In the case of asynchronous execution, the next line of code can execute even if a previous operation, e.g. a function call, has not yet completed. Have you ever wondered how Google Drive lets us carry on with other tasks even while file uploads are in progress?
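A minimal asyncio sketch of the idea (the `upload` coroutine is a stand-in, not Google Drive's actual mechanism): while one simulated upload is waiting, the other makes progress, so the two together take about as long as one.

```python
import asyncio

async def upload(name):
    # await suspends this coroutine, letting other coroutines run in the
    # meantime, much like an upload continuing in the background.
    await asyncio.sleep(0.1)
    return f"{name} uploaded"

async def main():
    # Both "uploads" run concurrently: total time is about 0.1s, not 0.2s.
    return await asyncio.gather(upload("a.txt"), upload("b.txt"))

print(asyncio.run(main()))  # ['a.txt uploaded', 'b.txt uploaded']
```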

Cooperative Multitasking vs Preemptive Multitasking

  • In preemptive multitasking, the operating system scheduler is the big boss that decides which program runs and for how long. The disadvantage is that a program can be switched out at any moment and doesn’t know when it will get the CPU again.
  • In cooperative multitasking, each program is responsible for yielding control so that other programs can execute. The disadvantage is that if a malicious or badly written program never yields, it can run indefinitely and starve the others.
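Cooperative multitasking can be sketched with Python generators, where each `yield` is a task voluntarily handing control back to a toy round-robin scheduler (all names here are illustrative):

```python
def task(name, steps):
    # Each yield is this task voluntarily giving control back to the scheduler.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(tasks):
    # A tiny cooperative scheduler: it can only switch when a task yields.
    # A task that never yields would run forever and starve the others.
    order = []
    while tasks:
        current = tasks.pop(0)
        try:
            order.append(next(current))
            tasks.append(current)  # re-queue the task for its next turn
        except StopIteration:
            pass                   # task finished; drop it
    return order

print(round_robin([task("A", 2), task("B", 2)]))  # ['A:0', 'B:0', 'A:1', 'B:1']
```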

Throughput & Latency

Throughput is the amount of work done per unit of time, while latency is the time required to complete a single task. The two are related but are not simple inverses: for example, batching work can raise throughput while increasing the latency of each individual task.
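A back-of-the-envelope sketch of that trade-off, with made-up timings: batching raises the number of requests completed per second, but each request now waits for its whole batch to finish.

```python
# One request at a time, each taking 10 ms (assumed).
service_time = 0.010                      # seconds per request
throughput = 1 / service_time             # requests completed per second
latency_single = service_time             # time to finish one request

# Batching 100 requests amortizes a fixed 50 ms overhead (assumed), but
# every request in the batch waits until the whole batch completes.
batch_size = 100
batch_time = 0.050 + batch_size * 0.001   # overhead + 1 ms per request
batch_throughput = batch_size / batch_time
batch_latency = batch_time

print(throughput, latency_single)         # ~100 req/s, 10 ms
print(batch_throughput, batch_latency)    # ~667 req/s, 150 ms
```

Throughput went up by more than 6x, yet each request's latency got 15x worse.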

Critical Section, Race Condition, Deadlocks

A critical section is a part of the code that accesses a shared resource and may be executed concurrently by multiple threads. When two or more threads enter a critical section at the same time and the outcome depends on the order in which their operations interleave, the result is a race condition, which leads to inconsistencies in the application. A deadlock is a different failure mode: if thread A holds resource R1 and thread B holds resource R2, and each is waiting for the resource held by the other (thread A waits for R2 while thread B waits for R1), neither thread can ever proceed.
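Actually running a deadlock would hang forever, so the Python sketch below instead shows the standard way to avoid the circular wait just described: both threads acquire the two locks in the same fixed order (all names are illustrative):

```python
import threading

r1 = threading.Lock()   # stands in for resource R1
r2 = threading.Lock()   # stands in for resource R2

def use_both(name, log):
    # Both threads take r1 before r2. If one thread took them in the
    # opposite order, each could end up holding one lock while waiting
    # for the other's — the circular wait that produces a deadlock.
    with r1:
        with r2:
            log.append(name)   # the critical section

log = []
a = threading.Thread(target=use_both, args=("A", log))
b = threading.Thread(target=use_both, args=("B", log))
a.start(); b.start()
a.join(); b.join()

print(sorted(log))  # ['A', 'B'] — both threads finished; no deadlock
```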

I hope you liked this article. Please feel free to comment with your thoughts below and follow me for more interesting articles.

Have a great day ahead :)
