Multithreading allows code to execute concurrently. For example, an application can load data in the background while still letting the user work with its windows. Java was one of the first mainstream languages to support multithreading out of the box.


process

A process is a self-contained execution environment. In other words, processes work independently of each other.

Each process works in isolation: one process does not have access to the data of another process.

The OS allocates resources, memory, and CPU time for each process. Most applications use only one process, but an application can spawn additional processes.


thread

In most systems, threads are components of a process that execute code.

Every process has at least one thread, the main thread. Threads within the same process share global data and an address space, so creating a new thread requires far fewer resources than creating a new process.

Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching generally happens very often and rapidly enough that users perceive the threads or tasks as running in parallel.

On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.
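As a minimal sketch of Java's threading API, a second thread can be started by passing a Runnable to the Thread constructor (the class and thread names here are arbitrary):

```java
public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        // The main thread exists from the start; this creates a second one.
        Thread worker = new Thread(() -> {
            System.out.println("running in: " + Thread.currentThread().getName());
        }, "worker-1");

        worker.start();   // begins executing the Runnable concurrently
        worker.join();    // main thread waits for the worker to finish
    }
}
```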

daemon thread

A daemon thread is a special kind of thread in Java and some other languages. Daemon threads cannot prevent the JVM from exiting: once all user threads have finished executing, the JVM terminates any remaining daemon threads itself.

A daemon thread is typically a low-priority thread that runs in the background to perform tasks such as garbage collection.

In C#, a daemon thread corresponds to a background thread, and a user thread to a foreground thread.
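In Java, a thread becomes a daemon when setDaemon(true) is called before it is started. A small sketch (the infinite loop stands in for background work):

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                // background work; the JVM will not wait for this loop
            }
        });
        daemon.setDaemon(true); // must be set before start()
        daemon.start();
        // main (a user thread) ends here, so the JVM exits
        // even though the daemon thread is still looping
    }
}
```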

atomic actions

An atomic action is an operation that is guaranteed to run to completion without interruption. For example, assigning an int value to a variable in Java is an atomic operation. But the increment operation is not atomic: it can be interrupted by a switch to another thread, which can modify the variable before the operation finishes. You should use synchronization tools to make a section of code execute atomically.
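A sketch of the lost-update problem caused by a non-atomic increment, contrasted with java.util.concurrent.atomic.AtomicInteger (the iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementDemo {
    static int plain = 0;                               // unsynchronized counter
    static AtomicInteger atomic = new AtomicInteger(0); // atomic counter

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                   // read-modify-write: NOT atomic
                atomic.incrementAndGet();  // atomic read-modify-write
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join();  b.join();

        // plain is often less than 200000 because some increments were lost;
        // atomic is always exactly 200000
        System.out.println("plain = " + plain + ", atomic = " + atomic.get());
    }
}
```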


synchronization

Synchronization allows you to give a thread exclusive access to an object.

A mutex (mutual exclusion) is a special object for thread/process synchronization. It can be either "busy" or "free". For example, such a mutex is attached to every object in Java, but only the JVM has direct access to it.

A monitor is a mechanism that controls exclusive access to an object. It consists of a monitored object (the object of synchronization) and a synchronized block (the monitored/lock block). The synchronized block is a critical section of code that must be executed atomically.

// In Java a monitor is implemented as a synchronized block
synchronized (monitoredObject) { // acquire the object's mutex (wait while busy)
    //... doing task             // critical section
}                                // release the mutex


What is going on here? Before the synchronized block, the Java compiler adds code that checks whether the mutex is free or busy. If it is busy, the current thread waits for the mutex to become free. The mutex is then marked busy at the beginning of the block and marked free again at the end.
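A minimal sketch of this in practice: a counter whose increment is guarded by a synchronized block (the class name and lock field are illustrative):

```java
public class SynchronizedCounter {
    private final Object lock = new Object(); // its mutex guards the counter
    private int count = 0;

    public void increment() {
        synchronized (lock) {  // mark the mutex busy, or wait until it is free
            count++;           // critical section: one thread at a time
        }                      // mark the mutex free again
    }

    public int get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println(c.get()); // always 200000: no increments are lost
    }
}
```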


semaphore

A semaphore is an object that controls how many threads can access a resource at the same time. For example, if too many threads read files from the same disk, overall performance will decrease.

A binary semaphore has two values, 1 and 0. It can be used as a mutex, but don't confuse the two.

mutex vs binary semaphore

  1. A mutex is based on a locking mechanism, while a semaphore is based on a signaling mechanism.
  2. A mutex is released only by the thread that acquired it, after it has executed the critical section. A semaphore, in contrast, can be released by a different thread than the one that acquired it.
  3. Only one thread can hold a mutex at a time, while multiple threads can acquire a semaphore concurrently.
  4. The thread that acquired a mutex is considered its owner; a semaphore has no owner.

In conclusion, if a resource has multiple instances, it is better to use a counting semaphore; if it has a single instance, it is better to use a mutex.
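As an illustration of a counting semaphore, java.util.concurrent.Semaphore can limit how many threads read from the disk at once (the permit count of 3, the class name, and the file names are all arbitrary):

```java
import java.util.concurrent.Semaphore;

public class DiskReaders {
    // at most 3 threads may read from the disk at the same time
    private static final Semaphore permits = new Semaphore(3);

    static void readFile(String name) throws InterruptedException {
        permits.acquire();        // take a permit, or block until one is free
        try {
            System.out.println(name + " reading, free permits: "
                    + permits.availablePermits());
        } finally {
            permits.release();    // always return the permit
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int n = i;
            new Thread(() -> {
                try { readFile("file-" + n); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }
}
```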

data visibility across threads

A thread can cache data from main memory. This means that a variable can have different values in different threads and in main memory; in other words, a thread may not see the actual value of the variable.

Synchronized blocks guarantee that all variables accessed inside the block are read from main memory, and when the thread exits the block, all updated variables are flushed back to main memory.

Sometimes it is preferable for a variable to always be visible across threads. In Java this can be achieved with the volatile keyword. It also guarantees that single read and write operations are atomic, even for the double and long types, but not compound operations such as increment.
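A minimal sketch of volatile used as a stop flag (the class name and the 100 ms sleep are arbitrary):

```java
public class VolatileFlag {
    // without volatile, the worker might keep reading a stale cached copy
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin; each check re-reads `running` from main memory
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);  // let the worker spin for a moment
        running = false;    // this write becomes visible to the worker
        worker.join();      // returns because the worker observed the change
    }
}
```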


advantages

  1. More efficient use of resources.
  2. It can simplify the structure of the application, for example when you implement the producer/consumer pattern on separate threads.
  3. The application feels more responsive: the user does not have to wait for every task to complete.
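As a sketch of the producer/consumer pattern mentioned above, java.util.concurrent.BlockingQueue handles the hand-off between two threads (the queue capacity and item counts are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i);  // blocks while the queue is full
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks while empty
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```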


disadvantages

  1. Debugging is more difficult.
  2. You must manage threads carefully; otherwise, you may run into one of the classic multithreading problems.