Concurrency Models #0: Introduction
- Concurrency Models #0: Introduction
- Concurrency Models #1: Parallel Algorithms
Concurrency is nothing new, but it has become one of the hottest topics in computer science today. It is also the key that unlocks responsiveness, fault tolerance, efficiency, and simplicity in a system. Next semester (Fall 2017) I will share some thoughts on concurrency models at Google Camp, and I plan to write a series of tutorials on my blog at the same time. Related lecture slides can be found here.
This series of tutorials will focus mainly on concurrent programming theories and practices.
Code examples will be drawn from a number of programming languages, since making use of a specific concurrency framework often requires a specific language. Because the theme is concurrency, there will be some aspects of these languages that I use without explaining in detail. To fully understand them, you should have basic knowledge of the syntax and features of the following programming languages:
- Clojure (or Haskell)
- Elixir (or Erlang)
In addition, basic knowledge of operating systems is helpful as well.
Topics to be Covered
I am going to cover some approaches that are already mainstream in the concurrent world, and also reveal the knowledge and theory behind them. Specifically, the following topics will be discussed in order:
- Parallel Algorithms
- Asynchronous Programming
- Communicating Sequential Processes
- Functional Programming
- The Lambda Architecture
Concurrency: Beyond Multiple Cores
Let’s get down to business. The emergence of concurrency models originates in the so-called “multicore crisis”: microprocessors have stopped getting faster every 18 months. Instead of delivering faster clock speeds, Moore’s law continues to deliver more transistors per chip, and thus we have computers with increasingly many cores.
The days of waiting for faster hardware are (long) gone. Therefore, to build software systems that perform efficiently, you need to incorporate concurrency into your system designs.
Concurrent or Parallel?
The most common question is: what is the difference between concurrency and parallelism? They actually refer to different things, though they are often used interchangeably.
When we execute a program, we create a process. A sequential program has a single thread of control, while a concurrent program has multiple threads of control. A single computer can have multiple processes running at once. If that machine has a single processor, then the illusion of multiple processes running at once is just that: an illusion. Thus, if a machine has more than a single processor, then true parallelism can occur: you can have \(N\) processes running simultaneously on a machine that has \(N\) processors.
- A concurrent program has multiple logical threads of control. These threads may or may not run in parallel.
- A parallel program potentially runs more quickly than a sequential program by executing different parts of the computation in parallel. It may or may not have more than one logical thread of control.
An alternative way of thinking about it is that concurrency is part of the problem domain—multiple events can happen at the same time. Parallelism, by contrast, is an aspect of the solution domain—to make programs faster, it is necessary to design a program such that computations occur simultaneously.
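To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration; the series itself will use the languages listed above). The program below is concurrent: it has four logical threads of control. Whether they actually run in parallel depends on how many processors the machine has, but the program's meaning is the same either way.

```python
# A concurrent program: multiple logical threads of control.
# They may or may not run in parallel on the underlying hardware.
import threading

results = []
lock = threading.Lock()

def worker(name):
    # Each thread is an independent logical thread of control.
    with lock:
        results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four logical threads complete, regardless of how many cores ran them.
print(sorted(results))  # → [0, 1, 2, 3]
```

Note that the lock is there for correctness of the shared list, not to create parallelism; parallelism would be a property of how the runtime schedules these threads onto processors.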
Features of Concurrency
Concurrency is about much more than just exploiting parallelism. As I said before, concurrency is the key that unlocks responsiveness, fault tolerance, efficiency, and simplicity in a system.
Concurrency is the key to responsive systems. Responsiveness has become an indispensable quality of software. Basically, it means that we do not block waiting for a time-consuming task, such as an I/O operation. On the server side, concurrency enables a web server to handle requests from an enormous number of clients, and guarantees that a single slow request does not hold up the others. On the client side, the main/GUI thread of the process is not blocked and remains free to handle user events.
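A tiny sketch of this idea, using Python's asyncio for illustration: two simulated requests are issued concurrently, and the slow one does not hold up the fast one, so the total time is roughly that of the slowest request rather than the sum of both.

```python
import asyncio
import time

async def slow_request():
    await asyncio.sleep(0.2)   # simulated slow I/O
    return "slow done"

async def fast_request():
    await asyncio.sleep(0.01)  # simulated fast I/O
    return "fast done"

async def main():
    # Both requests are in flight at once; neither blocks the other.
    return await asyncio.gather(slow_request(), fast_request())

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
print(results)  # → ['slow done', 'fast done']
# elapsed is about 0.2s, not 0.21s: the slow request did not hold up the fast one.
```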
The efficiency that concurrency brings can be characterized both in terms of probability and in terms of time: a process is more efficient than another if it can perform certain computations with a greater probability of success or in less time. Concurrency brings efficiency because it fully utilizes the computational resources of the system.
When we talk about efficiency, we are not just dealing with speed. Efficiency should also weigh the CPU and memory overhead and the cost of ensuring data consistency. For example, if an application benefits from concurrency but requires an elaborate and computationally expensive process to ensure data consistency, then the entire strategy should be reevaluated.
Concurrency enables resilience, or fault tolerance, through independence and fault detection. Independence is important because a failure in one task should not be able to bring down another, just as a power outage at one site should not result in global downtime. Fault detection is critical so that when a task fails, a separate task is notified and can take remedial action. By comparison, sequential software can never be as resilient as concurrent software.
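The supervision idea can be sketched in a few lines. This is a hypothetical toy (Erlang/Elixir supervisors are the canonical real-world form): a separate "supervisor" detects that a task failed and takes remedial action by restarting it, so one failure does not bring the system down.

```python
def flaky_task(state):
    # A task that fails transiently on its first two attempts.
    state["attempts"] += 1
    if state["attempts"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

def supervise(task, state, max_restarts=5):
    # The supervisor is independent of the task: it detects the fault
    # and takes remedial action (here, restarting the task).
    for _ in range(max_restarts):
        try:
            return task(state)
        except RuntimeError:
            continue  # fault detected: restart
    raise RuntimeError("gave up after max_restarts")

state = {"attempts": 0}
print(supervise(flaky_task, state))  # → ok
print(state["attempts"])             # → 3 (two failures, then success)
```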
It is controversial among programmers that concurrency brings simplicity to programs, especially among those who have suffered through debugging multithreaded code. Generally, sequential expression is considered simple and reliable, whereas the expression of concurrency is perceived to be complex, non-deterministic, and difficult. However, the perception of the simplicity of sequentiality and the complexity of concurrency is an illusion: when written in the right language with the right tools, a concurrent solution is often simpler and clearer than its sequential equivalent.
Why Worry about Concurrency?
“Concurrency is hard, and I have only ever needed single-threaded programs. Why should I care about it?” Newcomers often argue this way. However, we cannot indulge in such a fantasy.
In today’s rapidly changing environment, multicore computers and clusters are on the rise to meet demand. Growth rates for chip clock speed are flat, so you can no longer wait 18 months for a twofold speed-up. Instead, chips are going to get wider: more cores, wider memory buses, more memory. Under such circumstances, a single-threaded application will not see any significant performance gains from new devices. Software will only see performance gains if it is designed to do more work in parallel as the number of available processors increases. So the application’s computations must be amenable to parallelization: it must be possible to break its work into tasks that can run at the same time with no need to coordinate with each other.
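Here is a sketch of what “amenable to parallelization” means: the work is split into chunks that need no coordination with each other, so each chunk can in principle run on a separate core. (Python threads are used here only to keep the sketch self-contained; for CPU-bound work a process pool would sidestep the interpreter lock.)

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # Each chunk is processed independently; no coordination needed.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_chunks=4):
    # Split the work into independent tasks...
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run them at the same time, and combine the results.
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```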
Moreover, concurrent programming is becoming hard to ignore: there are many application domains in which concurrency is the norm.
Drawbacks of Concurrency
While concurrency is widespread, it is error prone. Programmers trained in single-threaded programming face unfamiliar problems, such as synchronization, race conditions, deadlocks, and memory barriers.
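A race condition, the most classic of these problems, can be shown in a few lines. The read-modify-write in `counter += 1` is not atomic: without the lock, two threads can read the same value and lose an update; with it, the result is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # `counter += 1` is a read-modify-write. Without the lock,
        # two threads can read the same value and lose an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 40000 (without the lock, often less)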
Within a single machine, there are many levels of parallelism.
- Bit-Level Parallelism: The history of computing has moved from 8- to 16-, 32-, and now 64-bit architectures. A 32-bit computer, for example, can process 32-bit numbers faster than a 16-bit computer, which must split each such operation into multiple steps.
- Instruction-Level Parallelism: Modern CPUs are highly parallel, using techniques like pipelining, out-of-order execution, and speculative execution. These techniques have a profound impact on the behavior of programs.
- Data Parallelism: Data-parallel (SIMD, for “single instruction, multiple data”) architectures are capable of performing the same operations on a large quantity of data in parallel. For instance, GPUs can process lots of data in parallel with a single instruction.
- Task-Level Parallelism: The architecture of multiple processors lets multiple threads of control execute simultaneously.
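To make the bit-level point above concrete, here is a conceptual sketch (in Python, purely as illustration): simulating the addition of two 64-bit numbers on a machine whose word size is only 32 bits. The narrow machine must do the work in several steps (low halves, carry, high halves), which a 64-bit machine does in one.

```python
def add_64_on_32bit(a, b):
    # Simulate adding two 64-bit numbers with only 32-bit operations:
    # add the low halves first, then the high halves plus the carry.
    mask32 = (1 << 32) - 1
    low = (a & mask32) + (b & mask32)
    carry = low >> 32
    high = ((a >> 32) + (b >> 32) + carry) & mask32
    return ((high << 32) | (low & mask32)) & ((1 << 64) - 1)

a, b = 2**40 + 123, 2**41 + 456
print(add_64_on_32bit(a, b) == (a + b) % 2**64)  # → True
```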
Software Architecture Design Choices
When designing a modern application, we now have to ask:
- How many machines are involved?
- What software components will be deployed on each machine?
- For each component, does it need concurrency? If so, how will we achieve that concurrency? Via multiple threads, multiple processes or both?
Case Study: Google Chrome
Google Chrome is a popular web browser that uses one process per tab (multi-process) and multiple threads to handle the loading of content within each tab (multi-threaded).
Some advantages of such a design include stability, speed, and security.
- Stability: Single-process, multi-threaded browsers are vulnerable to having a crash in one tab bring down the entire browser.
- Speed: Multi-process browsers can be more responsive due to OS support.
- Security: Exploits are easier in single-process browsers, where malware loaded in one tab can grab information contained in another tab. It is much harder to grab information across processes.
- Paul Butcher. Seven Concurrency Models in Seven Weeks. The Pragmatic Programmers, LLC, 2014.
Slides accompanying this section can be found here.