Concurrency and parallelism are properties of the resolution with which you view your program. View a program at the distributed system level, vs the OS-scheduled level, vs the microarchitectural superscalar / OoO / SIMD level, and you will come to different conclusions about whether the program is "parallel" and/or "concurrent." These are contextual.
To be clear up-front: I agree with you. However, this line of thinking can be a bit disingenuous, as it can turn into convincing someone that their single-threaded C program is actually multithreaded because of OS scheduling, or because it runs on a distributed k8s cluster.
It reminds me of how, technically, there is space between everything if you go small enough (i.e. cellular or atomic), and how some elementary school kids would use that to convince you that they aren't touching you :)
Concurrency = logical simultaneity.
Parallelism = physical simultaneity.
When the former is mapped onto the latter, we encounter the question of scalability. Some examples:
Unix commands (cp, grep, awk, sort) do not scale on multi-core systems due to total lack of concurrency in their implementation.
Quicksort scales worse than merge sort because the latter allows for more concurrency.
A parallel program using a barrier between outer loop iterations (e.g. matrix inversion, or computational partial differential equations) won't scale well, because barriers are antithetical to concurrency.
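The barrier point above can be sketched in a few lines. This is a minimal illustration (Python is my choice here, not anything from the thread): every worker must wait at the barrier for the slowest one before starting the next outer iteration, so per-round stragglers cap the whole program's progress.

```python
# Sketch: a barrier between outer-loop iterations means every worker waits
# for the slowest one each round -- which is why such programs stop scaling.
import threading
import time

N_WORKERS = 4
N_ROUNDS = 3
barrier = threading.Barrier(N_WORKERS)
log = []
log_lock = threading.Lock()

def worker(wid):
    for r in range(N_ROUNDS):
        # Simulate uneven per-iteration work: worker 0 is the straggler.
        time.sleep(0.05 if wid == 0 else 0.01)
        with log_lock:
            log.append((r, wid))
        barrier.wait()  # nobody starts round r+1 until everyone finishes round r

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The log ends up grouped by round: every round-r entry precedes every
# round-(r+1) entry, because the barrier serialises the rounds.
```

The fast workers spend most of each round idle at `barrier.wait()`; adding more cores only adds more waiting.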
>Unix commands (cp, grep, awk, sort) do not scale on multi-core systems due to total lack of concurrency in their implementation.
This is not true. Unix commands are designed to be used in pipelines. Unix is actually the easiest way to run things in parallel, so easy that people don't even notice.
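The pipeline point is easy to demonstrate. As a sketch (built in Python rather than a shell one-liner, and assuming standard `seq`, `grep`, and `wc` binaries are available), each stage of a pipeline is its own process, all running at once, with the kernel streaming data between them:

```python
# Sketch: the equivalent of `seq 1 100 | grep 7 | wc -l`, built by hand.
# All three stages run simultaneously as separate processes.
import subprocess

seq = subprocess.Popen(["seq", "1", "100"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "7"], stdin=seq.stdout, stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=grep.stdout, stdout=subprocess.PIPE)
seq.stdout.close()   # allow upstream stages to get SIGPIPE if a later stage exits
grep.stdout.close()
out, _ = wc.communicate()
print(int(out))  # prints 19: the numbers 1..100 that contain a '7'
```

None of the stages knows or cares that the others exist; the parallelism comes for free from the process and pipe model.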
I think what he means is that he's defining 'concurrent' as a sort of abstract parallelism which can run on 'n' CPU cores. It becomes concrete parallelization when you set n to 2 or more but can still be considered concurrent software architecture even if someone only ever runs their copy with n=1.
To address another comment, I agree this distinction has limited utility (assuming I understand it correctly). This context of discussing architectures which promote abstract parallelism is probably about the only place it's useful.
I agree that a strict/true interpretation of 'concurrency' (as distinct from Pike's definition) would not include time slice multi-tasking since it depends on the limits of human perception to appear as truly-concurrent/executed-in-parallel when it's actually fine-grained sequential processing.
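The "abstract parallelism with n cores" reading above can be made concrete. In this sketch (my own illustration, not from the thread), the same concurrently-structured program runs unchanged whether the pool gives it one worker or several; only the degree of physical parallelism differs:

```python
# Sketch: a concurrent structure parameterised by n. With n=1 it is still
# concurrent software architecture, just never physically parallel.
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x

def run(n_workers):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(work, range(10)))

# Same program, same answer, with and without room for parallel execution.
assert run(1) == run(4) == [x * x for x in range(10)]
```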
The difference is instantaneous (parallelism) vs. over a period of time (concurrency).
Edit: compare a batch system (no concurrency and no parallelism) vs. a time-sharing system on a uniprocessor (concurrency) vs. a time-sharing system on a multiprocessor (parallelism).
There are several misleading comments here making the incorrect assumption that the term “concurrency” in software has something to do with time. In software design, when we talk about concurrency, we are talking about coordinating distinct logical processes (not necessarily OS processes). This term has limited similarity to the common-knowledge understanding of concurrent events “happening at the same time”. Logical (not temporal) ordering of operations is implemented using process synchronisation techniques.

One of the important differences between parallelism and concurrency is that parallelism is typically restricted to almost identical process definitions which aim to exploit the physical architecture of processing units. Concurrency, however, makes fewer assumptions about the physical architecture and focuses on software design in a more abstract way, with a primary objective of coordinating resource utilisation of competing processes (especially physical resources).
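The "logical, not temporal, ordering" idea above is exactly what a synchronised queue provides. A minimal sketch (my own, in Python): the consumer observes the producer's items in order because the queue coordinates the two processes, not because of wall-clock timing or relative speed.

```python
# Sketch: two logical processes coordinated over a shared resource.
# Ordering is enforced by synchronisation, not by timing.
import threading
import queue

q = queue.Queue(maxsize=2)   # the shared resource both processes compete for
received = []

def producer():
    for i in range(5):
        q.put(i)     # blocks when the queue (the resource) is full
    q.put(None)      # sentinel: logically "after" every real item

def consumer():
    while True:
        item = q.get()  # blocks until the producer has made something available
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# received is [0, 1, 2, 3, 4] regardless of how the scheduler interleaves them.
```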
cultus | 7 years ago:
https://typelevel.org/cats-effect/concurrency/basics.html
yvdriess | 7 years ago:
"Parallelism is for performance, Concurrency is for correctness."
jmct | 7 years ago:
The other angle is that concurrent programs are still concurrent on a uni-processor. Parallel programs are not.
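That angle can be shown directly. A sketch (mine, not the commenter's): two tasks that each block for 0.2s finish in about 0.2s total when run as threads, because while one waits the other runs; no second core is required for the program to be meaningfully concurrent.

```python
# Sketch: concurrency without parallelism. Two blocking tasks overlap in
# time via interleaving, even on a single processor.
import threading
import time

def task():
    time.sleep(0.2)  # stands in for blocking I/O

start = time.monotonic()
threads = [threading.Thread(target=task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# Well under the ~0.4s a purely sequential run would need.
```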