Parallel computing is a form of computation in which many calculations or processes are carried out simultaneously. It speeds up large problems by dividing them into smaller subproblems that can be solved at the same time. Parallel computers can be classified by their level of hardware support for parallelism, ranging from multi-core processors to clusters and grids.
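The divide-and-combine idea above can be sketched in a few lines. This is a minimal illustration, not part of the course material: it uses Python's `multiprocessing.Pool` to split a summation over a range into chunks that worker processes compute concurrently, then combines the partial results. The function names (`chunk_sum`, `parallel_sum`) and the choice of four workers are illustrative assumptions.

```python
from multiprocessing import Pool

def chunk_sum(bounds):
    # Sum one sub-range; each call can run on a separate core.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Divide [0, n) into roughly equal chunks, one per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    # Map the chunks onto a pool of worker processes, then
    # combine (reduce) the partial sums.
    with Pool(workers) as pool:
        partials = pool.map(chunk_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    # Matches sum(range(1_000_000)), but the work is spread across processes.
    print(parallel_sum(1_000_000))
```

On a multi-core machine each chunk runs on its own core, which is the same decomposition pattern that scales up to clusters and grids.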
Princeton University
Fall 2019
This course offers an in-depth understanding of modern computer processor and system architecture. It covers topics such as instruction-set architecture, processor organization, caches, memory, multiprocessors, and more. It is designed for senior-level undergraduates and first-year graduate students.