Distributed memory

Definition
Distributed memory refers to a computer architecture in which each processing node possesses its own private memory module that is not directly accessible by other nodes. Communication between nodes occurs explicitly through message‑passing mechanisms rather than through a shared address space.

Overview
In distributed‑memory systems, the global memory of the computer is the logical aggregation of the individual local memories of all nodes. The architecture is typical of large‑scale parallel computers, clusters, and supercomputers. Programs executed on such systems must be written to coordinate data exchange, often using standardized libraries such as the Message Passing Interface (MPI). Unlike shared‑memory systems, where any processor can read or write any location in a common memory pool, distributed memory eliminates contention for a single memory controller and scales more readily to a high number of processing elements.
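The explicit-communication model described above can be illustrated with a small sketch. The example below is an assumption for illustration only: it uses Python's standard `multiprocessing` module (not MPI) to mimic two "nodes" with private memory that exchange data solely through an explicit channel, analogous to a message-passing send/receive pair.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # This process's variables live in its own private address space;
    # the parent process cannot read them directly.
    local_data = [1, 2, 3]
    conn.send(sum(local_data))   # explicit "send" back to the parent
    conn.close()

def run():
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    result = parent_conn.recv()  # explicit "receive"; blocks until data arrives
    p.join()
    return result

if __name__ == "__main__":
    print(run())  # prints 6
```

The key point mirrored here is that no shared variable exists between the two processes: the only way data crosses the boundary is through the explicit send and receive calls, just as between nodes in a distributed-memory machine.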

Etymology/Origin
The term combines “distributed,” meaning spread across multiple locations, with “memory,” indicating the storage component of a computer. The concept emerged in the 1970s and 1980s alongside the development of parallel processing research, particularly in the context of multiprocessor machines that could not feasibly share a single physical memory due to speed and capacity limitations.

Characteristics

  • Local memory – Each node holds its own RAM, typically attached directly to its CPU.
  • Explicit communication – Data transfer between nodes requires explicit send/receive operations or collective communication primitives.
  • Scalability – Adding more nodes increases total memory and processing capacity linearly, limited primarily by network bandwidth and latency.
  • Fault isolation – Failure of one node’s memory does not directly corrupt the memory of other nodes, aiding reliability.
  • Programming model – Developers employ message‑passing APIs (e.g., MPI, PVM) or higher‑level abstractions such as Partitioned Global Address Space (PGAS) languages to manage data distribution.
  • Network dependence – Performance is heavily influenced by the interconnect topology (e.g., torus, fat‑tree) and the characteristics of the networking hardware.
  • Cache coherence – Not required across nodes, simplifying hardware design compared to cache‑coherent shared‑memory multiprocessors.
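The collective communication primitives mentioned above can also be sketched in miniature. The example below is illustrative, not an MPI implementation: the hypothetical `scatter_reduce` function uses Python's standard `multiprocessing` module to partition data across worker processes, have each compute on its own private chunk, and combine the partial results, loosely analogous to an MPI scatter followed by a reduce.

```python
from multiprocessing import Process, Pipe

def partial_sum(conn):
    chunk = conn.recv()     # receive this "node's" partition (scatter)
    conn.send(sum(chunk))   # send the partial result back (gather)
    conn.close()

def scatter_reduce(data, num_workers=4):
    # Partition the data so each worker holds only its own chunk.
    chunks = [data[i::num_workers] for i in range(num_workers)]
    pipes, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=partial_sum, args=(child,))
        p.start()
        parent.send(chunk)
        pipes.append(parent)
        procs.append(p)
    partials = [conn.recv() for conn in pipes]  # gather the partial sums
    for p in procs:
        p.join()
    return sum(partials)                        # final reduction step

if __name__ == "__main__":
    print(scatter_reduce(list(range(10))))  # prints 45
```

Note that the final reduction happens on the coordinating process after all partial results have been explicitly received; no worker ever sees another worker's chunk, reflecting the fault-isolation and private-memory characteristics listed above.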

Related Topics

  • Shared memory architecture – A contrasting model where multiple processors access a common physical memory space.
  • Message Passing Interface (MPI) – The predominant standardized library for implementing communication in distributed‑memory systems.
  • Cluster computing – The use of multiple stand‑alone computers networked together, typically employing a distributed‑memory model.
  • Supercomputing – High‑performance computing platforms that often combine distributed memory with high‑speed interconnects.
  • Partitioned Global Address Space (PGAS) – A programming model that provides a global address space while preserving the physical distribution of memory.
  • Network topology – The arrangement of interconnections among nodes, influencing latency and bandwidth in distributed systems.