Vega machine
The term "Vega machine" typically refers to a massively parallel processing (MPP) system, most notably associated with the early supercomputers developed by Thinking Machines Corporation. Although it is not a formal architectural designation in the academic literature, "Vega machine" has become shorthand for systems conceptually similar to, or inspired by, the Connection Machine series, particularly the CM-2, which used custom chips code-named "Vega."
The core characteristics of a Vega machine include:
- Massive Parallelism: A large number of processing elements (PEs) are interconnected, enabling simultaneous execution of operations across a dataset. The CM-2, for instance, could have up to 65,536 PEs.
- Bit-Serial Processing: Individual PEs often operate in a bit-serial fashion, performing arithmetic on one bit at a time. This design allows for high PE density and reduced chip complexity compared to giving each PE a full-width parallel datapath (a bit-serial addition sketch follows the list).
- Data Parallelism: The programming model typically emphasizes data parallelism, in which the same operation is applied to many data elements concurrently (illustrated after the list).
- Interconnection Network: A sophisticated interconnection network, such as the hypercube network used in the Connection Machine, allows efficient communication between PEs. The topology and bandwidth of this network are crucial for performance (see the hypercube sketch after the list).
- Front-End System: A separate, more conventional computer (often a workstation or mainframe) serves as the front end. It handles program compilation, input/output, and overall system management, delegating the computationally intensive work to the parallel array of PEs.
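As a rough illustration of bit-serial arithmetic (a sketch only, not actual PE microcode), the following Python function adds two unsigned integers one bit per step using nothing more than a full adder and a one-bit carry register, which is essentially the per-cycle work of a bit-serial PE:

```python
def bit_serial_add(a, b, width=16):
    """Add two unsigned integers one bit per step: a single full adder
    plus a one-bit carry register, as a bit-serial PE would."""
    carry = 0
    result = 0
    for i in range(width):                     # one cycle per bit position
        a_bit = (a >> i) & 1
        b_bit = (b >> i) & 1
        result |= (a_bit ^ b_bit ^ carry) << i              # full-adder sum bit
        carry = (a_bit & b_bit) | (carry & (a_bit ^ b_bit)) # full-adder carry out
    return result


assert bit_serial_add(1234, 4321) == 5555
```

An n-bit addition therefore takes roughly n cycles on one PE, but the same n cycles add thousands of element pairs when every PE executes the broadcast instruction at once.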
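The data-parallel model can be pictured as one instruction applied to every element of a collection simultaneously. The sketch below is a purely illustrative, serial Python stand-in; the variable names are hypothetical and it does not represent any actual Connection Machine programming interface.

```python
# Purely illustrative: Python lists stand in for the PE array, with one value
# resident in each PE's local memory.
pe_values_a = [1, 2, 3, 4, 5, 6, 7, 8]
pe_values_b = [10, 20, 30, 40, 50, 60, 70, 80]

# A single "add" instruction is conceptually broadcast to all PEs and applied
# to every element pair at once; the comprehension below is only a serial
# stand-in for that simultaneous execution.
pe_values_sum = [a + b for a, b in zip(pe_values_a, pe_values_b)]

print(pe_values_sum)   # [11, 22, 33, 44, 55, 66, 77, 88]
```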
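The hypercube topology mentioned above can also be sketched briefly: node addresses are d-bit integers, two nodes are neighbors exactly when their addresses differ in one bit, and a message can be routed by correcting one differing bit per hop. The functions below illustrate the topology only; they are not a model of the Connection Machine's actual router hardware.

```python
def hypercube_neighbors(node, dim):
    """Neighbors of a node in a dim-dimensional hypercube: flip one address bit."""
    return [node ^ (1 << i) for i in range(dim)]


def hypercube_route(src, dst, dim):
    """Dimension-order routing: fix one differing address bit per hop."""
    path = [src]
    node = src
    for i in range(dim):
        if (node ^ dst) & (1 << i):       # addresses still differ in bit i
            node ^= 1 << i                # hop across dimension i
            path.append(node)
    return path


print(hypercube_neighbors(0b0101, 4))      # [4, 7, 1, 13]
print(hypercube_route(0b0000, 0b1011, 4))  # [0, 1, 3, 11]
```

Because every node has only d neighbors, a d-dimensional hypercube connects 2^d PEs while keeping the longest route to d hops, which is why the topology scales well to very large PE counts.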
While the original Thinking Machines Connection Machine represents the archetype of a Vega machine, the term has been used more loosely to describe other MPP systems built on similar principles, particularly those emphasizing data parallelism and fine-grained processing across a large number of simple processing elements. It appears most often in historical discussions of the evolution of parallel computing.