From 62191b2b2fbda673d23675e3cc9bfb5e1991f584 Mon Sep 17 00:00:00 2001
From: lkcl
Date: Sat, 7 May 2022 11:25:05 +0100
Subject: [PATCH]

---
 openpower/sv/SimpleV_rationale.mdwn | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/openpower/sv/SimpleV_rationale.mdwn b/openpower/sv/SimpleV_rationale.mdwn
index 0d5439635..d4e804ec0 100644
--- a/openpower/sv/SimpleV_rationale.mdwn
+++ b/openpower/sv/SimpleV_rationale.mdwn
@@ -11,6 +11,8 @@
 
 # Why in the 2020s would you invent a new Vector ISA
 
+*The short answer: you don't. Leverage and exploit existing technology*
+
 Inventing a new Scalar ISA from scratch is over a decade-long task
 including simulators and compilers: OpenRISC 1200 took 12 years to
 mature. A Vector or Packed SIMD ISA to reach stable *general-purpose*
@@ -49,15 +51,14 @@ directly integrated into the memory have traditionally not gone well:
 Aspex Microelectronics, Elixent, these are parallel processing companies
 that very few have heard of, because their software stack was so
 specialist that it required heavy investment by customers to utilise.
-D-Matrix and Graphcore are a modern incarnation of the exact same
+D-Matrix, a Systolic Array Processor, is a modern incarnation of the exact same
 "specialist parallel processing" mistake, betting heavily on AI
 with Matrix and Convolution Engines that can do no other task. Aspex
 only survived by being bought by Ericsson, where its specialised
 suitability for massive wide Baseband FFTs saved it from going under.
 The huge risk is that any "better AI mousetrap" created by an innovative
 competitor
-that comes along will quickly render both D-Matrix and
-Graphcore's approach obsolete.
+that comes along will quickly render the D-Matrix approach obsolete.
 
 NVIDIA and other GPUs have taken a different approach again: massive
 parallelism with more Turing-complete ISAs in each, and dedicated
-- 
2.30.2