From 8e3c03f3b64075d83e00d95865895bc66774074a Mon Sep 17 00:00:00 2001
From: lkcl
Date: Sat, 7 May 2022 10:45:47 +0100
Subject: [PATCH]

---
 openpower/sv/SimpleV_rationale.mdwn | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/openpower/sv/SimpleV_rationale.mdwn b/openpower/sv/SimpleV_rationale.mdwn
index ea39c854a..0d5439635 100644
--- a/openpower/sv/SimpleV_rationale.mdwn
+++ b/openpower/sv/SimpleV_rationale.mdwn
@@ -19,8 +19,8 @@ history of computing, not with the combined resources of ARM, Intel,
 AMD, MIPS, Sun Microsystems, SGI, Cray, and many more. (*Hand-crafted
 assembler and direct use of intrinsics is the Industry-standard norm
 to achieve high-performance optimisation where it matters*).
-Rather: GPUs
-have ultra-specialist compilers (CUDA) that are designed from the ground up
+GPUs fill this void both in hardware and software terms by having
+ultra-specialist compilers (CUDA) that are designed from the ground up
 to support Vector/SIMD parallelism, and associated standards
 (SPIR-V, Vulkan, OpenCL) managed by the
 Khronos Group, with multi-man-century development committment from
@@ -30,7 +30,7 @@ Therefore it begs the question, why on earth would anyone consider
 this task, and what, in Computer Science, actually needs solving?
 
 First hints are that whilst memory bitcells have not increased in speed
-since the 90s (around 150 mhz), increasing the bank width and
+since the 90s (around 150 mhz), increasing the bank width, striping, and
 datapath widths and speeds to the same has allowed significant apparent
 speed increases: 3200 mhz DDR4 and even faster DDR5, and other
 advanced Memory interfaces such as HBM, Gen-Z, and OpenCAPI,
-- 
2.30.2