From b3fdf045721cb49dfeb3d40c378cfc9e51852f84 Mon Sep 17 00:00:00 2001
From: lkcl
Date: Fri, 9 Aug 2019 09:15:42 +0100
Subject: [PATCH]

---
 zfpacc_proposal.mdwn | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/zfpacc_proposal.mdwn b/zfpacc_proposal.mdwn
index 9deb462c1..114963d0d 100644
--- a/zfpacc_proposal.mdwn
+++ b/zfpacc_proposal.mdwn
@@ -1,6 +1,7 @@
 # FP Accuracy proposal
 
 TODO: complete writeup
+
 * 
 * 
 
@@ -68,3 +69,39 @@ TODO: reduced accuracy
 sequences), while allowing portable code to execute discovery
 sequences to detect support for alternative accuracy modes.
 
+# Dynamic accuracy CSR
+
+maybe a solution would be to add an extra field to the fp control csr
+to allow selecting one of several accurate or fast modes:
+
+- machine-learning-mode: fast as possible
+  (maybe need additional requirements such as monotonicity for atanh?)
+- GPU-mode: accurate to within a few ULP
+  (see Vulkan, OpenGL, and OpenCL specs for accuracy guidelines)
+- almost-accurate-mode: accurate to <1 ULP
+  (would 0.51 or some other value be better?)
+- fully-accurate-mode: correctly rounded in all cases
+- maybe more modes?
+
+Question: should better accuracy than is requested be permitted? Example:
+Amdahl-370 issues.
+
+Comments:
+
+    Yes, embedded systems typically can do with 12, 16 or 32 bit
+    accuracy. Rarely does it require 64 bits. But the idea of making
+    a low power 32 bit FPU/DSP that can accommodate 64 bits is already
+    being done in other designs such as PIC etc I believe. For embedded
+    graphics 16 bit is more than adequate. In fact, Cornell had a very
+    innovative 18-bit floating point format described here (useful for
+    FPGA designs with 18-bit DSPs):
+
+
+
+    A very interesting GPU using the 18-bit FPU is also described here:
+
+
+
+    There are also 8 and 9-bit floating point formats that could be useful
+
+
-- 
2.30.2