# High-level Architectural Requirements

* SMP cache coherency (TileLink?)
* Minimum 800MHz clock
* Minimum 2-core SMP, more likely a 4-core uniform design,
  each core with full 4-wide SIMD-style predicated ALUs
* 6 GFLOPS single-precision FP
* 128 64-bit FP and 128 64-bit INT register files
* RV64GC compliance, for running a full GNU/Linux-based OS
* SimpleV compliance
* xBitManip (required for the VPU and ideal for predication)
* 4-lane 1Rx1W SRAMs for registers numbered 32 and above;
  Multi-R x Multi-W for registers 1-31.
  TODO: consider 2R for registers to be used as predication
  targets if >= 32.
* Potentially: lane-swapping / crossing / data-multiplexing
  bus on register data
* Potentially: registers subdivided into 16-bit, to match
  elwidth down to 16-bit (for FP16). 8-bit elwidth only
  goes down as far as twin-SIMD (with predication). This
  requires registers to have extra hidden bits: register
  x30 is now "x30.0 + x30.1 + x30.2 + x30.3". To be
  discussed (see the sketch after this list).
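
A minimal behavioural sketch of the 16-bit subdivision idea, assuming
independent hidden storage per 16-bit sub-register; the class and
method names are illustrative, not a settled design:

    # Illustrative model of one 64-bit register subdivided into four
    # 16-bit sub-registers (x30.0 .. x30.3), as proposed above.
    # Behavioural sketch only, not a settled design.

    class SubdividedReg:
        """64-bit register stored as four independent 16-bit parts."""
        def __init__(self):
            self.sub = [0, 0, 0, 0]          # x30.0 .. x30.3

        def read64(self):
            """Reassemble the full 64-bit value (LSB part first)."""
            v = 0
            for i, s in enumerate(self.sub):
                v |= (s & 0xFFFF) << (16 * i)
            return v

        def write_elwidth16(self, idx, value, pred=True):
            """Predicated 16-bit element write: only the selected
            sub-register changes; the other three are untouched."""
            if pred:
                self.sub[idx] = value & 0xFFFF

    x30 = SubdividedReg()
    x30.write_elwidth16(2, 0x3C00)           # write only x30.2 (FP16 1.0)
    assert x30.read64() == 0x3C00 << 32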

# Conversation Notes

----

I'm thinking about using TileLink (or something similar) internally, as
having a cache-coherent protocol is required for implementing Vulkan
(unless you want to turn off the cache for the GPU memory, which I
don't think is a good idea). AXI is not a cache-coherent protocol,
and TileLink already has atomic RMW operations built into the protocol.
We can use an AXI-to-TileLink bridge to interface with the memory.

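A behavioural sketch of why protocol-level atomics matter: the
read-modify-write happens in one operation at the memory agent, rather
than as a separate read and write that another master could interleave
with. This is loosely modelled on TileLink's arithmetic atomic
messages; it is not the actual wire format or API.

    # Behavioural sketch: atomic RMW performed at the memory agent,
    # loosely modelled on TileLink's arithmetic atomics (the real
    # protocol has channels, opcodes and params not shown here).

    class MemoryAgent:
        def __init__(self, size):
            self.mem = [0] * size

        def get(self, addr):
            return self.mem[addr]

        def put(self, addr, data):
            self.mem[addr] = data

        def atomic_add(self, addr, operand):
            """Single protocol operation: no other master can observe
            or modify mem[addr] between the read and the write, so no
            external lock is needed."""
            old = self.mem[addr]
            self.mem[addr] = old + operand
            return old                   # old value goes back to requester

    mem = MemoryAgent(16)
    assert mem.atomic_add(0, 5) == 0
    assert mem.get(0) == 5
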
I'm thinking we will want to have a dual-core GPU, since a single
core with 4xSIMD is too slow to achieve 6 GFLOPS at a reasonable
clock speed. Additionally, that allows us to use an 800MHz core clock
instead of the 1.6GHz we would otherwise need, letting us lower the
core voltage and save power, since dynamic power is proportional to
F\*V^2. (Clock speeds here are guesses; see the worked numbers below.)

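To make that arithmetic concrete, a rough sketch: the one-FLOP-per-lane
-per-cycle throughput and the voltage figures below are assumptions,
not measurements.

    # Rough throughput and power arithmetic for the dual-core proposal.
    # Assumes one single-precision FLOP per SIMD lane per cycle; the
    # voltage figures are illustrative guesses, not characterised values.

    LANES = 4

    def gflops(cores, clock_ghz, flops_per_lane=1):
        return cores * LANES * flops_per_lane * clock_ghz

    print(gflops(1, 1.6))   # 6.4 GFLOPS: a single core needs ~1.6GHz
    print(gflops(2, 0.8))   # 6.4 GFLOPS: same throughput at 800MHz

    # Dynamic power scales as F * V^2, so halving F and dropping V
    # (possible at the lower clock) outweighs the 2x core count.
    def rel_power(cores, f_ghz, v):
        return cores * f_ghz * v**2

    single = rel_power(1, 1.6, 1.0)    # baseline: 1.6GHz at nominal V
    dual   = rel_power(2, 0.8, 0.8)    # assumed 20% undervolt at 800MHz
    print(dual / single)               # ~0.64: ~36% dynamic-power saving
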
----

I don't know about power; however, I have done some research, and a
4KByte (or 16KByte, I can't recall) SRAM (what I was thinking of for a
tile buffer) takes in the ballpark of 1000 um^2 in 28nm.
Using a 4xFMA with a banked register file, where the bank is selected by
the lower-order register number bits, means we could probably get away
with 1Rx1W SRAM as the backing memory for the register file, similarly
to Hwacha. I would suggest 8 banks, allowing us to do more in parallel,
since we could run other units in parallel with a 4xFMA. 8 banks would
also allow us to clock-gate the SRAM banks that are not in use for the
current clock cycle, allowing us to save more power. Note that the 4xFMA
could be 4 separately allocated FMA units; it doesn't have to be SIMD
style. If we have enough hardware parallelism, we can under-volt and
under-clock the GPU cores, allowing for a more efficient GPU. If we are
using the GPU cores as CPU cores as well, I think it would be important
to be able to use a faster clock speed when not using the extended
registers (similar to how Intel processors use a lower clock rate when
AVX512 is in use), so that scalar code is not slowed down too much.

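A minimal sketch of the banked register file idea under the assumptions
above: 8 banks, the bank chosen by the low three bits of the 7-bit
register number, and one read plus one write port per bank. The class
and method names are illustrative only.

    # Sketch of an 8-bank register file built from 1Rx1W SRAMs.
    # Bank = low 3 bits of the 7-bit register number, so a 4-wide
    # vector access touches 4 distinct banks and never needs more
    # than one port per bank.  Names are illustrative only.

    NBANKS = 8
    REGS_PER_BANK = 128 // NBANKS

    class BankedRegFile:
        def __init__(self):
            self.banks = [[0] * REGS_PER_BANK for _ in range(NBANKS)]

        @staticmethod
        def bank_of(regnum):
            return regnum & (NBANKS - 1)     # least-significant 3 bits

        def cycle(self, reads, writes):
            """One cycle of accesses.  Each bank is 1Rx1W, so at most
            one read and one write per bank per cycle; banks left
            unused this cycle could be clock-gated."""
            for accesses in (reads, [r for r, _ in writes]):
                banks = [self.bank_of(r) for r in accesses]
                assert len(banks) == len(set(banks)), "bank conflict"
            results = [self.banks[self.bank_of(r)][r >> 3] for r in reads]
            for r, val in writes:
                self.banks[self.bank_of(r)][r >> 3] = val
            used = {self.bank_of(r) for r in reads}
            used |= {self.bank_of(r) for r, _ in writes}
            idle = set(range(NBANKS)) - used
            return results, idle             # idle banks -> clock-gate

    rf = BankedRegFile()
    _, idle = rf.cycle(reads=[32, 33, 34, 35], writes=[(40, 7)])
    assert len(idle) == 4                    # banks 4-7 gated this cycle
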
> > Using a 4xFMA with a banked register file where the bank is selected
> > by the lower-order register number means we could probably get away
> > with 1Rx1W SRAM as the backing memory for the register file,
> > similarly to Hwacha.
>
> okaaay.... sooo... we make an assumption that the top higher "banks"
> are pretty much always going to be "vectorised", such that, actually,
> they genuinely don't need to be 6R-4W (or whatever).
>
Yeah, pretty much, though I had meant that the bank number comes from the
least-significant bits of the 7-bit register number.
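
Under that scheme (an assumption carried over from the sketch above, not
a finalised allocation), any run of consecutive register numbers lands
in distinct banks, which is what makes the 1Rx1W banks sufficient for a
4-wide access:

    # With bank = regnum & 7, any 4 consecutive register numbers map to
    # 4 distinct banks, so a 4-wide vector read needs only one read
    # port per bank.  (Follows the BankedRegFile sketch above.)
    for base in range(0, 124):
        banks = [(base + i) & 7 for i in range(4)]
        assert len(set(banks)) == 4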