# High-level Architectural Requirements

* SMP cache coherency (TileLink?)
* Minimum 800MHz
* Minimum 2-core SMP, more likely a 4-core uniform design,
  each core with full 4-wide SIMD-style predicated ALUs
* 6 GFLOPS single-precision FP
* 128 64-bit FP and 128 64-bit INT register files
* RV64GC compliance, for running a full GNU/Linux-based OS
* SimpleV compliance
* xBitManip (required for the VPU and ideal for predication)
* 4-lane 2Rx1W SRAMs for registers numbered 32 and above;
  multi-R x multi-W for registers 1-31.
  TODO: consider 2R for registers to be used as predication targets
  if >= 32.
* Idea: a generic implementation of ports on the register file, so as
  to be able to experiment with different arrangements.
* Potentially: lane-swapping / crossing / data-multiplexing
  bus on register data (particularly because of SHAPE-REMAP (1D/2D/3D))
* Potentially: registers subdivided into 16-bit, to match
  elwidth down to 16-bit (for FP16). 8-bit elwidth only
  goes down as far as twin-SIMD (with predication). This
  requires registers to have extra hidden bits: register
  x30 is now "x30.0 + x30.1 + x30.2 + x30.3". To be discussed.

# Conversation Notes

----

I'm thinking about using TileLink (or something similar) internally, as
having a cache-coherent protocol is required for implementing Vulkan
(unless you want to turn off the cache for the GPU memory, which I
don't think is a good idea). AXI is not a cache-coherent protocol,
and TileLink already has atomic RMW operations built into the protocol.
We can use an AXI-to-TileLink bridge to interface with the memory.

I'm thinking we will want to have a dual-core GPU, since a single
core with 4xSIMD is too slow to achieve 6 GFLOPS at a reasonable
clock speed. Additionally, that allows us to use an 800MHz core clock
instead of the 1.6GHz we would otherwise need, allowing us to lower the
core voltage and save power, since the power used is proportional to
F\*V^2. (Just guessing on clock speeds.)

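As a rough sanity-check of those clock figures (a sketch, assuming 4
single-precision FP ops per cycle per core, matching the 4-wide SIMD
requirement above; whether an FMA is counted as one or two FLOPs here is
my assumption):

<pre>
# back-of-envelope: clock needed to hit 6 GFLOPS at a given core count
flops_per_cycle_per_core = 4      # 4-wide SIMD, single-precision
target_gflops = 6.0

for cores in (1, 2):
    needed_ghz = target_gflops / (cores * flops_per_cycle_per_core)
    print(cores, "core(s):", needed_ghz, "GHz")
# 1 core(s): 1.5 GHz   (roughly the 1.6GHz figure above)
# 2 core(s): 0.75 GHz  (comfortably inside an 800MHz clock)
</pre>
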
----

I don't know about power, however I have done some research, and a 4Kbyte
(or 16Kbyte, I don't recall) SRAM (what I was thinking of for a tile
buffer) takes in the ballpark of 1000 um^2 in 28nm.
Using a 4xFMA with a banked register file where the bank is selected by the
lower-order register number means we could probably get away with 1Rx1W
SRAM as the backing memory for the register file, similarly to Hwacha. I
would suggest 8 banks, allowing us to do more in parallel since we could
run other units in parallel with a 4xFMA. 8 banks would also allow us to
clock-gate the SRAM banks that are not in use for the current clock cycle,
allowing us to save more power. Note that the 4xFMA could be 4 separately
allocated FMA units; it doesn't have to be SIMD-style. If we have enough
hardware parallelism, we can under-volt and under-clock the GPU cores,
allowing for a more efficient GPU. If we are using the GPU cores as CPU
cores as well, I think it would be important to be able to use a faster
clock speed when not using the extended registers (similar to how Intel
processors use a lower clock rate when AVX512 is in use) so that scalar
code is not slowed down too much.

> > Using a 4xFMA with a banked register file where the bank is selected
> > by the lower-order register number means we could probably get away
> > with 1Rx1W SRAM as the backing memory for the register file,
> > similarly to Hwacha.
>
> okaaay.... sooo... we make an assumption that the top higher "banks"
> are pretty much always going to be "vectorised", such that, actually,
> they genuinely don't need to be 6R-4W (or whatever).

Yeah, pretty much, though I had meant that the bank number comes from the
least-significant bits of the 7-bit register number.

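A minimal sketch of that bank-selection scheme (Python; the 8-bank count
and the 7-bit register number are from the discussion above, the helper
names are mine):

<pre>
# banked 1Rx1W register file: the bank number comes from the
# least-significant bits of the 7-bit register number, so consecutive
# registers land in different banks and a 4xFMA can read its operands
# from 4 different banks in the same cycle.

NBANKS = 8              # 8 banks; unused banks can be clock-gated

def bank_of(regnum):
    "bank index = low-order bits of the register number"
    return regnum & (NBANKS - 1)

def index_in_bank(regnum):
    "row within the selected bank's 1Rx1W SRAM"
    return regnum >> 3  # log2(NBANKS) == 3

# a 4-element vector starting at register 64 hits banks 0..3,
# so all four reads can proceed in parallel:
assert [bank_of(64 + i) for i in range(4)] == [0, 1, 2, 3]
</pre>
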
----

Assuming 64-bit operands:
if you organize 2 SRAM macros and use the pair of them to read/write
4 registers at a time (256 bits), the pipeline will allow you to
dedicate 3 cycles for reading and 1 cycle for writing (4 registers each).

<pre>
RS1 = Read of operand S1
WRd = Write of result Dst
FMx = Floating Point Multiplier, x = stage.

|RS1|RS2|RS3|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|WRd|
|RS1|RS2|RS3|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|WRd|
|RS1|RS2|RS3|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|
|FWD|FM1|FM2|FM3|FM4|WRd|
</pre>

The only trick is getting the read and write dedicated on different clocks.
When the RS3 operand is not needed (60% of the time) you can use
the time slot for reading or writing on behalf of memory refs; STs read,
LDs write.

You will find doing VRFs a lot more compact this way. In GPU land we
called the flip-flops orchestrating the timing "collectors".

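A toy model of that shared-port rotation (Python; the slot order and the
donation of the unused RS3 slot to memory refs are from the text above,
everything else is illustrative):

<pre>
# 4-cycle rotation of the paired-SRAM port: 3 read slots + 1 write slot.
# When RS3 is not needed, its slot is donated to memory references
# (stores read registers, loads write them).

def port_schedule(needs_rs3, mem_op=None):
    "what the shared SRAM port does on each of the 4 cycles"
    slots = ["RS1", "RS2", None, "WRd"]
    if needs_rs3:
        slots[2] = "RS3"
    elif mem_op == "ST":
        slots[2] = "ST-read"        # store reads its data register
    elif mem_op == "LD":
        slots[2] = "LD-write"       # load writes its result register
    else:
        slots[2] = "idle"
    return slots

print(port_schedule(needs_rs3=True))                # FMA needs all 3 reads
print(port_schedule(needs_rs3=False, mem_op="ST"))  # slot donated to a store
</pre>
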
----

For GPU workloads FP64 is not common, so I think having 1 FP64 ALU would
be sufficient. Since indexed loads and stores are not supported, it will
be important to support 4x 64-bit integer operations to generate addresses
for loads/stores.

I was thinking we would use scoreboarding to keep track of operations
and dependencies, since it doesn't need a CAM per ALU. We should be able
to design it to forward past the register file to allow for 0-latency
forwarding. If we combined that with register renaming it should prevent
most WAR and WAW data hazards.

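A minimal sketch of the scoreboard idea (Python; the structure is
illustrative — the point is a directly-indexed table per register rather
than a CAM per ALU):

<pre>
# scoreboard: one busy bit per architectural register, indexed directly
# (no CAM).  An instruction issues only when none of its sources are
# still pending a write from an earlier instruction.

NREGS = 128
busy = [False] * NREGS    # True: a write to this register is in flight

def can_issue(srcs):
    "stall on RAW: a source is still being written by an earlier op"
    return not any(busy[s] for s in srcs)

def issue(srcs, dest):
    assert can_issue(srcs)
    busy[dest] = True     # destination now in flight

def writeback(dest):
    busy[dest] = False    # result forwarded/written; register free again

issue(srcs=[1, 2], dest=3)
print(can_issue(srcs=[3]))   # False: RAW hazard on x3
writeback(3)
print(can_issue(srcs=[3]))   # True
</pre>
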
I think branch prediction will be essential, if only to fetch and decode
operations, since it will reduce the branch penalty substantially.

Note that even if we have a zero-overhead loop extension, branch
prediction will still be useful, as we will want to be able to run code
like compilers and standard RV code with decent performance. Additionally,
quite a few shaders have branching in their internal loops, so
zero-overhead loops won't be able to fix all the branching problems.

----

> you would need a 4-wide CDB anyway, since that's the performance we're
> trying for.

If the 32-bit ops can be grouped as 2x SIMD to a 64-bit-wide ALU,
then only 2 such ALUs would be needed to give 4x 32-bit FP per cycle
per core, which means only a 2-wide CDB: a heck of a lot better than
4-wide.

Oh: I thought of another way to cut the power impact of the Reorder
Buffer CAMs: a simple bit-field (a single-bit 2R2W memory, of address
length equal to the number of registers; 2 is because of 2-issue).

The CAM of a ROB is on the instruction's destination register. Key:
ROB number; value: instruction destination register. If you have a
bitfield that says "this destreg has no ROB tag", it's dead easy to
check that bitfield first.

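A sketch of that filter (Python; the dict stands in for the CAM — the
point is the cheap bitfield check that gates the expensive CAM lookup):

<pre>
# one bit per architectural register: "does this destreg currently have
# a ROB tag?"  Checked before the ROB CAM is searched at all.

NREGS = 128
has_rob_tag = [False] * NREGS

rob_cam = {}   # stand-in for the CAM: key = ROB number, value = dest reg

def find_rob_entries(regnum):
    if not has_rob_tag[regnum]:
        return []             # common case: the CAM is never searched
    return [rob for rob, dest in rob_cam.items() if dest == regnum]

rob_cam[5] = 30
has_rob_tag[30] = True
print(find_rob_entries(30))   # [5] (bitfield set, CAM searched)
print(find_rob_entries(31))   # []  (bitfield short-circuits the CAM)
</pre>
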
----

Avoiding Memory Hazards

* WAW and WAR hazards through memory are eliminated with speculation,
  because actual updating of memory occurs in order, when a store is at
  the head of the ROB; hence, no earlier loads or stores can still
  be pending
* RAW hazards are maintained by two restrictions:
  1. not allowing a load to initiate the second step of its execution if
     any active ROB entry occupied by a store has a destination
     field that matches the value of the A field of the load, and
  2. maintaining the program order for the computation of the effective
     address of a load with respect to all earlier stores
* These restrictions ensure that any load that accesses a memory location
  written to by an earlier store cannot perform the memory access until
  the store has written the data

Advantages of Speculation, Load and Store hazards:

* A store updates memory only when it reaches the head of the ROB
* WAW and WAR types of hazards are eliminated with speculation
  (actual updating of memory occurs in order)
* RAW hazards through memory are maintained by not allowing a load
  to initiate the second step of its execution
* Check if any store has a destination field that matches the
  value of the load:
  - SD F1 100(R2)
  - LD F2 100(R2)

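A minimal sketch of that address-match check (Python; the representation
is illustrative — only the rule "a load must wait while an older
in-flight store matches its address" comes from the notes above):

<pre>
# each in-flight store occupies a ROB entry holding its effective
# address.  A load's second execution step (the actual memory read) is
# blocked while any older store in the ROB matches the load's address.

def load_may_access_memory(load_addr, older_store_addrs):
    "older_store_addrs: effective addresses of stores still in the ROB"
    return all(addr != load_addr for addr in older_store_addrs)

# the SD/LD pair above: both resolve to 100+R2, so the load must wait
# until the store has written its data.
r2 = 0x1000
print(load_may_access_memory(100 + r2, [100 + r2]))   # False: must wait
print(load_may_access_memory(200 + r2, [100 + r2]))   # True: may proceed
</pre>
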
Exceptions

* Exceptions are handled by not recognising the exception until the
  instruction that caused it is ready to commit in the ROB (reaches the
  head of the ROB)

Reorder Buffer

* Results of an instruction become visible externally when it leaves
  the ROB
  - Registers updated
  - Memory updated

Reorder Buffer Entry

* Instruction type
  - branch (no destination result)
  - store (has a memory address destination)
  - register operation (ALU operation or load, which has a register
    destination)
* Destination
  - register number (for loads and ALU ops), or
  - memory address (for stores), where the result should be written
* Value
  - value of the instruction result, pending a commit
* Ready
  - indicates that the instruction has completed execution: the value
    is ready

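The same entry as a data-structure sketch (Python; the field names are
mine, the four fields are from the list above):

<pre>
from dataclasses import dataclass
from typing import Optional

@dataclass
class ROBEntry:
    instr_type: str               # "branch" | "store" | "regop"
    dest: Optional[int] = None    # register number (regop) or memory
                                  # address (store); None for a branch
    value: Optional[int] = None   # instruction result, pending commit
    ready: bool = False           # True once execution has completed

# a load writing register x7, still executing:
e = ROBEntry(instr_type="regop", dest=7)
e.value, e.ready = 42, True       # result arrives; ready to commit
</pre>
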
# References

* <https://en.wikipedia.org/wiki/Tomasulo_algorithm>
* <https://en.wikipedia.org/wiki/Reservation_station>
* <https://en.wikipedia.org/wiki/Register_renaming> points out that
  reservation stations take a *lot* of power.
* MESI cache protocol, python <https://github.com/sunkarapk/mesi-cache.git>
  <https://github.com/afwolfe/mesi-simulator>
* <https://kshitizdange.github.io/418CacheSim/final-report> report on
  types of caches
* <https://github.com/ssc3?tab=repositories> interesting stuff
* <https://en.wikipedia.org/wiki/Classic_RISC_pipeline#Solution_A._Bypassing>
  pipeline bypassing
* <http://ece-research.unm.edu/jimp/611/slides/chap4_7.html> Tomasulo / Reorder
* Register File Bank Cacheing <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>
* Discussion <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2018-November/000157.html>
* <https://github.com/UCSBarchlab/PyRTL/blob/master/examples/example5-instrospection.py>
* <https://github.com/ataradov/riscv/blob/master/rtl/riscv_core.v#L210>