Benjamin Herrenschmidt [Wed, 16 Oct 2019 23:21:41 +0000 (10:21 +1100)]
Reduce wishbone address size to 32-bit
For now ... it reduces the routing pressure on the FPGA
This needs manual adjustment of the address decoder in soc.vhdl, at
least until I can figure out how to deal with std_match
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
# Conflicts:
# soc.vhdl
Benjamin Herrenschmidt [Wed, 25 Sep 2019 06:54:25 +0000 (16:54 +1000)]
Make it possible to change wishbone address size
All that needs to be changed now is the size in wishbone_types.vhdl
and the address decoder in soc.vhdl
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
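For illustration, a minimal sketch of the single-point width definition this
describes (constant and subtype names are illustrative, not necessarily those
in wishbone_types.vhdl):

    library ieee;
    use ieee.std_logic_1164.all;

    package wishbone_width_sketch is
        -- Resizing the bus is a one-line change here; only the address
        -- decoder in soc.vhdl still needs manual adjustment.
        constant wishbone_addr_bits : integer := 32;
        subtype wishbone_addr_type is
            std_ulogic_vector(wishbone_addr_bits - 1 downto 0);
    end package wishbone_width_sketch;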
Benjamin Herrenschmidt [Fri, 18 Oct 2019 23:31:39 +0000 (10:31 +1100)]
dcache: Add testbench
A very simple one for now...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Tue, 22 Oct 2019 03:56:31 +0000 (14:56 +1100)]
insn: Simplistic implementation of icbi
We don't yet have a proper snooper for the icache, so for now make
icbi just flush the whole thing
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Tue, 22 Oct 2019 03:49:35 +0000 (14:49 +1100)]
insn: Implement isync instruction
The instruction works by redirecting fetch to nia+4 (hopefully using
the same adder used to generate LR) and doing a backflush. Along with
being single issue, this should guarantee that the next instruction
only gets fetched after the pipe's been emptied.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Thu, 17 Oct 2019 05:41:19 +0000 (16:41 +1100)]
icache & dcache: Fix store way variable
We used the variable "way" in the wrong state of the cache state machine
when updating a line's valid bit after the end of the wishbone transactions;
we need to use the latched "store_way" instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 16 Oct 2019 04:10:27 +0000 (15:10 +1100)]
dcache: Cleanup (mostly cosmetic)
Clearly separate the 2 stages of load hits, improve naming and
comments, clarify the writeback controls etc...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Tue, 15 Oct 2019 05:21:32 +0000 (16:21 +1100)]
icache/dcache: Make both caches 32 lines, 2 ways
Adding lines seems to add only a little extra as the BRAMs aren't
full; 2 ways is our current compromise to limit pressure on small
FPGAs. We could go to 64 lines for a little more, but timing is
becoming a bit too tight for my liking on the tags/LRU path of
the icache, so let's leave it at 32 for now.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Thu, 10 Oct 2019 00:25:16 +0000 (11:25 +1100)]
dcache: Introduce an extra cycle latency to make timing
This makes the BRAMs use an output buffer, introducing an extra
cycle of latency. Without this, Vivado won't make timing at 100MHz.
We stash all the necessary response data in delayed latches; the
extra cycle is NOT a state in the state machine, so it's fully
pipelined and doesn't involve stalling.
This introduces an extra non-pipelined cycle for loads with update
to avoid a collision on the writeback output between the now delayed
load data and the register update. We could avoid it by moving
the register update into the pipeline bubble created by the extra
update state, but it's a bit trickier, so I leave that for a later
optimization.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
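A hedged sketch of the output-buffer stage described above (entity and signal
names are hypothetical, not the actual dcache code):

    library ieee;
    use ieee.std_logic_1164.all;

    entity bram_out_buf_sketch is
        port (
            clk        : in  std_ulogic;
            ram_dout   : in  std_ulogic_vector(63 downto 0);  -- BRAM read data
            hit_valid  : in  std_ulogic;                      -- response qualifier
            resp_data  : out std_ulogic_vector(63 downto 0);
            resp_valid : out std_ulogic
        );
    end entity;

    architecture rtl of bram_out_buf_sketch is
    begin
        -- Registering the BRAM output lets Vivado pack it into the block
        -- RAM's built-in output register; the response qualifiers are
        -- delayed in matching latches, so no new state machine state is
        -- needed and the path stays fully pipelined.
        process(clk)
        begin
            if rising_edge(clk) then
                resp_data  <= ram_dout;
                resp_valid <= hit_valid;
            end if;
        end process;
    end architecture;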
Benjamin Herrenschmidt [Wed, 9 Oct 2019 13:40:46 +0000 (00:40 +1100)]
dcache: Add a dcache
This replaces loadstore2 with a dcache
The dcache unit is loosely based on the icache one (same basic cache
layout), but has some significant logic additions to deal with stores,
loads with update, non-cacheable accesses and other differences due to
operating in the execution part of the pipeline rather than the fetch
part.
The cache is store-through, though a hit with an existing line will
update the line rather than invalidate it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 9 Oct 2019 13:40:11 +0000 (00:40 +1100)]
icache: Reduce simulation warnings
This might slightly increase the logic in synthesis but avoids
us looking at uninitialized tags when not servicing an active
request
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 9 Oct 2019 13:38:03 +0000 (00:38 +1100)]
cache_ram: Add write-enables
They will be needed by the dcache
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
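A sketch of what per-byte write enables look like on a simple synchronous RAM
(sizes and names are illustrative, not the real cache_ram generics):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity cache_ram_sketch is
        port (
            clk  : in  std_ulogic;
            addr : in  std_ulogic_vector(5 downto 0);
            we   : in  std_ulogic_vector(7 downto 0);   -- one enable per byte
            din  : in  std_ulogic_vector(63 downto 0);
            dout : out std_ulogic_vector(63 downto 0)
        );
    end entity;

    architecture rtl of cache_ram_sketch is
        type ram_t is array (0 to 63) of std_ulogic_vector(63 downto 0);
        signal ram : ram_t;
    begin
        process(clk)
            variable idx : integer;
        begin
            if rising_edge(clk) then
                idx := to_integer(unsigned(addr));
                -- A store only touches the bytes whose enable is set, which
                -- is what the dcache needs for sub-doubleword writes.
                for i in 0 to 7 loop
                    if we(i) = '1' then
                        ram(idx)(i * 8 + 7 downto i * 8) <=
                            din(i * 8 + 7 downto i * 8);
                    end if;
                end loop;
                dout <= ram(idx);   -- read-first behaviour
            end if;
        end process;
    end architecture;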
Benjamin Herrenschmidt [Tue, 8 Oct 2019 12:26:23 +0000 (23:26 +1100)]
plru: Improve sensitivity list
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Anton Blanchard [Mon, 21 Oct 2019 09:15:37 +0000 (20:15 +1100)]
Merge pull request #112 from hughhalf/patch-1
Minor tweaks to README.md
Hugh [Mon, 21 Oct 2019 05:51:59 +0000 (16:51 +1100)]
Minor tweaks to README.md
A few tweaks based on a newcomer's experience getting an Arty A7-100 up and running
Forgot to add DCO in initial PR, now corrected.
Signed-off-by: Hugh Blemings <hugh@blemings.org>
Anton Blanchard [Sat, 19 Oct 2019 23:09:42 +0000 (10:09 +1100)]
Merge pull request #110 from antonblanchard/misc
icache_tb: Improve test and include test file
Benjamin Herrenschmidt [Fri, 18 Oct 2019 05:41:05 +0000 (16:41 +1100)]
icache_tb: Improve test and include test file
The icache_test.bin file was missing. This adds it (along with a python3
script to generate it).
We also add better reporting on errors
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Anton Blanchard [Thu, 17 Oct 2019 06:37:49 +0000 (17:37 +1100)]
Merge pull request #109 from antonblanchard/misc
Misc updates from Ben
Anton Blanchard [Thu, 17 Oct 2019 06:16:09 +0000 (17:16 +1100)]
isel takes a CR bit, not a CR field
Fix a GHDL assert in isel.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Benjamin Herrenschmidt [Wed, 16 Oct 2019 06:47:08 +0000 (17:47 +1100)]
common: Reformat
No code change
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 16 Oct 2019 01:32:45 +0000 (12:32 +1100)]
execute1: Remove mux on "write_data" and "rc" outputs
Only "write_enable" needs to change, this shrinks the core a bit more
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 16 Oct 2019 01:11:16 +0000 (12:11 +1100)]
crhelpers: Constrain "crnum" integer
This seems to save quite a few LUTs
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
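For illustration, the kind of range constraint this refers to (a sketch with a
hypothetical helper, not the actual crhelpers code):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    package crnum_sketch is
        function crfield(cr  : std_ulogic_vector(31 downto 0);
                         sel : std_ulogic_vector(2 downto 0))
            return std_ulogic_vector;
    end package;

    package body crnum_sketch is
        function crfield(cr  : std_ulogic_vector(31 downto 0);
                         sel : std_ulogic_vector(2 downto 0))
            return std_ulogic_vector is
            -- The constraint is the point: without "range 0 to 7" the tools
            -- treat crnum as a 32-bit integer and generate much wider logic.
            variable crnum : integer range 0 to 7;
        begin
            crnum := to_integer(unsigned(sel));
            return cr(31 - crnum * 4 downto 28 - crnum * 4);
        end function;
    end package body;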
Benjamin Herrenschmidt [Wed, 16 Oct 2019 01:28:19 +0000 (12:28 +1100)]
execute1: Reformat
No functional change
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 16 Oct 2019 01:05:36 +0000 (12:05 +1100)]
writeback: Remove a mux leg on data_in
Initializing to 0 forces the mux to have an extra leg fed with zeros.
Instead, initialize data_in to one of the mux inputs.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Anton Blanchard [Wed, 16 Oct 2019 20:40:36 +0000 (07:40 +1100)]
Merge pull request #105 from paulusmack/writeback
Writeback
Paul Mackerras [Tue, 15 Oct 2019 20:56:15 +0000 (07:56 +1100)]
writeback: Eliminate inferred latch
This initializes data_in to all zeroes so that it doesn't become a
set of 64 inferred latches.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
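A sketch of the default-assignment pattern the data_in commits above rely on
(entity and signal names are hypothetical):

    library ieee;
    use ieee.std_logic_1164.all;

    entity latchless_mux_sketch is
        port (
            sel_a : in  std_ulogic;
            sel_b : in  std_ulogic;
            a     : in  std_ulogic_vector(63 downto 0);
            b     : in  std_ulogic_vector(63 downto 0);
            y     : out std_ulogic_vector(63 downto 0)
        );
    end entity;

    architecture rtl of latchless_mux_sketch is
    begin
        process(all)
        begin
            -- Default assignment: every path through the process now drives
            -- y, so no 64-bit latch is inferred when neither select is set.
            -- (The companion commit above defaults to one of the real inputs
            -- instead, which also avoids an extra mux leg fed with zeros.)
            y <= (others => '0');
            if sel_a = '1' then
                y <= a;
            elsif sel_b = '1' then
                y <= b;
            end if;
        end process;
    end architecture;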
Anton Blanchard [Tue, 15 Oct 2019 10:05:10 +0000 (21:05 +1100)]
Merge pull request #106 from paulusmack/master
wishbone_debug_master: Improve timing
Paul Mackerras [Tue, 15 Oct 2019 07:16:07 +0000 (18:16 +1100)]
wishbone_debug_master: Improve timing
The current code has the possibility that we could set reg_addr
or reg_ctrl and then increment reg_addr in the same cycle, resulting
in some long timing paths. Rearrange the code to make it clear
that we are not trying to add an auto-increment to data from
outside the module; in any given cycle we either set one of
reg_addr and reg_ctrl, or we possibly increment reg_addr.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 15 Oct 2019 05:26:36 +0000 (16:26 +1100)]
Remove execute2 stage
Since the condition setting got moved to writeback, execute2 does
nothing aside from wasting a cycle. This removes it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Anton Blanchard [Tue, 15 Oct 2019 05:17:12 +0000 (16:17 +1100)]
Merge pull request #104 from paulusmack/master
Implement neg using OP_ADD
Paul Mackerras [Mon, 14 Oct 2019 03:39:23 +0000 (14:39 +1100)]
Do sign-extension instructions in writeback instead of execute1
This makes the exts[bhw] instructions do the sign extension in the
writeback stage using the sign-extension logic there instead of
having unique sign extension logic in execute1. This requires
passing the data length and sign extend flag from decode2 down
through execute1 and execute2 and into writeback. As a side bonus
we reduce the number of values in insn_type_t by two.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 14 Oct 2019 01:56:01 +0000 (12:56 +1100)]
writeback: Do data formatting and condition recording in writeback
This adds code to writeback to format data and test the result
against zero for the purpose of setting CR0. The data formatter
is able to shift and mask by bytes and do byte reversal and sign
extension. It can also put together bytes from two input
doublewords to support unaligned loads (including unaligned
byte-reversed loads).
The data formatter starts with an 8:1 multiplexer that is able
to direct any byte of the input to any byte of the output. This
lets us rotate the data and simultaneously byte-reverse it.
The rotated/reversed data goes to a register for the unaligned
cases that overlap two doublewords. Then there is per-byte logic
that does trimming, sign extension, and splicing together bytes
from a previous input doubleword (stored in data_latched) and the
current doubleword. Finally the 64-bit result is tested to set
CR0 if rc = 1.
This removes the RC logic from the execute2, multiply and divide
units, and the shift/mask/byte-reverse/sign-extend logic from
loadstore2.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Anton Blanchard [Tue, 15 Oct 2019 04:20:34 +0000 (15:20 +1100)]
Merge pull request #103 from paulusmack/divider
Divider
Paul Mackerras [Mon, 14 Oct 2019 05:02:45 +0000 (16:02 +1100)]
Implement neg using OP_ADD
We have all the machinery in place to implement the neg instruction
as OP_ADD. Doing that means we can ditch OP_NEG, and saves about
66 slice LUTs on the A7-100.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 15 Oct 2019 03:59:15 +0000 (14:59 +1100)]
divider: Reduce delay in detecting 32-bit overflow
Timing analysis showed that even with the output register, timing
was still a bit tight in the output stage, where the carry has to
propagate all the way through the 64-bit negater, and we were then
testing the top 33 bits to determine if a 32-bit operation had
overflowed.
Instead of detecting overflow at the end, we watch for any 1
bits getting shifted into the top 32 bits of the quotient register
as we are doing the division. That is relatively easy to do and
simplifies the output stage.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
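A hedged sketch of the sticky detection described: rather than comparing the
final result, remember whenever a 1 is about to shift into the top 32 bits of
the quotient (all names here are hypothetical, not the real divider signals):

    library ieee;
    use ieee.std_logic_1164.all;

    entity quotient_ovf_sketch is
        port (
            clk         : in  std_ulogic;
            start       : in  std_ulogic;   -- new division starting
            step        : in  std_ulogic;   -- one division step this cycle
            next_bit    : in  std_ulogic;   -- quotient bit produced this step
            quotient    : buffer std_ulogic_vector(63 downto 0);
            overflow_32 : buffer std_ulogic
        );
    end entity;

    architecture rtl of quotient_ovf_sketch is
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                if start = '1' then
                    quotient    <= (others => '0');
                    overflow_32 <= '0';
                elsif step = '1' then
                    -- Bit 31 moves into the top 32 bits on this shift; once
                    -- that happens the quotient can no longer fit a 32-bit
                    -- destination, so latch a sticky flag instead of testing
                    -- the full 64-bit result in the output stage.
                    if quotient(31) = '1' then
                        overflow_32 <= '1';
                    end if;
                    quotient <= quotient(62 downto 0) & next_bit;
                end if;
            end if;
        end process;
    end architecture;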
Anton Blanchard [Tue, 15 Oct 2019 01:49:06 +0000 (12:49 +1100)]
Merge pull request #102 from antonblanchard/gpr-hazard-5-c
Add CR hazard detection
Anton Blanchard [Tue, 15 Oct 2019 00:22:59 +0000 (11:22 +1100)]
Add CR hazard detection
To keep things simple we treat the CR as a single entity.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Tue, 15 Oct 2019 00:22:48 +0000 (11:22 +1100)]
Merge pull request #101 from antonblanchard/gpr-hazard-5-b
Add GPR hazard detection
Paul Mackerras [Mon, 14 Oct 2019 23:29:53 +0000 (10:29 +1100)]
divider: Add an output register
This puts the output of the divider through a register. With the
addition of the logic to detect overflow, the combinatorial output
logic of the divider was becoming a critical path. Adding the
output register adds a cycle to the latency of the divider but
helps make timing at 100MHz on the A7-100.
This also makes the valid, write_reg_enable and write_cr_enable
fields of the output be registered, which eliminates warnings
about register/latch pins with no clock.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Anton Blanchard [Mon, 14 Oct 2019 05:20:07 +0000 (16:20 +1100)]
Remove issue restrictions on a number of instructions
Anything that isn't a load or store and anything that doesn't read the
CR can go as soon as its inputs are ready.
While we could also allow SPR read/write and carry read/write, we plan
to change them to be read in decode2 and written in writeback soon and
they will need separate hazard detection to be added.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Mon, 14 Oct 2019 02:27:45 +0000 (13:27 +1100)]
Add GPR hazard detection
Check GPRs against any writers in the pipeline.
All instructions are still marked as single-in-pipeline at
this stage.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Mon, 14 Oct 2019 22:02:56 +0000 (09:02 +1100)]
Merge pull request #100 from antonblanchard/gpr-hazard-5-a
Separate issue control into its own unit
Anton Blanchard [Mon, 14 Oct 2019 02:14:04 +0000 (13:14 +1100)]
Merge pull request #99 from paulusmack/logical
Logical
Anton Blanchard [Mon, 14 Oct 2019 01:40:23 +0000 (12:40 +1100)]
Separate issue control into its own unit
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Paul Mackerras [Thu, 10 Oct 2019 04:09:41 +0000 (15:09 +1100)]
countzero: Add a testbench
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 11 Oct 2019 05:06:01 +0000 (16:06 +1100)]
countzero: Reorganize to have fewer levels of logic and fewer LUTs
By using 4:1 multiplexers rather than 2:1, this cuts the number of
levels of multiplexing from 4 to 2 and also reduces the total number
of slice LUTs required. Because we are now handling 4 bits at each
level, including the bottom level, the logic to do the priority
encoding can be factored out into a function that is used at each
level.
This rearranges the logic so that the encoding and selection of bits
is done whether or not the input operand is zero, and the if statement
testing whether the input is zero only affects what is assigned to
result. With this we don't get the inferred latches and we can go
back to using signals rather than variables.
Also add some comments about what is being done.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
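A sketch of the per-level helper the description suggests: a 4-way priority
encoder whose 2-bit output both drives the next 4:1 multiplexer and supplies
two bits of the count (function name and priority direction are illustrative):

    library ieee;
    use ieee.std_logic_1164.all;

    package encode4_sketch is
        function encode4(v : std_ulogic_vector(3 downto 0))
            return std_ulogic_vector;
    end package;

    package body encode4_sketch is
        function encode4(v : std_ulogic_vector(3 downto 0))
            return std_ulogic_vector is
        begin
            -- Priority to the most-significant group, as used when counting
            -- leading zeroes; the trailing-zero case reverses the order.
            if    v(3) = '1' then return "11";
            elsif v(2) = '1' then return "10";
            elsif v(1) = '1' then return "01";
            else                  return "00";
            end if;
        end function;
    end package body;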
Anton Blanchard [Sun, 13 Oct 2019 11:10:18 +0000 (22:10 +1100)]
Merge pull request #98 from antonblanchard/fix-mod
mod* doesn't have an RC form
Anton Blanchard [Sun, 13 Oct 2019 10:42:27 +0000 (21:42 +1100)]
mod* doesn't have an RC form
The RC bit should be ignored for mod* instructions.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 13 Oct 2019 04:36:37 +0000 (15:36 +1100)]
Merge pull request #96 from antonblanchard/clk_gen_bypass-fix
Fix clk_gen_bypass
Anton Blanchard [Sun, 13 Oct 2019 03:41:53 +0000 (14:41 +1100)]
Fix clk_gen_bypass
clk_gen_bypass needed updating after the addition of CLK_INPUT_HZ and
CLK_OUTPUT_HZ.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 13 Oct 2019 02:30:52 +0000 (13:30 +1100)]
Merge pull request #94 from antonblanchard/icbi-nop
decode: Handle icbi
Anton Blanchard [Sun, 13 Oct 2019 02:11:46 +0000 (13:11 +1100)]
Merge pull request #93 from antonblanchard/fifo-fix
Remove shared variable from fifo, and reformat
Anton Blanchard [Sun, 13 Oct 2019 01:59:14 +0000 (12:59 +1100)]
decode: Handle icbi
We will need a proper handler for icbi, but in the meantime treat it
as a nop.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 13 Oct 2019 01:57:23 +0000 (12:57 +1100)]
fifo: Reformat
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 13 Oct 2019 01:52:39 +0000 (12:52 +1100)]
fifo: Remove shared variable
The shared variable used for FIFO memory is not VHDL 2008 compliant.
I can't see why it needs to be a shared variable since reads and writes
update top and bottom synchronously, meaning they don't need same cycle
access to the FIFO memory.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
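Roughly the kind of change this implies, as a standalone sketch (names, depth
and the missing full/empty handling are illustrative only):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity fifo_sketch is
        port (
            clk  : in  std_ulogic;
            push : in  std_ulogic;
            pop  : in  std_ulogic;
            din  : in  std_ulogic_vector(63 downto 0);
            dout : out std_ulogic_vector(63 downto 0)
        );
    end entity;

    architecture rtl of fifo_sketch is
        type ram_t is array (0 to 15) of std_ulogic_vector(63 downto 0);
        -- An ordinary signal is enough: reads and writes update top and
        -- bottom on the clock edge, so nothing needs same-cycle access to
        -- the memory and no shared variable is required.
        signal memory      : ram_t;
        signal top, bottom : unsigned(3 downto 0) := (others => '0');
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                if push = '1' then
                    memory(to_integer(top)) <= din;
                    top <= top + 1;
                end if;
                if pop = '1' then
                    dout   <= memory(to_integer(bottom));
                    bottom <= bottom + 1;
                end if;
            end if;
        end process;
    end architecture;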
Anton Blanchard [Sat, 12 Oct 2019 11:23:10 +0000 (22:23 +1100)]
Merge pull request #92 from paulusmack/divider
Divider
Anton Blanchard [Sat, 12 Oct 2019 11:18:37 +0000 (22:18 +1100)]
Merge pull request #91 from tgingold/gpr-file-fix
Fix register file size (there are 32 gprs).
Paul Mackerras [Fri, 11 Oct 2019 04:16:47 +0000 (15:16 +1100)]
divider: Return 0 for invalid and overflow cases, like P9 does
This adds logic to detect the cases where the quotient of the
division overflows the range of the output representation, and
return all zeroes in those cases, which is what POWER9 does.
To do this, we extend the dividend register by 1 bit and we do
an extra step in the division process to get a 2^64 bit of the
quotient, which ends up in the 'overflow' signal. This catches all
the cases where dividend >= 2^64 * divisor, including the case
where divisor = 0, and the divde/divdeu cases where |RA| >= |RB|.
Then, in the output stage, we also check that the result fits in
the representable range, which depends on whether the division is
a signed division or not, and whether it is a 32-bit or 64-bit
division. If dividend >= 2^64 or the result doesn't fit in the
representable range, write_data is set to 0 and write_cr_data to
0x20000000 (i.e. cr0.eq = 1).
POWER9 sets the top 32 bits of the result to zero for 32-bit signed
divisions, and sets CR0 when RC=1 according to the 64-bit value
(i.e. CR0.LT is always 0 for 32-bit signed divisions, even if the
32-bit result is negative). However, modsw with a negative result
sets the top 32 bits to all 1s. We follow suit.
This updates divider_tb to check the invalid cases as well as the
valid case.
This also fixes a small bug where the reset signal for the divider
was driven from rst when it should have been driven from core_rst.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 12 Oct 2019 05:15:20 +0000 (16:15 +1100)]
decode2: Fix 32-bit flag passed to divider
Previously the 32-bit flag passed to the divider was always wrong;
this fixes it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Tristan Gingold [Sat, 12 Oct 2019 04:56:32 +0000 (06:56 +0200)]
Fix register file size (there are 32 gprs).
Anton Blanchard [Fri, 11 Oct 2019 05:47:37 +0000 (16:47 +1100)]
Merge pull request #84 from classilla/master
Add logo
Anton Blanchard [Fri, 11 Oct 2019 05:46:03 +0000 (16:46 +1100)]
Merge pull request #89 from mikey/gitignore
Update gitignore for new test bench build files
Anton Blanchard [Fri, 11 Oct 2019 05:45:45 +0000 (16:45 +1100)]
Merge pull request #90 from antonblanchard/newcrf-inferred-latch
Don't infer latch for newcrf
Anton Blanchard [Fri, 11 Oct 2019 05:31:14 +0000 (16:31 +1100)]
Don't infer latch for newcrf
Always initialize newcrf to avoid inferring a latch.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Michael Neuling [Fri, 11 Oct 2019 05:02:04 +0000 (16:02 +1100)]
Update gitignore for new test bench build files
Just ignore all *_tb files
Signed-off-by: Michael Neuling <mikey@neuling.org>
Anton Blanchard [Thu, 10 Oct 2019 10:29:06 +0000 (21:29 +1100)]
Merge pull request #87 from antonblanchard/cmod-a7-freq
Fix cmod-a7 frequency
Anton Blanchard [Thu, 10 Oct 2019 09:59:49 +0000 (20:59 +1100)]
Fix cmod-a7 frequency
The cmod-a7 is ignoring the clk_frequency parameter and running at
100 MHz. Fix it.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Thu, 10 Oct 2019 09:32:47 +0000 (20:32 +1100)]
Merge pull request #86 from antonblanchard/outstanding-range
Limit outstanding range
Anton Blanchard [Thu, 10 Oct 2019 06:56:55 +0000 (17:56 +1100)]
Merge pull request #85 from antonblanchard/leadingzeroes-fix
Fix count-leading/trailing-zeroes
Anton Blanchard [Thu, 10 Oct 2019 06:47:15 +0000 (17:47 +1100)]
Merge pull request #79 from deece/uart_address
Tighten UART address
Anton Blanchard [Thu, 10 Oct 2019 06:14:55 +0000 (17:14 +1100)]
Limit outstanding range
outstanding can only ever be -1 to 2 at the moment (0 or 1 on a
rising clock edge). Vivado is synthesizing a much wider adder,
which is silly. Constrain it with a range statement. This should
be good for timing and saves us about 85 LUTs.
This will get relaxed when we add more pipelining, but only by a
few bits.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
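For illustration, a sketch of the range-constrained counter described (entity
and port names are hypothetical; the surrounding logic is assumed to keep the
count within bounds):

    library ieee;
    use ieee.std_logic_1164.all;

    entity outstanding_sketch is
        port (
            clk : in std_ulogic;
            req : in std_ulogic;   -- request issued this cycle
            ack : in std_ulogic    -- response completed this cycle
        );
    end entity;

    architecture rtl of outstanding_sketch is
        -- The range statement tells synthesis only a couple of bits are
        -- needed, instead of a full 32-bit integer adder.
        signal outstanding : integer range -1 to 2 := 0;
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                if req = '1' and ack = '0' then
                    outstanding <= outstanding + 1;
                elsif ack = '1' and req = '0' then
                    outstanding <= outstanding - 1;
                end if;
            end if;
        end process;
    end architecture;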
Anton Blanchard [Thu, 10 Oct 2019 03:36:23 +0000 (14:36 +1100)]
Fix count-leading/trailing-zeroes
The current code simulates correctly, but produces miscompares when synthesized
onto an FPGA. On closer inspection, GHDL synthesis complains about inferred
latches, and there do seem to be issues.
Convert it to variables that are always initialized to zero at the start of the
process.
Fixes: 24a4a796ce1e ("execute: Consolidate count-leading/trailing-zeroes implementations")
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Cameron Kaiser [Wed, 9 Oct 2019 04:00:00 +0000 (21:00 -0700)]
Add logo to README.md
Signed-off-by: Cameron Kaiser <classilla@floodgap.com>
Cameron Kaiser [Wed, 9 Oct 2019 03:57:20 +0000 (20:57 -0700)]
Add title image
Signed-off-by: Cameron Kaiser <classilla@floodgap.com>
Anton Blanchard [Wed, 9 Oct 2019 01:33:17 +0000 (12:33 +1100)]
Merge pull request #83 from paulusmack/logical
execute: Consolidate count-leading/trailing-zeroes implementations
Anton Blanchard [Wed, 9 Oct 2019 00:49:24 +0000 (11:49 +1100)]
Merge pull request #81 from antonblanchard/logical
Consolidate logical instructions
Anton Blanchard [Wed, 9 Oct 2019 00:47:30 +0000 (11:47 +1100)]
Merge pull request #82 from antonblanchard/icache-set-assoc
A new set associative icache from Ben
Paul Mackerras [Tue, 8 Oct 2019 21:55:43 +0000 (08:55 +1100)]
execute: Consolidate count-leading/trailing-zeroes implementations
This adds combinatorial logic that does 32-bit and 64-bit count
leading and trailing zeroes in one unit, and consolidates the
four instructions under a single OP_CNTZ opcode.
This saves 84 slice LUTs on the Arty A7-100.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Anton Blanchard [Tue, 8 Oct 2019 07:46:01 +0000 (18:46 +1100)]
Consolidate logical instructions
Consolidate and/andc/nand, or/orc/nor and xor/eqv, using a common
invert on the input and output. This saves us about 200 LUTs.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Alastair D'Silva [Thu, 3 Oct 2019 00:16:53 +0000 (10:16 +1000)]
Tighten UART address
The current scheme has UART0 repeating throughout the UART address range.
This patch tightens the address checking so that it only occurs once.
Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Benjamin Herrenschmidt [Wed, 2 Oct 2019 12:17:31 +0000 (22:17 +1000)]
icache: Set associative icache
This adds support for set associativity to the icache. It can still
be direct mapped by setting NUM_WAYS to 1.
The replacement policy uses a simple tree-PLRU for each set.
This is only lightly tested; tests pass, but I have to double-check
that we are using the ways effectively and not creating duplicates.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 2 Oct 2019 09:06:53 +0000 (19:06 +1000)]
plru: Add a simple PLRU module
Tested in sim only for now
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 2 Oct 2019 06:17:42 +0000 (16:17 +1000)]
fetch2: Remove blank line
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Mon, 30 Sep 2019 08:17:10 +0000 (18:17 +1000)]
icache: Use narrower block RAMs
We only ever access the cache memory for at most the wishbone bus
width at a time. So having the BRAMs organized as a cache-line-wide
port is a waste of resources.
Instead, use a wishbone-wide memory and store a line as consecutive
rows in the BRAM.
This significantly improves BRAM usage in the FPGA as we can now use
more rows in the BRAM blocks. It also saves a few LUTs and muxes.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
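A hedged sketch of the row addressing this implies, assuming for illustration
a 64-byte line and a 64-bit (8-byte) wishbone word (names and numbers may not
match the real icache):

    library ieee;
    use ieee.std_logic_1164.all;

    package icache_row_sketch is
        constant ROW_PER_LINE : natural := 8;  -- 64-byte line / 8-byte word
        function ram_row(line_idx, word_idx : natural) return natural;
    end package;

    package body icache_row_sketch is
        function ram_row(line_idx, word_idx : natural) return natural is
        begin
            -- A line occupies ROW_PER_LINE consecutive BRAM rows, so the
            -- RAM can be wishbone-wide rather than cache-line-wide.
            return line_idx * ROW_PER_LINE + word_idx;
        end function;
    end package body;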
Benjamin Herrenschmidt [Fri, 27 Sep 2019 03:23:56 +0000 (13:23 +1000)]
fetch/icache: Fit icache in BRAM
The goal is to have the icache fit in BRAM by latching the output
into a register. In order to avoid timing issues, we need to give
the BRAM a full cycle on reads, and thus we source the BRAM address
directly from fetch1's latched NIA.
(Note: This will be problematic if/when we want to hash the address;
we'll probably be better off having fetch1 latch a fully hashed address
along with the normal one, so the icache can use the former to address
the BRAM and pass the latter along.)
One difficulty is that we cannot really stall the icache without adding
more combo logic that would break the "one full cycle" BRAM model. This
means that on stalls from decode, by the time we stall fetch1, it has
already gone to the next address, which the icache is already latching.
We work around this by having a "stash" buffer in fetch2 that will stash
away the icache output on a stall, and override the output of the icache
with the content of the stash buffer when unstalling.
This requires a rewrite of the stop/step debug logic as well. We now
do most of the hard work in fetch1 which makes more sense.
Note: Vivado is still not inferring a built-in output register for the
BRAMs. I don't want to add another cycle... I don't fully understand why
it wouldn't be able to treat current_row as such, but clearly it won't. At
least the timing seems good enough now for 100MHz, possibly more.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Tue, 1 Oct 2019 04:24:07 +0000 (14:24 +1000)]
fetch1: Simplify a bit
There is no need to have two different state records
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Wed, 25 Sep 2019 06:50:24 +0000 (16:50 +1000)]
icache: Reformat icache
No code change
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Anton Blanchard [Mon, 7 Oct 2019 23:20:22 +0000 (10:20 +1100)]
Merge pull request #78 from paulusmack/new-decode
New decode
Paul Mackerras [Mon, 7 Oct 2019 07:26:11 +0000 (18:26 +1100)]
Add a rotate/mask/shift unit and use it in execute1
This adds a new entity 'rotator' which contains combinatorial logic
for rotating and masking 64-bit values. It implements the operations
of the rlwinm, rlwnm, rlwimi, rldicl, rldicr, rldic, rldimi, rldcl,
rldcr, sld, slw, srd, srw, srad, sradi, sraw and srawi instructions.
It consists of a 3-stage 64-bit rotator using 4:1 multiplexors at
each stage, two mask generators, output logic and control logic.
The insn_type_t values used for these instructions have been reduced
to just 5: OP_RLC, OP_RLCL and OP_RLCR for the rotate and mask
instructions (clear both left and right, clear left, clear right
variants), OP_SHL for left shifts, and OP_SHR for right shifts.
The control signals for the rotator are derived from the opcode
and from the is_32bit and is_signed fields of the decode_rom_t.
The rotator is instantiated as an entity in execute1 so that we can
be sure we only have one of it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sun, 6 Oct 2019 04:21:27 +0000 (15:21 +1100)]
Generalize the mul_32bit and mul_signed fields of decode_rom_t
This changes the names of the mul_32bit and mul_signed fields of
decode_rom_t to is_32bit and is_signed, so they can be used with
other types of operations besides multiplies.
This plumbs the is_32bit and is_signed flags down into execute1,
though they are not used at this point.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 4 Oct 2019 09:26:37 +0000 (19:26 +1000)]
decode: Avoid multiplexing from instruction reg fields to regfile address ports
This aims to simplify the logic between the instruction image and
the register file read address ports and reduce the size of the decode
tables. With this patch, the input_reg_a column of the decode tables
can only select RA or zeroes, the input_reg_b column can only select
RB or a constant (0, -1, or an immediate value from the instruction),
and the input_reg_c column can only select RS or zeroes.
That means that the rotate/shift/logical ops now have their first
input coming in via the input_reg_c column. That means we need to
add a read_data3 field to the Decode2ToExecuteType record, but that
will go away again when we split out the rotate/mask/logical ops to
their own unit.
As a related but not tightly connected change, this patch also sets
the read1_enable signal to the register file to 0 when RA=0 and the
input_reg_a for the instruction is RA_OR_ZERO (previously it was 1).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 4 Oct 2019 06:11:21 +0000 (16:11 +1000)]
Consolidate add/subtract instructions into a single op
All of the PPC add and subtract instructions, including carrying
and extended versions, do much the same arithmetic operation:
result = (I xor A) + B + C
where A is the value from RA, I provides a logical inversion of A
(i.e. I is 0 or -1), B is either from RB or is a constant 0 or -1,
and C is 0, 1 or the carry bit from XER (CA).
To consolidate all the add/subtract instructions into a single
OP_ADD, we add a column to decode_rom_t to indicate when A should
be inverted, and change the input_carry field to a 3-state selector
to select C in the equation above.
This also adds a new "CONST_M1" value for input_reg_b_t to indicate
that B is a constant -1. This allows us to implement addme and
subfme.
The addex instruction appears not to exist, so the comments referring
to it are removed.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
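A hedged sketch of the single shared adder, following the formula above (port
names and the way C is pre-selected are illustrative, not the actual execute1
code):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity add_sub_sketch is
        port (
            ra       : in  std_ulogic_vector(63 downto 0);  -- A, from RA
            b_in     : in  std_ulogic_vector(63 downto 0);  -- B: RB, immediate, 0 or -1
            invert_a : in  std_ulogic;                      -- I: invert A for subtract forms
            carry_in : in  std_ulogic;                      -- C: 0, 1 or XER.CA, already selected
            result   : out std_ulogic_vector(63 downto 0)
        );
    end entity;

    architecture rtl of add_sub_sketch is
    begin
        process(all)
            variable a_sel : std_ulogic_vector(63 downto 0);
            variable c     : unsigned(0 downto 0);
        begin
            if invert_a = '1' then
                a_sel := not ra;   -- subf* forms: not(A) + B + 1
            else
                a_sel := ra;       -- add* forms:       A + B + C
            end if;
            c(0)   := carry_in;
            -- result = (I xor A) + B + C
            result <= std_ulogic_vector(unsigned(a_sel) + unsigned(b_in) + c);
        end process;
    end architecture;

With invert_a = 1, b_in = 0 and carry_in = 1 this same datapath computes neg;
with b_in = -1 it covers addme and subfme.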
Anton Blanchard [Fri, 4 Oct 2019 00:32:13 +0000 (10:32 +1000)]
Merge pull request #80 from antonblanchard/misc
Reduce register file footprint
Paul Mackerras [Thu, 3 Oct 2019 22:25:53 +0000 (08:25 +1000)]
decode: Make all update-form indexed loads and stores use RA_OR_ZERO
Experimentation on POWER9 indicates that the invalid form of lbzux
with RA=0 uses just RB as the address, not R0 + RB. Extrapolating
this to all update-form loads and stores with RA=0, change all the
update-form loads and stores to use RA_OR_ZERO rather than RA.
This then means that all decode ROM entries with insn_type = LDST
have input_reg_a = RA_OR_ZERO.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Benjamin Herrenschmidt [Thu, 3 Oct 2019 02:38:49 +0000 (12:38 +1000)]
register_file: Move GPRs into distributed RAM
The register file is currently implemented as a whole pile of individual
1-bit registers instead of LUT memory, which is a huge waste of FPGA
space.
This is caused by the output signal exposing the register file to the
outside world for simulation debug.
This removes that output, and moves the dumping of the register file
to the register file module itself. This saves about 8% of fpga on
the little Arty A7-35T.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Paul Mackerras [Wed, 2 Oct 2019 12:21:09 +0000 (22:21 +1000)]
decode: Remove const fields from decode_rom_t
The const* fields of decode_rom_t drove multiplexers in decode2 that
picked out various instruction fields and put them into the const*
fields of the Decode2ToExecute1Type record, from where they were
used in execute1. However, the code in execute1 can just as easily
use the appropriate fields of the original instruction word, since
that is now available in execute1. This therefore changes the
code to do that, resulting in smaller decode tables.
Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Benjamin Herrenschmidt [Tue, 1 Oct 2019 23:58:44 +0000 (09:58 +1000)]
debug/sim: Make connect/disconnect messages quieter
Those don't need to go to stderr; send them to stdout instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Paul Mackerras [Tue, 1 Oct 2019 05:49:07 +0000 (15:49 +1000)]
decode: Fix larx/stcx instructions to use RA_OR_ZERO not RA
The l?arx and st?cx. instructions are defined to use the normal indexed
mode address calculations, i.e. (RA|0) + RB. Fix their entries in the
decode table to say RA_OR_ZERO rather than RA.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 30 Sep 2019 05:03:06 +0000 (15:03 +1000)]
decode: Index minor op table with insn bits for opcode 31
This changes decode_op_31_array from being indexed by a ppc_insn_t
(which is derived from the instruction word by a whole series of
if/elsif statements) to being indexed directly by bits 10...1 of
the instruction word. With this we no longer need ppc_insn.
This then means that the decode1 stage doesn't distinguish between
mfcr and mfocrf, or between mtcrf and mtocrf, since those are
distinguished by the value in bit 20 of the instruction. To
accommodate that, execute1 changes so that the one op value (OP_MFCR)
does either the mfcr or the mfocrf behaviour depending on bit 20
of the instruction word; and similarly for mtcrf/mtocrf.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
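Roughly what direct indexing looks like, as a self-contained sketch (the table
entries and their type stand in for the real decode_rom_t; only a couple of
minor opcodes are shown):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity op31_lookup_sketch is
        port (
            insn   : in  std_ulogic_vector(31 downto 0);
            op_out : out std_ulogic_vector(1 downto 0)
        );
    end entity;

    architecture rtl of op31_lookup_sketch is
        subtype dummy_rom_t is std_ulogic_vector(1 downto 0);
        type op31_array_t is array (0 to 1023) of dummy_rom_t;
        -- Most rows stay "illegal"; populated rows are keyed by minor opcode.
        constant decode_op_31 : op31_array_t := (
            266    => "01",      -- add  (XO = 266)
            40     => "10",      -- subf (XO = 40)
            others => "00"       -- illegal
        );
    begin
        -- Bits 10..1 of the instruction word are the table index; no
        -- if/elsif chain deriving an intermediate enumeration is needed.
        op_out <= decode_op_31(to_integer(unsigned(insn(10 downto 1))));
    end architecture;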
Paul Mackerras [Sun, 29 Sep 2019 06:41:52 +0000 (16:41 +1000)]
decode: Index minor op table with insn bits for opcode 30
This comprises the 64-bit rotate and mask instructions. In order to
reduce the table index to 3 bits, we combine rldcl and rldcr into a
single op (OP_RLDCX), and choose the right mask at execute time based
on bit 1 of the instruction word.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>