Anton Blanchard [Sun, 21 Mar 2021 23:19:27 +0000 (10:19 +1100)]
Reformat writeback
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:17:49 +0000 (10:17 +1100)]
Reformat plru
Also fix a typo
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:09:39 +0000 (10:09 +1100)]
Reformat rotator
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:08:29 +0000 (10:08 +1100)]
Reformat divider
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:07:47 +0000 (10:07 +1100)]
Reformat countzero
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:07:33 +0000 (10:07 +1100)]
Reformat cr_file
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 23:06:03 +0000 (10:06 +1100)]
Reformat register_file
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 21 Mar 2021 22:56:12 +0000 (09:56 +1100)]
Reformat cache_ram
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Michael Neuling [Mon, 8 Feb 2021 23:06:03 +0000 (10:06 +1100)]
Merge pull request #274 from mikey/read-sprs
Fix reading DSISR/DAR before writing and add a test to read from all SPRs
Anton Blanchard [Mon, 4 Jan 2021 03:16:06 +0000 (14:16 +1100)]
Add a test to read from all SPRs
Make sure the SPRs are initialized and we can't read X state.
(Mikey: rebased and added console/bin file for testing)
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Michael Neuling [Mon, 8 Feb 2021 09:17:48 +0000 (20:17 +1100)]
Fix DAR/DSISR reading before they are written
If the DAR and DSISR are read before they are written, we assert with:
register_file.vhdl:55:25:@60195ns:(report note): Writing GPR 09 00000000XXXXXXXX
register_file.vhdl:61:17:@60195ns:(assertion failure): Assertion violation
This initialises DAR/DSISR to avoid this.
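As an illustration only (the entity and names below are invented for this
sketch, not the actual Microwatt SPR logic), giving such a register an
initial value is enough to stop a read-before-write from returning X:

    library ieee;
    use ieee.std_logic_1164.all;

    entity spr_init_sketch is
        port (
            clk     : in  std_ulogic;
            wr_en   : in  std_ulogic;
            wr_data : in  std_ulogic_vector(63 downto 0);
            rd_data : out std_ulogic_vector(63 downto 0)
        );
    end entity spr_init_sketch;

    architecture behaviour of spr_init_sketch is
        -- the initial value is what keeps X out of the register file
        signal dar : std_ulogic_vector(63 downto 0) := (others => '0');
    begin
        rd_data <= dar;

        regs: process(clk)
        begin
            if rising_edge(clk) then
                if wr_en = '1' then
                    dar <= wr_data;
                end if;
            end if;
        end process;
    end architecture behaviour;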
Signed-off-by: Michael Neuling <mikey@neuling.org>
Michael Neuling [Mon, 8 Feb 2021 06:27:16 +0000 (17:27 +1100)]
Merge pull request #269 from paulusmack/pipeline
Rework load/store pipeline to achieve one load/store per cycle throughput
Michael Neuling [Mon, 8 Feb 2021 05:38:57 +0000 (16:38 +1100)]
Merge pull request #268 from paulusmack/btc
Implement branch target cache
Michael Neuling [Mon, 8 Feb 2021 03:34:53 +0000 (14:34 +1100)]
Merge pull request #273 from antonblanchard/wishbone-checking
Add some wishbone checking
Michael Neuling [Mon, 8 Feb 2021 03:26:53 +0000 (14:26 +1100)]
Merge pull request #267 from paulusmack/master
Improve architecture compliance and other miscellaneous changes
Anton Blanchard [Mon, 8 Feb 2021 01:15:53 +0000 (12:15 +1100)]
Add some wishbone checking
Check that stb, cyc and ack are never undefined. While not really needed
here, this also tests if --pragma synthesis_off/--pragma synthesis_on
works on all the tools we use.
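A minimal sketch of this kind of check (a hypothetical stand-alone
checker, not the actual SoC code), with the assertions fenced off from
synthesis by the same pragmas:

    library ieee;
    use ieee.std_logic_1164.all;

    entity wb_check_sketch is
        port (
            clk : in std_ulogic;
            cyc : in std_ulogic;
            stb : in std_ulogic;
            ack : in std_ulogic
        );
    end entity wb_check_sketch;

    architecture behaviour of wb_check_sketch is
    begin
        --pragma synthesis_off
        check: process(clk)
        begin
            if rising_edge(clk) then
                assert not is_x(cyc) report "cyc is undefined" severity failure;
                assert not is_x(stb) report "stb is undefined" severity failure;
                assert not is_x(ack) report "ack is undefined" severity failure;
            end if;
        end process;
        --pragma synthesis_on
    end architecture behaviour;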
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Paul Mackerras [Mon, 28 Dec 2020 04:15:30 +0000 (15:15 +1100)]
core: Allow multiple loadstore instructions to be in flight
The idea here is that we can have multiple instructions in progress at
the same time as long as they all go to the same unit, because that
unit will keep them in order. If we get an instruction for a
different unit, we wait for all the previous instructions to finish
before executing it. Since the loadstore unit is the only one that is
currently pipelined, this boils down to saying that loadstore
instructions can go ahead while l_in.in_progress = 1 but other
instructions have to wait until it is 0.
This gives a 2% increase in coremark performance on the Arty A7-100
(from ~190 to ~194).
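A sketch of that issue rule with invented signal names (the real
control logic tracks more state than this):

    library ieee;
    use ieee.std_logic_1164.all;

    entity issue_stall_sketch is
        port (
            lsu_in_progress : in  std_ulogic;  -- corresponds to l_in.in_progress
            is_loadstore    : in  std_ulogic;  -- next instruction is for loadstore1
            stall           : out std_ulogic
        );
    end entity issue_stall_sketch;

    architecture behaviour of issue_stall_sketch is
    begin
        -- loadstore instructions may issue while earlier loadstore ops are
        -- still in flight; anything else waits until in_progress drops to 0
        stall <= lsu_in_progress and not is_loadstore;
    end architecture behaviour;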
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 7 Nov 2020 09:50:58 +0000 (20:50 +1100)]
loadstore: Convert to 3-stage pipeline
This makes loadstore use a 3-stage pipeline. For now, only one
instruction goes through the pipe at a time. Completion and writeback
are still combinatorial off the valid signal back from the dcache, so
performance should be the same as before. In future it should be able
to sustain one load or store per cycle provided they hit in the
dcache.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sun, 17 Jan 2021 21:55:56 +0000 (08:55 +1100)]
dcache: Fix bugs in pipelined operation
This fixes two bugs which show up when multiple operations are in
flight in the dcache, and adds a 'hold' input which will be needed
when loadstore1 is pipelined.
The first bug is that dcache needs to sample the data for a store on
the cycle after the store request comes in even if the store request
is held up because of a previous request (e.g. if the previous request
is a load miss or a dcbz).
The second bug is that a load request coming in for a cache line being
refilled needs to be handled immediately in the case where it is for
the row whose data arrives on the same cycle. If it is not, then it
will be handled as a separate cache miss and the cache line will be
refilled again into a different way, leading to two ways both being
valid for the same tag. This can lead to data corruption, in the
scenario where subsequent writes go to one of the ways and then that
way gets displaced but the other way doesn't. This bug could in
principle show up even without having multiple operations in flight in
the dcache.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 23 Dec 2020 02:57:40 +0000 (13:57 +1100)]
core: Send FPU interrupts to writeback rather than execute1
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 23 Dec 2020 01:27:22 +0000 (12:27 +1100)]
core: Send loadstore1 interrupts to writeback rather than execute1
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 23 Dec 2020 00:13:21 +0000 (11:13 +1100)]
core: Move redirect and interrupt delivery logic to writeback
This moves the logic for redirecting fetching and writing SRR0 and
SRR1 to writeback. The aim is that ultimately units other than
execute1 can send their interrupts to writeback along with their
instruction completions, so that there can be multiple instructions
in flight without needing execute1 to keep track of the address
of each outstanding instruction.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 27 Nov 2020 06:41:39 +0000 (17:41 +1100)]
execute1: Move CR result to data path process
Also work out in decode2 whether the instruction sets the XER common
bits.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 26 Nov 2020 11:10:30 +0000 (22:10 +1100)]
execute1: Move data-path logic out to a separate process
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 12 Nov 2020 11:07:33 +0000 (22:07 +1100)]
core: Track CR hazards and bypasses using tags
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 10 Nov 2020 22:42:17 +0000 (09:42 +1100)]
core: Restore bypass path from execute1
This changes the bypass path. Previously it went from after
execute1's output to after decode2's output. Now it goes from before
execute1's output register to before decode2's output register. The
reason is that the new path will be simpler to manage when there are
possibly multiple instructions in flight. This means that the
bypassing can be managed inside decode2 and control.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 10 Nov 2020 09:04:00 +0000 (20:04 +1100)]
core: Track GPR hazards using tags that propagate through the pipelines
This changes the way GPR hazards are detected and tracked. Instead of
having a model of the pipeline in gpr_hazard.vhdl, which has to mirror
the behaviour of the real pipeline exactly, we now assign a 2-bit tag
to each instruction and record which GSPR the instruction writes.
Subsequent instructions that need to use the GSPR get the tag number
and stall until the value with that tag is being written back to the
register file.
For now, the forwarding paths are disabled. That gives about an 8%
reduction in coremark performance.
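A simplified behavioural sketch of the scheme (invented entity and
ports; the real control logic also hands the tag to the consumer and,
once re-enabled, uses it for forwarding):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity gpr_tag_sketch is
        port (
            clk          : in  std_ulogic;
            issue_valid  : in  std_ulogic;
            issue_writes : in  std_ulogic;                     -- instruction writes a GSPR
            issue_gspr   : in  std_ulogic_vector(5 downto 0);
            issue_tag    : in  std_ulogic_vector(1 downto 0);  -- the 2-bit tag
            read_gspr    : in  std_ulogic_vector(5 downto 0);
            wb_valid     : in  std_ulogic;                     -- tagged value being written back
            wb_tag       : in  std_ulogic_vector(1 downto 0);
            stall        : out std_ulogic
        );
    end entity gpr_tag_sketch;

    architecture behaviour of gpr_tag_sketch is
        type tag_entry is record
            busy : std_ulogic;
            tag  : std_ulogic_vector(1 downto 0);
        end record;
        type tag_array is array(0 to 63) of tag_entry;
        signal tags : tag_array := (others => (busy => '0', tag => "00"));
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                -- clear the busy bit once the value with that tag is written back
                for i in tags'range loop
                    if wb_valid = '1' and tags(i).busy = '1' and tags(i).tag = wb_tag then
                        tags(i).busy <= '0';
                    end if;
                end loop;
                -- record which GSPR the issued instruction will write
                if issue_valid = '1' and issue_writes = '1' then
                    tags(to_integer(unsigned(issue_gspr))) <= (busy => '1', tag => issue_tag);
                end if;
            end if;
        end process;

        -- a reader of a GSPR with an outstanding write has to stall
        stall <= tags(to_integer(unsigned(read_gspr))).busy;
    end architecture behaviour;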
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 11 Nov 2020 11:10:38 +0000 (22:10 +1100)]
core: Crack branches that update both CTR and LR
This uses the instruction doubling machinery to convert conditional
branch instructions that update both CTR and LR (e.g., bdnzl, bdnzlrl)
into two instructions, of which the first updates CTR and determines
whether the branch is taken, and the second updates LR and does the
redirect if necessary.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 11 Nov 2020 07:11:04 +0000 (18:11 +1100)]
core: Crack update-form loads into two internal ops
This uses the instruction-doubling machinery to send load with update
instructions down to loadstore1 as two separate ops, rather than
one op with two destinations. This will help to simplify the value
tracking mechanisms.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 18 Dec 2020 22:25:04 +0000 (09:25 +1100)]
fetch1: Implement a simple branch target cache
This implements a cache in fetch1, where each entry stores the address
of a simple branch instruction (b or bc) and the target of the branch.
When fetching sequentially, if the address being fetched matches the
cache entry, then fetching will be redirected to the branch target.
The cache has 1024 entries and is direct-mapped, i.e. indexed by bits
11..2 of the NIA.
The bus from execute1 now carries information about taken and
not-taken simple branches, which fetch1 uses to update the cache.
The cache entry is updated for both taken and not-taken branches, with
the valid bit being set if the branch was taken and cleared if the
branch was not taken.
If fetching is redirected to the branch target then that goes down the
pipe as a predicted-taken branch, and decode1 does not do any static
branch prediction. If fetching is not redirected, then the next
instruction goes down the pipe as normal and decode1 does its static
branch prediction.
In order to make timing, the lookup of the cache is pipelined, so on
each cycle the cache entry for the current NIA + 8 is read. This
means that after a redirect (from decode1 or execute1), only the third
and subsequent sequentially-fetched instructions will be able to be
predicted.
This improves the coremark value on the Arty A7-100 from about 180 to
about 190 (more than 5%).
The BTC is optional. Builds for the Artix-7 35T part have it off by
default because the extra ~1420 LUTs it takes mean that the design
doesn't fit on the Arty A7-35 board.
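A much-simplified sketch of the lookup and update (invented entity and
port names; the real fetch1.vhdl also handles the NIA + 8 pipelining
and the predicted-taken plumbing described above):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity btc_sketch is
        port (
            clk       : in  std_ulogic;
            nia       : in  std_ulogic_vector(63 downto 0);
            -- update interface, driven from execute1
            wr_en     : in  std_ulogic;
            wr_taken  : in  std_ulogic;   -- valid bit is set only for taken branches
            wr_nia    : in  std_ulogic_vector(63 downto 0);
            wr_target : in  std_ulogic_vector(63 downto 0);
            hit       : out std_ulogic;
            target    : out std_ulogic_vector(63 downto 0)
        );
    end entity btc_sketch;

    architecture behaviour of btc_sketch is
        constant BTC_BITS : natural := 10;   -- 1024 entries, indexed by NIA bits 11..2
        type btc_entry is record
            valid  : std_ulogic;
            tag    : std_ulogic_vector(63 downto BTC_BITS + 2);
            target : std_ulogic_vector(63 downto 0);
        end record;
        type btc_array is array(0 to 2**BTC_BITS - 1) of btc_entry;
        signal btc     : btc_array := (others => (valid => '0',
                                                  tag => (others => '0'),
                                                  target => (others => '0')));
        signal r_entry : btc_entry;
        signal r_nia   : std_ulogic_vector(63 downto 0);
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                -- registered read of the entry for the address being fetched
                r_entry <= btc(to_integer(unsigned(nia(BTC_BITS + 1 downto 2))));
                r_nia   <= nia;
                -- the entry is rewritten for both taken and not-taken branches
                if wr_en = '1' then
                    btc(to_integer(unsigned(wr_nia(BTC_BITS + 1 downto 2)))) <=
                        (valid  => wr_taken,
                         tag    => wr_nia(63 downto BTC_BITS + 2),
                         target => wr_target);
                end if;
            end if;
        end process;

        -- redirect only when the remaining address bits match the stored tag
        hit    <= r_entry.valid when r_entry.tag = r_nia(63 downto BTC_BITS + 2) else '0';
        target <= r_entry.target;
    end architecture behaviour;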
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 28 Sep 2020 04:04:08 +0000 (14:04 +1000)]
execute1: Improve timing on comparisons
Using the main adder for comparisons has the disadvantage of creating
a long path from the CA/OV bit forwarding to v.busy via the carry
input of the adder, the comparison result, and determining whether a
trap instruction would trap. Instead we now have dedicated
comparators for the high and low words of a_in vs. b_in, and combine
their results to get the signed and unsigned comparison results.
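A sketch of the split comparison (invented entity name, VHDL-2008
style; the real execute1 code folds these results into the CR and trap
logic): only the high words need a signed compare, and the low words
are compared unsigned either way.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity cmp_sketch is
        port (
            a_in        : in  std_ulogic_vector(63 downto 0);
            b_in        : in  std_ulogic_vector(63 downto 0);
            cmp_eq      : out std_ulogic;
            lt_signed   : out std_ulogic;
            lt_unsigned : out std_ulogic
        );
    end entity cmp_sketch;

    architecture behaviour of cmp_sketch is
    begin
        compare: process(a_in, b_in)
            variable hi_eq, lo_eq   : boolean;
            variable hi_ltu, lo_ltu : boolean;
            variable hi_lts         : boolean;
        begin
            -- separate comparators for the high and low 32-bit words
            hi_eq  := a_in(63 downto 32) = b_in(63 downto 32);
            lo_eq  := a_in(31 downto 0) = b_in(31 downto 0);
            hi_ltu := unsigned(a_in(63 downto 32)) < unsigned(b_in(63 downto 32));
            lo_ltu := unsigned(a_in(31 downto 0)) < unsigned(b_in(31 downto 0));
            hi_lts := signed(a_in(63 downto 32)) < signed(b_in(63 downto 32));

            cmp_eq      <= '1' when hi_eq and lo_eq else '0';
            -- the high words decide unless they are equal
            lt_unsigned <= '1' when hi_ltu or (hi_eq and lo_ltu) else '0';
            lt_signed   <= '1' when hi_lts or (hi_eq and lo_ltu) else '0';
        end process;
    end architecture behaviour;

The point is simply that none of this shares the main adder, so the
CA/OV forwarding no longer feeds its carry input.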
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 26 Sep 2020 09:58:46 +0000 (19:58 +1000)]
core: Reorganize execute1
This breaks up the enormous if .. elsif .. case .. elsif statement in
execute1 in order to try to make it simpler and more understandable.
We now have decode2 deciding whether the instruction has a value to be
written back to a register (GPR, GSPR, FPR, etc.) rather than
individual cases in execute1 setting result_en. The computation of
the data to be written back is now independent of detection of various
exception conditions. We now have an if block determining if any
exception condition exists which prevents the next instruction from
being executed, then the case statement which performs actions such as
setting carry/overflow bits, determining if a trap exception exists,
doing branches, etc., then an if statement for all the r.busy = 1
cases (continuing execution of an instruction which was started in a
previous cycle, or writing SRR1 for an interrupt).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 10 Oct 2020 03:34:01 +0000 (14:34 +1100)]
decode1: Implement tlbsync as a no-op
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 26 Sep 2020 07:19:57 +0000 (17:19 +1000)]
core: Make result multiplexing explicit
This adds an explicit multiplexer feeding v.e.write_data in execute1,
with the select lines determined in the previous cycle based on the
insn_type. Similarly, for multiply and divide instructions, there is
now an explicit multiplexer.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 22 Sep 2020 00:03:30 +0000 (10:03 +1000)]
decode1: Implement obsolete dst, dstst, dss instructions as no-ops
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 16 Dec 2020 09:41:08 +0000 (20:41 +1100)]
execute1: Move branch adder after register
This does the addition of the instruction NIA and the branch offset
after the register at the output of execute1 rather than before.
The propagation through the adder was showing up as a critical path
on the A7-100. Performance is unaffected and now it makes timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 12 Dec 2020 01:38:06 +0000 (12:38 +1100)]
decode: Add a facility field to the instruction decode tables
This makes it simpler to work out when to deliver a FPU unavailable
interrupt. This also means we can get rid of the OP_FPLOAD and
OP_FPSTORE insn_type values.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 16 Dec 2020 08:32:07 +0000 (19:32 +1100)]
decode1: Take an extra cycle for predicted branch redirects
This does the addition of NIA plus the branch offset from the
instruction after a clock edge, in order to ease timing, as the path
from the icache RAM through the adder in decode1 to the NIA register
in fetch1 was showing up as a critical path.
This adds one extra cycle of latency when redirecting fetch because of
a predicted-taken branch.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 14 Sep 2020 08:21:27 +0000 (18:21 +1000)]
tests: Add tests for lq/stq and lqarx/stqcx.
Lq and stq are tested in both BE and LE modes (though only 64-bit
mode) by the 'modes' test.
Lqarx and stqcx. are tested by the 'reservation' test in LE mode
(64-bit).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 31 Oct 2020 02:48:58 +0000 (13:48 +1100)]
loadstore1/dcache: Send store data one cycle later
This makes timing easier and also means that store floating-point
single precision instructions no longer need to take an extra cycle.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 12 Sep 2020 10:35:03 +0000 (20:35 +1000)]
core: Implement quadword loads and stores
This implements the lq, stq, lqarx and stqcx. instructions.
These instructions all access two consecutive GPRs; for example the
"lq %r6,0(%r3)" instruction will load the doubleword at the address
in R3 into R7 and the doubleword at address R3 + 8 into R6. To cope
with having two GPR sources or destinations, the instruction gets
repeated at the decode2 stage, that is, for each lq/stq/lqarx/stqcx.
coming in from decode1, two instructions get sent out to execute1.
For these instructions, the RS or RT register gets modified on one
of the iterations by setting the LSB of the register number. In LE
mode, the first iteration uses RS|1 or RT|1 and the second iteration
uses RS or RT. In BE mode, this is done the other way around. In
order for decode2 to know what endianness is currently in use, we
pass the big_endian flag down from icache through decode1 to decode2.
This is always in sync with what execute1 is using because only rfid
or an interrupt can change MSR[LE], and those operations all cause
a flush and redirect.
There is now an extra column in the decode tables in decode1 to
indicate whether the instruction needs to be repeated. Decode1 also
enforces the rule that lq with RA = RT and lqarx with RA = RT or
RB = RT are illegal.
Decode2 now passes a 'repeat' flag and a 'second' flag to execute1,
and execute1 passes them on to loadstore1. The 'repeat' flag is set
for both iterations of a repeated instruction, and 'second' is set
on the second iteration. Execute1 does not take asynchronous or
trace interrupts on the second iteration of a repeated instruction.
Loadstore1 uses 'next_addr' for the second iteration of a repeated
load/store so that we access the second doubleword of the memory
operand. Thus loadstore1 accesses the doublewords in increasing
memory order. For 16-byte loads this means that the first iteration
writes GPR RT|1. It is possible that RA = RT|1 (this is a legal
but non-preferred form), meaning that if the memory operand was
misaligned, the first iteration would overwrite RA but then the
second iteration might take a page fault, leading to corrupted state.
To avoid that possibility, 16-byte loads in LE mode take an
alignment interrupt if the operand is not 16-byte aligned. (This
is the case anyway for lqarx, and we enforce it for lq as well.)
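A sketch of just the register-number selection described above
(invented entity and ports, not the actual decode2 code):

    library ieee;
    use ieee.std_logic_1164.all;

    entity lq_regnum_sketch is
        port (
            rt         : in  std_ulogic_vector(4 downto 0);  -- RT field (even for lq/stq)
            second     : in  std_ulogic;                      -- '1' on the second iteration
            big_endian : in  std_ulogic;
            gpr        : out std_ulogic_vector(4 downto 0)
        );
    end entity lq_regnum_sketch;

    architecture behaviour of lq_regnum_sketch is
    begin
        -- LE: first iteration uses RT|1, second uses RT; BE: the other way
        -- around.  Since RT is even, "RT|1" just sets the LSB.
        gpr <= rt(4 downto 1) & (not second) when big_endian = '0' else
               rt(4 downto 1) & second;
    end architecture behaviour;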
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 28 Sep 2020 04:02:03 +0000 (14:02 +1000)]
loadstore1: Improve timing of data path from cache RAM to writeback
Work out select inputs for writeback mux a cycle earlier.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 30 Oct 2020 11:08:54 +0000 (22:08 +1100)]
dcache: Add more commentary, no code change
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 21 Sep 2020 01:41:46 +0000 (11:41 +1000)]
loadstore1: Decide on load formatting controls a cycle earlier
This helps timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 26 Nov 2020 11:08:47 +0000 (22:08 +1100)]
decode1: Fix decoding of recommended NOP instruction
We were decoding nop with the wrong major opcode. Fix it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 15 Dec 2020 22:34:56 +0000 (09:34 +1100)]
core_debug: Stop logging 256 cycles after trigger
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 12 Nov 2020 04:06:38 +0000 (15:06 +1100)]
core_debug: Add an address trigger to stop logging at a given address
This compares the address being fetched with the contents of a
register that can be set via DMI, and if they match, stops the
logging. Since this works on the address being fetched rather than
executed, it is subject to false positives.
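A sketch of the trigger itself (invented names, not the actual
core_debug code, which also has to make the register writable over
DMI):

    library ieee;
    use ieee.std_logic_1164.all;

    entity log_trigger_sketch is
        port (
            clk          : in  std_ulogic;
            fetch_nia    : in  std_ulogic_vector(63 downto 0);
            trigger_addr : in  std_ulogic_vector(63 downto 0);  -- written over DMI
            trigger_en   : in  std_ulogic;
            log_enable   : out std_ulogic
        );
    end entity log_trigger_sketch;

    architecture behaviour of log_trigger_sketch is
        signal enable : std_ulogic := '1';
    begin
        log_enable <= enable;

        process(clk)
        begin
            if rising_edge(clk) then
                -- the match is on the fetch address, hence the possible
                -- false positives mentioned above
                if trigger_en = '1' and fetch_nia = trigger_addr then
                    enable <= '0';
                end if;
            end if;
        end process;
    end architecture behaviour;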
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Mon, 21 Sep 2020 01:37:10 +0000 (11:37 +1000)]
FPU: Don't use mask generator for rounding
Instead of using the mask generator in the rounding process, this uses
simpler logic to add in a 1 at the appropriate position (bit 2 or bit
31, depending on precision) and mask off the low-order bits. Since
there are only two positions at which the masking and incrementing
need to be done, we don't need the full generality of the mask
generator. This reduces the amount of logic and improves timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 19 Sep 2020 09:01:49 +0000 (19:01 +1000)]
FPU: Relax timing around multiplier output
At present there is a state transition in the handling of the fmadd
instructions where the next state depends on the sign bit of the
multiplier result. This creates a critical path which doesn't make
timing on the A7-100. To fix this, we make the state transition
independent of the sign of the multiplier result, which improves
timing, but means we take one more cycle to do a fmadd-family
instruction in some cases.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 22 Sep 2020 10:33:08 +0000 (20:33 +1000)]
mw_debug: Display terminated status when stopping
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 22 Sep 2020 10:22:24 +0000 (20:22 +1000)]
mw_debug: Extend to handle FPRs
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 13 Jan 2021 08:51:46 +0000 (19:51 +1100)]
Arty A7: Document pin connections for on-board headers
This adds, as comments, lines which, if uncommented, would define
properties associating the pins of the headers on the Arty A7
board with FPGA pins. It also adds properties for LEDs 1--3, also
commented out for now.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 13 Jan 2021 08:45:57 +0000 (19:45 +1100)]
execute1: Update comments about XER forwarding
This deletes some commentary that is now out of date and replaces it
with a simple statement about the XER common bits being forwarded from
the output of execute1 to the input.
The comment being deleted talked about a hazard if an instruction that
modifies XER[SO] is immediately followed by a store conditional. That
is no longer a problem because the operands for loadstore1 are sent
from execute1 (and therefore have the forwarded value) rather than
decode2. This was in fact fixed in 5422007f83bf ("Plumb
loadstore1 input from execute1 not decode2", 2020-01-14).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 7 Jan 2021 03:47:11 +0000 (14:47 +1100)]
Merge pull request #263 from antonblanchard/reset-pid
Initialize PID register
Paul Mackerras [Thu, 7 Jan 2021 03:46:42 +0000 (14:46 +1100)]
Merge pull request #262 from antonblanchard/reset-tb-decr
Reset TB and DECR
Paul Mackerras [Thu, 7 Jan 2021 03:46:04 +0000 (14:46 +1100)]
Merge pull request #259 from antonblanchard/dmi-reset
Reset JTAG/DMI
Anton Blanchard [Tue, 5 Jan 2021 09:17:37 +0000 (20:17 +1100)]
Merge pull request #265 from antonblanchard/another-spi-rxtx-reset-issu
Fix another reset issue in spi_rxtx
Anton Blanchard [Tue, 5 Jan 2021 09:16:50 +0000 (20:16 +1100)]
Merge pull request #264 from antonblanchard/reset-spi-txrx
Reset cmd_ready_o in spi_txrx
Anton Blanchard [Sun, 3 Jan 2021 05:46:28 +0000 (16:46 +1100)]
Initialize PID register
If the PID register is read before it is written we'll consume
X state data.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 3 Jan 2021 19:04:02 +0000 (06:04 +1100)]
Fix another reset issue in spi_rxtx
The counter was X state after reset; initialize it.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 3 Jan 2021 18:44:23 +0000 (05:44 +1100)]
Reset cmd_ready_o in spi_txrx
Initialize bit_count so that cmd_ready_o isn't X state immediately
after reset.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 3 Jan 2021 05:07:46 +0000 (16:07 +1100)]
Reset TB and DECR
We don't care what the values of TB and DECR are after reset, but we
don't want the X state to propagate to other parts of the chip.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Mon, 21 Dec 2020 03:36:19 +0000 (14:36 +1100)]
Merge pull request #261 from antonblanchard/wishbone_layout
Make wishbone_master_out and wb_io_master_out match
Anton Blanchard [Mon, 21 Dec 2020 00:41:19 +0000 (11:41 +1100)]
Merge pull request #260 from paulusmack/misc
soc: Drive uart1_irq to 0 when we don't have UART1
Anton Blanchard [Sun, 20 Dec 2020 10:11:17 +0000 (21:11 +1100)]
Make wishbone_master_out and wb_io_master_out match
This makes it easier to parse the records in verilog because they
are getting flattened into an array of bits by ghdl/yosys.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Paul Mackerras [Sat, 19 Dec 2020 06:11:53 +0000 (17:11 +1100)]
fetch1: Fix debug stop
The ability to stop the core using the debug interface has been broken
since commit bb4332b7e6b5 ("Remove fetch2 pipeline stage"), which
removed a statement that cleared the valid bit on instructions when
their stop_mark was 1.
Fix this by clearing r.req coming out of fetch1 when r.stop_mark = 1.
This has the effect of making i_out.valid be 0 from the icache. We
also fix a bug in icache.vhdl where it was not honouring i_in.req when
use_previous = 1.
It turns out that the logic in fetch1.vhdl to handle stopping and
restarting was not correct, with the effect that stopping the core
would leave NIA pointing to the last instruction executed, not the
next instruction to be executed. In fact the state machine is
unnecessary and the whole thing can be simplified enormously - we
need to increment NIA whenever stop_in = 0 in the previous cycle.
Fixes: bb4332b7e6b5 ("Remove fetch2 pipeline stage")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 17 Dec 2020 01:15:31 +0000 (12:15 +1100)]
soc: Drive uart1_irq to 0 when we don't have UART1
The tools complain about uart1_irq not being driven and not having a
default when HAS_UART1 is false. This sets it to 0 in that case.
Fixes: 7575b1e0c2b1 ("uart: Import and hook up opencore 16550 compatible UART")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Anton Blanchard [Tue, 15 Dec 2020 03:27:26 +0000 (14:27 +1100)]
Reset JTAG/DMI
The request signal is never initialized, so we leak X state control
signals to other parts of the core (e.g. dmi_wr). Add a reset.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Michael Neuling [Mon, 14 Dec 2020 21:54:56 +0000 (08:54 +1100)]
Merge pull request #256 from antonblanchard/flash-reset
Fix a few reset issues in flash controller
Paul Mackerras [Mon, 14 Dec 2020 21:51:33 +0000 (08:51 +1100)]
Merge pull request #257 from antonblanchard/nofpu-fix
Fully initialize FPU buses when FPU is disabled
Anton Blanchard [Mon, 14 Dec 2020 05:54:07 +0000 (16:54 +1100)]
Fix an issue in flash controller when BOOT_CLOCKS is false
If BOOT_CLOCKS is false we currently get stuck in the flash
state machine. This patch from Ben fixes it.
Also fix an X state issue I see in Icarus Verilog where we need
to reset auto_state.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sun, 13 Dec 2020 05:01:45 +0000 (16:01 +1100)]
Fully initialize FPU buses when FPU is disabled
Some of the bits in the FPU buses end up as Z state. Yosys
flags them, so we may as well clean them up.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Sat, 12 Dec 2020 02:19:52 +0000 (13:19 +1100)]
Fix a few reset issues in flash controller
Our flash controller fails when simulating with iverilog. Looking
closer, both wb_stash and auto_last_addr are X state, and things
fall apart after they get used.
Initialising them both fixes the iverilog issue.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Tue, 8 Dec 2020 10:36:00 +0000 (21:36 +1100)]
Merge pull request #255 from antonblanchard/log-length
Add LOG_LENGTH to top-generic.vhdl
Anton Blanchard [Tue, 8 Dec 2020 10:35:25 +0000 (21:35 +1100)]
Merge pull request #254 from antonblanchard/fix-verilator
Add verilator FPGA target
Anton Blanchard [Tue, 8 Dec 2020 08:18:34 +0000 (19:18 +1100)]
Add LOG_LENGTH to top-generic.vhdl
The other top level files allow LOG_LENGTH to be configured.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Mon, 7 Dec 2020 23:50:48 +0000 (10:50 +1100)]
Add verilator FPGA target
Our Makefiles need some work, but for now create an FPGA target:
make FPGA_TARGET=verilator microwatt-verilator
ghdl and yosys can be run in containers using the PODMAN=1 or
DOCKER=1 options.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Mon, 7 Dec 2020 11:04:46 +0000 (22:04 +1100)]
Merge pull request #253 from antonblanchard/fix-verilator
Fix verilator build
Anton Blanchard [Mon, 7 Dec 2020 10:07:14 +0000 (21:07 +1100)]
Fix verilator build
yosys and verilator did not like us passing in the verilog and
exporting it again. Pass the source directly to verilator instead.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Michael Neuling [Mon, 7 Dec 2020 05:20:09 +0000 (16:20 +1100)]
Merge pull request #252 from antonblanchard/hello-world-in-8k
Reduce hello_world footprint to fit in 8kB
Anton Blanchard [Sun, 6 Dec 2020 20:17:38 +0000 (07:17 +1100)]
Fix ghdl warning due to variable shadowing in icache
Fix a couple of ghdl warnings:
icache.vhdl:387:21:warning: declaration of "i" hides constant "i" [-Whide]
icache.vhdl:400:17:warning: declaration of "i" hides constant "i" [-Whide]
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Anton Blanchard [Thu, 3 Dec 2020 09:29:40 +0000 (20:29 +1100)]
Reduce hello_world footprint to fit in 8kB
When building with yosys we assume hello_world fits in 8kB. There's
enough free space that we can adjust the linker script to make it fit.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Michael Neuling [Tue, 1 Dec 2020 00:25:08 +0000 (11:25 +1100)]
Merge pull request #249 from paulusmack/master
Sundry bug fixes, plus implement mtmsr
Michael Neuling [Tue, 1 Dec 2020 00:10:41 +0000 (11:10 +1100)]
Merge pull request #250 from umarcor/containers
makefile: update synthesis containers
Michael Neuling [Tue, 1 Dec 2020 00:06:19 +0000 (11:06 +1100)]
Merge pull request #251 from umarcor/ci/containers
ci: use job.container
umarcor [Mon, 30 Nov 2020 21:17:13 +0000 (22:17 +0100)]
ci: use job.container
Signed-off-by: umarcor <unai.martinezcorral@ehu.eus>
umarcor [Thu, 26 Nov 2020 05:14:55 +0000 (06:14 +0100)]
makefile: update synthesis containers
Signed-off-by: umarcor <unai.martinezcorral@ehu.eus>
umarcor [Mon, 30 Nov 2020 21:07:57 +0000 (22:07 +0100)]
makefile: whitespace cleanup
Signed-off-by: umarcor <unai.martinezcorral@ehu.eus>
Paul Mackerras [Tue, 24 Nov 2020 01:00:48 +0000 (12:00 +1100)]
tests/misc: Add a test for correct CTR and LR updating by branches
This adds a test with a bdnzl followed immediately by a bdnz, to check
that CTR and LR both get evaluated and written back correctly in this
situation.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 24 Nov 2020 00:53:17 +0000 (11:53 +1100)]
execute1: Fix forwarding of result when doing delayed LR update
Random execution testcases showed that a bdnzl which doesn't branch,
followed immediately by a bdnz, uses the wrong value for CTR for the
bdnz. Decode2 detects the read-after-write hazard on CTR and tells
execute1 to use the bypass path. However, the bdnzl takes two cycles
because it has to write back both CTR and LR, meaning that by the time
the bdnz starts to execute, r.e.write_data no longer contains the CTR
value, but instead contains zero.
To fix this, we make execute1 maintain the written-back value of CTR
in r.e.write_data across the cycle where LR is written back (this is
possible because the LR writeback uses the exc_write_data path).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 21 Nov 2020 02:54:14 +0000 (13:54 +1100)]
execute1: Fix writing LR for bdnzl/bdzl instructions
Branch instructions which do a redirect and write both CTR and LR were
not doing the write to LR due to a logic error. This fixes it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 21 Nov 2020 01:45:24 +0000 (12:45 +1100)]
core: Implement mtmsr instruction
This is like mtmsrd except it only alters the lower 32 bits of the MSR.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Fri, 25 Sep 2020 08:14:18 +0000 (18:14 +1000)]
tests/trace: Test trace interrupt vs. FP unavailable interrupt
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Sat, 3 Oct 2020 10:08:11 +0000 (20:08 +1000)]
execute1: Fix bug in trace interrupt vs. ITLB miss
If an instruction fetch results in an instruction TLB miss, an
OP_FETCH_FAILED instruction is sent down the pipe. If the MSR[TE]
field is set for instruction tracing, the core currently considers
that executing the OP_FETCH_FAILED counts as having executed one
instruction and so generates a trace interrupt on the next valid
instruction, meaning that the trace interrupt happens before the
desired instruction rather than after it.
Fix this by not tracing OP_FETCH_FAILED instructions.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Michael Neuling [Thu, 17 Sep 2020 02:04:26 +0000 (12:04 +1000)]
Merge pull request #245 from paulusmack/fpu
Add a simple FPU
Michael Neuling [Thu, 17 Sep 2020 01:43:54 +0000 (11:43 +1000)]
Merge pull request #244 from paulusmack/master
Implement trace interrupts plus decode improvements
Paul Mackerras [Sat, 12 Sep 2020 10:13:24 +0000 (20:13 +1000)]
FPU: Do masking after adder rather than on A input
The masking enabled by opsel_amask is only used when rounding, to trim
the rounded result to the required precision. We now do the masking
after the adder rather than before (on the A input). This gives the
same result and helps timing. The path from r.shift through the mask
generator and adder to v.r was showing up as a critical path.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 1 Sep 2020 05:28:19 +0000 (15:28 +1000)]
FPU: Decide on mask length a cycle earlier
This moves longmask into the reg_type record, meaning that it now
needs to be decided a cycle earlier, in order to help timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 1 Sep 2020 05:09:17 +0000 (15:09 +1000)]
FPU: Decide on A input selection a cycle earlier
This moves opsel_a into the reg_type record, meaning that the A
multiplexer input now needs to be decided a cycle earlier. This helps
timing by eliminating the combinatorial path from r.state and other
things to opsel_a and thence to in_a and result.
This means that some things now take an extra cycle, in particular
some of the exception cases such as when one or both operands are
NaNs. The NaN handling has been moved out to its own state, which
simplifies the logic for exception cases in other places. Additions
or subtractions where FRB's exponent is smaller than FRA's will
also take an extra cycle.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 1 Sep 2020 01:13:17 +0000 (11:13 +1000)]
FPU: Add comments specifying the expectation of r.shift for each state
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>