with no additional state,
including when the PC was not aligned on a 4-byte boundary.
-Whilst (1) and (3) make perfect sense, (2) makes no sense at all given that, inspection of "ori" and others, I-Form 16 bit immediates is the "norm" for v2.06/7 and v3.0B standard instructions. (2) in effect **is** a 32 bit instruction. (2) **is not** a 16 bit instuction. Why "reinvent" an encoding that is 32 bit, when there already exists a 32 bit encoding that does the exact same job?
+Whilst (1) and (3) make perfect sense, (2) makes no sense at all given that, as inspection of "ori" and others shows, I-Form 16 bit immediates are the "norm" for v2.06/7 and v3.0B standard instructions. (2) in effect **is** a 32 bit instruction. (2) **is not** a 16 bit instruction.
+
+*Why "reinvent" an encoding that is 32 bit, when there already exists a 32 bit encoding that does the exact same job?*
Consequently, we do **not** envisage a scenario where (2) would ever be implemented, nor one where this Compressed Encoding would in future be extended beyond 16 bit. Compressed is Compressed and is **by definition** limited to precisely - and only - 16 bit.
-The additional reason why that is the case is because VLE is exceptionally complex to implement. In a single-issue, low clock rate, "Embedded" environment for which VLE was originally designed, VLE was perfectly well matched.
+A further reason is that VLE is exceptionally complex to implement. In the single-issue, low clock rate "Embedded" environment for which VLE was originally designed, VLE was perfectly well matched.
However, this Compressed Encoding is designed for high-performance multi-issue systems *as well as* Embedded scenarios, and consequently the complexity of "deep packet inspection" down into a 16 bit sequence, merely to ascertain whether it might not be 16 bit after all, is wholly unacceptable.
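
To illustrate the asymmetry, here is a hypothetical sketch (not either scheme's actual decoder): with a pure 16 bit Compressed Encoding every instruction boundary in a fetch block is known up front, with no inspection of the bytes at all, whereas a VLE-style mixed 16/32 bit stream forces the decoder to examine each halfword before it knows where the next instruction starts. The `is_32bit` predicate below is a stand-in for whatever prefix-bit test such an encoding would use.

```python
def boundaries_fixed16(fetch_block: bytes) -> list[int]:
    """Pure 16 bit encoding: every boundary is simply a multiple of 2,
    so a multi-issue front end can carve up the block in parallel."""
    assert len(fetch_block) % 2 == 0
    return list(range(0, len(fetch_block), 2))

def boundaries_vle_style(fetch_block: bytes, is_32bit) -> list[int]:
    """VLE-style encoding: the position of instruction N depends on the
    contents of every instruction before it, an inherently serial scan."""
    assert len(fetch_block) % 2 == 0
    offs, pos = [], 0
    while pos < len(fetch_block):
        offs.append(pos)
        halfword = int.from_bytes(fetch_block[pos:pos + 2], "big")
        pos += 4 if is_32bit(halfword) else 2
    return offs
```
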