OpenCAPI, assuming that the map of edges in any given arbitrary data graph
could be kept by the main CPU in-memory, could distribute and delegate
a limited-capability deterministic but most importantly *data-dependent*
node-walking schedule actually right down into the memory itself (on the
other side of that L1-4 cache barrier). A miniature processor
(non-Turing-complete) analysed
the data it had read (at the Memory), and determined if it should
notify the main processor that this "Node" is worth investigating,
or if the Graph node-walk should split in a different direction.
Thanks to the OpenCAPI Standard, which takes care of the Virtual Memory
abstraction and cache-coherency transparently, most of the usual complexity
of near-memory processing simply disappears: very few designs have
had something
as powerful as OpenCAPI as part of that picture.
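
To make the delegation concrete, here is a minimal C sketch of the control
flow described above, assuming a hypothetical near-memory walker with a
bounded fan-out and a fixed predicate. Every name here is illustrative only;
it is not the OpenCAPI API nor any vendor's implementation.

```c
/* Sketch: a tiny engine beside the DRAM reads a graph node, applies a
 * fixed, bounded predicate (standing in for the non-Turing-complete
 * analysis), and either flags the node for the main CPU or continues
 * (splits) the walk along its edges.  Real hardware would sit below the
 * L1-4 caches; this only illustrates the decision logic.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_EDGES   8     /* bounded fan-out keeps the walker non-Turing-complete */
#define QUEUE_DEPTH 64

typedef struct {
    uint64_t payload;              /* data stored at the node (read at the Memory) */
    uint32_t edge[MAX_EDGES];      /* indices of neighbouring nodes                */
    uint32_t edge_count;
} node_t;

/* fixed predicate: "is this node worth the main CPU's attention?" */
static bool node_is_interesting(const node_t *n, uint64_t threshold)
{
    return n->payload > threshold;
}

/* one step of the delegated, data-dependent walk */
static void walk_step(const node_t *graph, uint32_t idx, uint64_t threshold,
                      uint32_t *queue, uint32_t *tail)
{
    const node_t *n = &graph[idx];

    if (node_is_interesting(n, threshold)) {
        /* stand-in for raising a message/interrupt to the main processor */
        printf("notify CPU: node %u\n", (unsigned)idx);
        return;
    }
    /* otherwise the walk splits along this node's edges */
    for (uint32_t e = 0; e < n->edge_count && *tail < QUEUE_DEPTH; e++)
        queue[(*tail)++] = n->edge[e];
}

int main(void)
{
    node_t graph[3] = {
        { .payload = 5,   .edge = {1, 2}, .edge_count = 2 },
        { .payload = 100, .edge_count = 0 },
        { .payload = 7,   .edge_count = 0 },
    };
    uint32_t queue[QUEUE_DEPTH] = { 0 };   /* start the walk at node 0 */
    uint32_t head = 0, tail = 1;

    while (head < tail)
        walk_step(graph, queue[head++], /*threshold=*/50, queue, &tail);
    return 0;
}
```
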
The fact that Neural Networks may be expressed as arbitrary Graphs,
and comprise Sparse Matrices, should also be noted by the reader
interested in AI.

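Since that observation is the hinge between graph processing and AI
workloads, a short C sketch may help: the CSR (Compressed Sparse Row)
arrays below are simultaneously a sparse weight matrix and an adjacency
list, so a sparse matrix-vector multiply visits exactly the same edges a
graph walk would. The layout and names are illustrative, not taken from
any particular library.

```c
/* A CSR sparse matrix *is* an adjacency list: row_ptr/col_idx describe
 * the edges of a 3-node weighted digraph, and the loop below is both a
 * sparse matrix-vector multiply and a single hop over every node's edges.
 */
#include <stdio.h>

#define ROWS 3

int main(void)
{
    int    row_ptr[ROWS + 1] = { 0, 2, 3, 4 };  /* edges of node i: [row_ptr[i], row_ptr[i+1]) */
    int    col_idx[4]        = { 1, 2, 2, 0 };  /* destination node of each edge               */
    double weight[4]         = { 0.5, -1.0, 2.0, 0.25 };
    double x[ROWS] = { 1.0, 1.0, 1.0 };         /* e.g. activations feeding a sparse NN layer  */
    double y[ROWS] = { 0.0, 0.0, 0.0 };

    for (int i = 0; i < ROWS; i++)              /* SpMV == walk each node's outgoing edges     */
        for (int e = row_ptr[i]; e < row_ptr[i + 1]; e++)
            y[i] += weight[e] * x[col_idx[e]];

    for (int i = 0; i < ROWS; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}
```
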
**Snitch**
Snitch is an elegant Memory-Coherent Barrel-Processor where registers