Properly cache assertions in static learning preprocessing pass. (#7242)
authorMathias Preiner <mathias.preiner@gmail.com>
Thu, 30 Sep 2021 17:20:04 +0000 (10:20 -0700)
committerGitHub <noreply@github.com>
Thu, 30 Sep 2021 17:20:04 +0000 (17:20 +0000)
commit5a824c9e67789a529e34b7e2a7229986412bf979
tree94391068e0b6f074937015aae86d67f258f5938a
parent720e3afc907f429cf8105ee49b3628ddaacbf7a3
Properly cache assertions in static learning preprocessing pass. (#7242)

The cache is user-context-dependent and guarantees that ppStaticLearn is called only once per assertion in a given context: if assertions are popped from the context and added again, ppStaticLearn is called on them again. Further, the preprocessing pass now also handles the splitting of top-level AND terms; previously, each theory was responsible for splitting and caching. Refactoring the corresponding ppStaticLearn theory methods will be part of future PRs.

Marking do-not-merge until the cluster runs have finished.
src/preprocessing/passes/static_learning.cpp
src/preprocessing/passes/static_learning.h