Software Constructs For Heavy Relational SAT
This topic contains 3 replies, has 1 voice, and was last updated by josh on December 19, 2021 at 2:16 pm.
December 19, 2021 at 2:02 pm #108167

josh
In Prolog, a “top level” function examining each tuple can call many helper functions that see partially fixed versions of a proposed tuple try while they look for success or failure of the remaining tuple positions. In a traditional software context, the “trail” can be thought of as a hashmap over variables where each value is a stack of tries; search operations push new entries onto each stack or pop the stacks back up to some given backtracking point. There are various tricks for speeding up the popping of all stacks back to a given search branch point, and Prolog optimizations, which try to use this form for all computations, emphasize that efficiency. In our proposed context we care more about other features & more efficient search *strategies* than about the constant (K) factor of stack popping.
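As a minimal sketch, assuming nothing beyond the description above (the names Trail, bind, mark, and undoTo are mine, not an existing library’s API), the hashmap-of-stacks trail might look like this in TypeScript, chosen only because of the JavaScript/JSON comparison in the edit below:

```typescript
// Hypothetical sketch: the "trail" as a map from variable name to a stack of
// tentative bindings, plus a log of which variables were touched so a search
// can pop everything back to a recorded branch point.

type Value = string | number | boolean | null;

class Trail {
  private stacks = new Map<string, Value[]>(); // per-variable stack of tries
  private log: string[] = [];                  // which variable each push touched

  // A "mark" is just the current log length; branch points record it.
  mark(): number {
    return this.log.length;
  }

  // Push a tentative binding for one tuple position.
  bind(variable: string, value: Value): void {
    const stack = this.stacks.get(variable) ?? [];
    stack.push(value);
    this.stacks.set(variable, stack);
    this.log.push(variable);
  }

  // Most recent binding, or undefined if the variable is still a wildcard.
  lookup(variable: string): Value | undefined {
    const stack = this.stacks.get(variable);
    return stack && stack.length > 0 ? stack[stack.length - 1] : undefined;
  }

  // Pop every stack back to the given branch point.
  undoTo(mark: number): void {
    while (this.log.length > mark) {
      const variable = this.log.pop()!;
      this.stacks.get(variable)!.pop();
    }
  }
}
```

A helper fixing some of the remaining tuple positions would record mark(), bind() a few tentative values, and undoTo() that mark on failure; the popping-speed tricks mentioned above would live inside undoTo().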
Edit: Structures that are typically built & passed around in Prolog are very much like JavaScript objects/JSON – with some fields holding wildcard variables that have not yet been instantiated along some search branch. If we generalize away from Prolog’s depth-first searching, then variable-try backtracking runs along branches of an acyclic tree rather than a stack. An interesting algorithm class to try for efficiency here is borrowing algorithms from garbage collectors: not for the literal memory reclaim (the host language handles that), but using the same basic algorithm for epoch reclaim when particular depths of certain branches are no longer “roots” (think more like real plants here!). Backtracking management could then run that GC with this definition of roots.
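A rough sketch of that tree-shaped generalization (BranchStore, extend, and reclaim are hypothetical names, not a real API): bindings sit in nodes linked by parent pointers, the live branch tips play the role of the “roots”, and a mark-and-sweep over parent links does the epoch reclaim, while the host language does the literal memory reclaim once nodes fall out of the set.

```typescript
// Hypothetical sketch: each node records one binding along some branch of the
// acyclic search tree. Reclaiming an epoch means keeping only nodes still
// reachable by walking parent links from the live branch tips.

interface BindingNode {
  parent: BindingNode | null; // toward the start of the search
  variable: string;
  value: unknown;
}

class BranchStore {
  private nodes = new Set<BindingNode>();

  // Grow a branch by one binding; the returned node is the new tip.
  extend(tip: BindingNode | null, variable: string, value: unknown): BindingNode {
    const node: BindingNode = { parent: tip, variable, value };
    this.nodes.add(node);
    return node;
  }

  // "Run the GC": mark everything reachable from live tips, drop the rest.
  reclaim(liveTips: Iterable<BindingNode>): void {
    const marked = new Set<BindingNode>();
    for (const tip of liveTips) {
      for (let n: BindingNode | null = tip; n !== null && !marked.has(n); n = n.parent) {
        marked.add(n);
      }
    }
    this.nodes = marked; // abandoned branches become garbage for the host GC
  }

  size(): number {
    return this.nodes.size;
  }
}
```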
Say we generalize & parallelize Prolog with a framework that optimizes an extended real-valued goodness/penalty function (fail is a specially bad value). At some such step we try variations V1, V2, …, VN for subsets of our tuple substitution. The results may frequently contain a tie for “best” (meaning just “okay so far”) & a bunch of failure values. There may also be okay values that are worse than best. Now what is the fastest way to clean up & launch the next search probe in cases where we are not done? My choice of search algorithm may have had a say in picking the V1…VN. Abstractly, I am moving forward by incrementing the counter for my search branch epoch and cleaning up the failed or unwanted set of Vi, pruning away all the garbage that was created after their last acceptable choice. The best implementation of that may depend on issues like whether my search algorithm is setting variables one by one or in large groups. So maybe my generic version here runs globalCleanUpTheBadResults(Me) (possibly a no-op) and resetCandidateStackTheBadResults(Me) (could also be a no-op, but the two functions need to coordinate), and then also coalesceIdenticalTries(Me) where tries that started from different orderings are currently overlapping going forward.
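A hedged sketch of one such probe step, reusing the hook names from this post (the signatures are my guesses, and Number.POSITIVE_INFINITY stands in for the special fail value):

```typescript
// One probe step: score the tried variations, keep the tie for "best",
// hand the failures & unwanted ones to the cleanup hooks, then coalesce
// tries that have converged on the same partial assignment.

interface Try {
  id: string;
  bindings: Record<string, unknown>; // the subset of tuple positions this Vi fixes
  penalty: number;                   // lower is better; Infinity means failure
}

function probeStep(
  tries: Try[],
  hooks: {
    globalCleanUpTheBadResults: (bad: Try[]) => void;       // e.g. release shared state
    resetCandidateStackTheBadResults: (bad: Try[]) => void;  // e.g. prune their branches
    coalesceIdenticalTries: (kept: Try[]) => Try[];          // merge overlapping orderings
  }
): Try[] {
  const okay = tries.filter(t => Number.isFinite(t.penalty));
  const failed = tries.filter(t => !Number.isFinite(t.penalty));

  // Ties for "best" just mean "okay so far"; keep all of them for the next probe.
  const best = Math.min(...okay.map(t => t.penalty));
  const keep = okay.filter(t => t.penalty === best);
  const unwanted = okay.filter(t => t.penalty !== best);

  // The two cleanup hooks must coordinate; either may be a no-op depending on
  // whether the search sets variables one by one or in large groups.
  hooks.globalCleanUpTheBadResults([...failed, ...unwanted]);
  hooks.resetCandidateStackTheBadResults([...failed, ...unwanted]);

  // Merge tries that arrived at the same partial assignment from different orderings.
  return hooks.coalesceIdenticalTries(keep);
}
```

Whether the two cleanup hooks do real work or are no-ops would depend, as noted, on how many variables the surrounding search algorithm sets per step.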
December 19, 2021 at 2:14 pm #108168

josh
A good example of high complexity: a real-time strategy game AI trying to decide the when/if/how/what of launching a risky campaign.
December 19, 2021 at 2:16 pm #108169

josh
When & where to do some grocery shopping is a similar example. Or when, where, & how to build a custom house.