3-SAT – Why is *this* algorithm slow?



    Josh Stern

    Proving P != NP is hard to think about because it means proving that every efficient algorithm fails in some cases. Since it’s knife-edge hard, we expect any such proof to use up all of the problem’s difficulty.

    3-SAT is tight. So perhaps we can say that any algorithm for answering the 3-SAT question must proceed through a rational series of steps regarding what it can conclude about the entire set of conjuncts and variables in the problem state space at any given stage. Perhaps that is the way to represent an “algorithm” in this negative quest: a “proof so far,” or a discrete “probability so far,” that advances in stages. Can one prove that for every rationally valid prefix of such a sequence, there is a way of arranging the unexamined part of the input so that the algorithm still needs to gain an amount of information that is exponential in the problem size? No… for any given N, we could pre-construct a lookup table for the problem form that would give a polynomial time & space solution (assuming an arbitrary sort ordering of the conjuncts, with (N choose 3)·2^3 syntactically distinct possibilities for each conjunct). But the nature of 3-SAT is that if we stop building tables at any given finite N, the complexity of solving for larger N using those smaller tables grows exponentially again.
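
    A minimal sketch of that lookup-table idea, assuming a canonical form where each conjunct is a tuple of three signed variable indices and an instance is a sorted tuple of conjuncts (the names and the brute-force check below are just illustrative):

```python
from itertools import combinations, product

N = 3  # fix the number of variables; the table blows up as N grows

# Every syntactically distinct 3-conjunct on N variables:
# choose 3 variables, then negate each independently -> C(N,3) * 2^3.
ALL_CLAUSES = [
    tuple(v if s else -v for v, s in zip(vars3, signs))
    for vars3 in combinations(range(1, N + 1), 3)
    for signs in product([True, False], repeat=3)
]

def satisfiable(clauses):
    """Brute force: does some assignment satisfy every conjunct?"""
    for bits in product([False, True], repeat=N):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Precompute the answer for every instance with up to MAX_M conjuncts.
# Lookup is then one hash probe -- polynomial at this fixed N, but the
# table itself grows exponentially once N is allowed to increase.
MAX_M = 2
TABLE = {inst: satisfiable(inst)
         for m in range(1, MAX_M + 1)
         for inst in combinations(ALL_CLAUSES, m)}  # combinations = sorted order

print(len(ALL_CLAUSES), len(TABLE))  # 8 distinct conjuncts at N=3; 36 instances
```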

    The tricky demand of P != NP is that we are asked to fix the finite size of our computing algorithm/engine/gate structure/rational oracle ahead of time, while the tape holding the 3-SAT problem can grow to unbounded length. So we figure that for any given size of computing algorithm, we can pick a tape length N large enough that after a polynomial number of steps, some fraction of the bits in the problem can be rearranged without affecting the current computational state of the compute engine. Why? Some kind of counting argument about symmetry.

    The point is that the algorithm can only consider so many branches, each based on so many bits, and can then reach only so many states based on those decisions. (Edit: the algorithm could read all the bits, if doing so resulted in a decision that eliminated the uncertainty in the other branches…)
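
    Loosely, that counting intuition is a pigeonhole statement: a fixed-size engine has only so many reachable configurations, while the number of distinct inputs grows like 2^n, so for large n many inputs must collapse onto the same configuration, and the bits where those inputs differ had no effect on where the engine ended up. A toy illustration (the k-bit folding rule below is an arbitrary stand-in, not a real machine model):

```python
from collections import defaultdict

def run_engine(bits, k=4):
    """Toy fixed-size engine: fold the input through a k-bit state.
    The update rule is arbitrary; the point is only that the state
    space has size 2**k regardless of the input length."""
    state = 0
    for b in bits:
        state = (state * 5 + 3 * b + 1) % (2 ** k)
    return state

n, k = 12, 4
buckets = defaultdict(list)
for x in range(2 ** n):
    buckets[run_engine([(x >> i) & 1 for i in range(n)], k)].append(x)

# 2**12 = 4096 inputs but at most 2**4 = 16 final states: each state is
# shared by ~256 inputs the engine provably cannot tell apart.
print(len(buckets), sorted(len(v) for v in buckets.values())[:3])
```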

    This argument I wanted to make is not quite right:
    [By counting, there are a lot of bits that didn’t affect what the algorithm did. Their identity is not spelled out, but they exist, and by construction they can take on all other syntactically valid values without changing where the algorithm is, so it has no purchase on whether they are what makes an assignment unsatisfiable.] The algorithm could, for example, crypto-hash the entire problem plus a random salt and make a decision based on the result. That wouldn’t help solve the problem, but it would invalidate the statement.
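
    The hashing counterexample is easy to make concrete; a sketch assuming SHA-256 (any hash that depends on every input bit would do):

```python
import hashlib, os

def hash_branch(problem_bytes):
    """Read every bit of the input (plus a random salt) and branch on
    the digest.  The computational state now depends on all the bits,
    so no bit is 'untouched' -- yet nothing about satisfiability has
    been learned."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + problem_bytes).digest()
    return digest[0] & 1  # an arbitrary decision based on the result

print(hash_branch(b"(x1 | ~x2 | x3) & (~x1 | x2 | x4)"))
```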

    It should be possible to describe the progress of any deterministic algorithm A for 3-SAT, on any given input tape T describing M conjuncts, in this way: after L computation steps on T, A has implicitly proven that K of the M conjuncts are satisfiable. Why? Because A has to get to M/M by the time it quits on the problems where the answer is “Yes”. The attempted efficient solution described above hints at how 3-SAT problems might be constructed so as to slow the growth of K. As the number of variables and conjuncts grows without bound while the algorithm box stays fixed, P != NP implies that K/M can’t grow fast enough in a polynomial algorithm. Reading all the bits (and hashing them for good measure!) is not enough to be assured of a correct K. It would be sufficient to prove that every poly-time algorithm making a fast-K claim for arbitrarily large problems is wrong, because there are defective possibilities that would have led to the same algorithmic behavior.
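
    One hedged way to picture the K-of-M progress measure, reusing the instance encoding from the first sketch: wrap a brute-force decider so that it reports, as it works, the largest number of conjuncts any assignment examined so far satisfies simultaneously. This is only a crude proxy for “implicitly proven,” but it shows the shape of the claim: a “Yes” requires driving K all the way to M.

```python
from itertools import product

def solve_with_progress(clauses, n):
    """Brute-force 3-SAT that logs K = the most conjuncts satisfied
    simultaneously by any assignment tried so far.  'Yes' means K
    reached M; the question is how fast K can be forced to grow
    per unit of work."""
    m, best_k = len(clauses), 0
    for step, bits in enumerate(product([False, True], repeat=n), 1):
        k = sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        if k > best_k:
            best_k = k
            print(f"step {step}: K = {best_k}/{m}")
        if best_k == m:
            return True
    return False

print(solve_with_progress([(1, 2, 3), (-1, -2, -3), (1, -2, -3)], n=3))
```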

    Edit: We can also say this: if there is a P time and space algorithm, then there is also a P time & space algorithm that keeps us up to date, as it works, on any tangible progress it has made in each conjunct regarding which subset of variable assignments might both be globally consistent and do the local job there. Each 3-literal disjunct starts out with 7 possible triples (of the 8 assignments to its three variables, only one falsifies it). If they are all eliminated in any conjunct, then the algorithm knows that satisfiability is impossible and can terminate with a “No”. In order to terminate correctly with a “Yes” on satisfiable problems, it has to somehow complete the work of checking that at least one good candidate remains valid in each conjunct. We can conceive of a process that doesn’t notice which of the 7 are correct, but somehow certifies that there is one success in some subset of the 7. The time it takes to make notes about progress in this way does not change the P question.
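
    A minimal sketch of that bookkeeping, again with conjuncts as triples of signed variable indices (and assuming each conjunct mentions three distinct variables): every conjunct keeps its set of locally satisfying triples, fixing a global variable crosses candidates off, and an empty set anywhere means “No”.

```python
from itertools import product

def candidate_sets(clauses):
    """For each 3-literal conjunct, the assignments of its own three
    variables that satisfy it: 8 total, 7 survive (only the setting
    that falsifies all three literals is eliminated)."""
    sets = []
    for c in clauses:
        ok = {t for t in product([False, True], repeat=3)
              if any(t[i] == (c[i] > 0) for i in range(3))}
        sets.append(([abs(l) for l in c], ok))
    return sets

def fix_variable(sets, var, value):
    """Fix one global variable and prune every conjunct's candidates.
    Returns False ('No') as soon as some conjunct has nothing left."""
    for vars_, ok in sets:
        if var in vars_:
            i = vars_.index(var)
            ok -= {t for t in ok if t[i] != value}
            if not ok:
                return False
    return True

sets = candidate_sets([(1, -2, 3), (-1, -2, -3)])
print([len(ok) for _, ok in sets])                               # [7, 7]
print(fix_variable(sets, 1, True), [len(ok) for _, ok in sets])  # True [4, 3]
```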

    The negative proof strategy is to argue that large-N 3-SAT instances can be arranged in ways such that progress is always too slow.

    Then the demon is free to play with those bits, which are still a constant fraction of N…

