
The 2-MAXSAT Problem Can Be Solved in Polynomial Time

61 points | 9NRtKyP4 | 2 years ago | arxiv.org

102 comments


tgflynn|2 years ago

I've spent a great deal of time over the last decade+ trying to find an efficient algorithm for NP-hard problems, and one thing I've learned is that this field is a graveyard for seemingly good ideas. Therefore I tend to be very skeptical of any paper that claims to solve such a problem but doesn't provide any empirical evidence that its solution works. I mean, I've had hundreds of ideas, and very few of them have survived without counterexamples appearing, even for very small instance sizes.

I first saw this paper yesterday when it was posted on HN but didn't make the front page, and I'm uncertain about how much energy to put into it (unfortunately my health gives me very little to spare). So far I've only read the introduction, and it doesn't obviously involve any advanced mathematics that I'm unfamiliar with, so I could perhaps attempt an implementation.

In addition to what I said above, another factor arguing against investing much time in this is that it was first published last year and nothing seems to have come of it since. Moreover, the current paper appears to be a revision of the one published last year. Does anyone know the history here? Were bugs found in the first version?

likpok|2 years ago

> So far I've only read the introduction and it doesn't obviously involve any advanced mathematics that I'm unfamiliar with so I could perhaps attempt an implementation.

This is a bad sign for the paper. The prior cases of solving big foundational problems that I'm aware of all introduced substantial new math, so much so that the actual problem is something of an afterthought. Wiles's proof of Fermat's Last Theorem is a case in point.

It's not impossible that this is legitimate, but... it seems unlikely that P=NP would have evaded proof for so long if the solution were straightforward. There are also a LOT of plausible-but-flawed proofs; it's not unsolved for lack of trying.

pfedak|2 years ago

Unsurprisingly, there's nothing here. Most of the paper describes a brute force search across all possible variable assignments in the form of a graph (with some pointless polynomial improvements like making it a trie), where you build a path of vertices representing each set of truth values that satisfies a given expression. This clearly has exponential size, which the author alludes to in the "improvements" section by noting it does "redundant work". This is addressed by collapsing the exponential graph down to have only one vertex for each variable*expression pair (if you ignore the trie) and adding exponentially many labels for the different paths to reach a given vertex. (incidentally, to the extent that it's described clearly, it seems like the improved layered graph would basically be the same as the original non-layered graph)

The final complexity discussion uses the graph size constraint gained by the "improvement" but doesn't consider how to handle the extra labeling meaningfully. Basically, the pre- and post-improvement algorithms put the exponential work in different spots, and the sloppiness of the algorithm description (I mean, really, why tell us you're using a stack for BFS and then have "determine the subset of satisfied constraints" as a step) makes it easy to ignore.

I'm also being a little generous with the algorithm itself. As described, some of the trie optimizations seem to make certain combinations of satisfied expressions impossible to notice, but I think it's not a big deal to make this part work. The properties of the trie structure (and of sorting the variables by occurrence, for that matter) don't seem to be used.
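To make the exponential baseline pfedak describes concrete, here is a minimal brute-force 2-MAXSAT reference solver. The clause encoding (DIMACS-style signed integers) is my own choice; this is a ground-truth baseline for testing, not the paper's algorithm:

```python
from itertools import product

def max_sat_2(num_vars, clauses):
    """Brute-force 2-MAXSAT: try all 2**num_vars assignments and return
    the maximum number of satisfied clauses. Clauses are tuples of
    nonzero ints: literal v means "variable v is true", -v means false."""
    best = 0
    for bits in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 v x2), (~x1 v x2), (x1 v ~x2), (~x1 v ~x2): every assignment
# satisfies exactly 3 of the 4 clauses, so the optimum is 3.
print(max_sat_2(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # 3
```

Any claimed polynomial-time algorithm has to agree with this on every small instance, which is exactly where ideas in this field tend to die.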

BrejlaCZ|2 years ago

I haven't really understood how the proposed simplification is supposed to work... It's clearly exponential without it, but I'm not sure about the supposed added labels. What would those be, conjunction and group subsets? Does that make them exponential?

The proofs and even explanations in this paper aren't very clear to me...

tgflynn|2 years ago

I'm stuck in section III.A, "Main Idea", where he says:

> Doing this enables us to represent each Di as a variable sequence, but with all the negative literals being removed. See Table I for illustration.

What justification does he have for throwing out negated variables? If you do that, the problem likely becomes trivial, but it has nothing to do with the original problem.

mabbo|2 years ago

I have a hard time taking seriously anyone who publishes a paper like this truly believing they've solved P=NP.

Remember the "faster than light neutrinos" thing? They published their stuff basically saying "Okay, we're really uncomfortable with this result so can someone explain what we've missed here?".

Papers like this always feel like the author is surprised anyone is even doubting them. "What, it's just the hardest problem in your field, worthy of a Millennium prize, and I'm publishing it in some lesser-known journals with little peer review. What of it?"

Come on. Have some modesty. Have some self-doubt! Reach out to Terry Tao and you know he'll happily explain your mistake and then help you write a better paper.

Mizza|2 years ago

All of this, but please don't bother Terry with crackpot P=NP ideas.

rlili|2 years ago

For me, the following bit seems particularly prone to being invalidated, all the more so given that the author doesn't present any proof of it:

> Here, we notice that if we had one more span, <v3 , v4 , v5 >, for example, it would be connected to <v1 , v2 , v3 >, but not overlapped with <v1 , v2 , v3 >. Being aware of this difference is important since the overlapped spans imply the consecutive ‘’s, just like <v1, v1, v2> and <v1, v2, v3>, which correspond to two consecutive ‘’s: (c2 , ) and (c3 , ). Therefore, the overlapped spans exhibit some kind of transitivity. That is, if s1 and s2 are two overlapped spans, the s1 ∪ s2 must be a new, but bigger span. Applying this operation to all the spans over a p-path, we will get a ’transitive closure’ of overlapped spans.

dvdkhlng|2 years ago

I already have problems following Proposition 1, where they introduce a different DNF with twice the clauses of the original CNF, then claim that maximizing the satisfied clauses in the DNF would also yield a maximum satisfying solution for the CNF.

It's a pity they don't release any source code that we could unit-test against problems with known solutions (while also profiling run time to check it stays within polynomial bounds) :)

edit: what am I missing? Their reduction to a DNF seems unsuitable to me.

I read Proposition 1 [1] to mean: they find an optimal solution for the DNF V∪X, then expect the corresponding CNF C to have at least as many satisfied clauses. However, isn't that insufficient? There could be an even better solution with more satisfied clauses in C that is missed entirely when only looking for solutions to V∪X. But maybe I'm just misunderstanding something.

[1] https://arxiv.org/pdf/2304.12517.pdf
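In the spirit of dvdkhlng's wish for unit tests, a sketch of the differential-testing harness one could run against any implementation. Here `candidate_solver` is a hypothetical stand-in for an implementation of the paper's algorithm, and the clause encoding (signed-integer literals) is assumed:

```python
import random
from itertools import product

def brute_force_maxsat(num_vars, clauses):
    """Ground truth: exhaustive search over all 2**num_vars assignments."""
    return max(
        sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

def fuzz(candidate_solver, trials=1000, num_vars=8, num_clauses=20):
    """Compare a claimed 2-MAXSAT solver against brute force on random
    small 2-clause instances; return the first counterexample, or None."""
    rng = random.Random(0)
    for _ in range(trials):
        clauses = [
            tuple(rng.choice([v, -v])
                  for v in rng.sample(range(1, num_vars + 1), 2))
            for _ in range(num_clauses)
        ]
        if candidate_solver(num_vars, clauses) != brute_force_maxsat(num_vars, clauses):
            return clauses
    return None

# Sanity check: brute force against itself finds no counterexample.
print(fuzz(brute_force_maxsat, trials=5) is None)  # True
```

As tgflynn notes above, counterexamples to ideas like this tend to appear at very small instance sizes, so even a harness this crude would be informative.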

sp332|2 years ago

If you put a \ in front of your * you can get the * to show up instead of italicizing. *Example*

Ono-Sendai|2 years ago

Since the paper claims to give an algorithm for solving an NP-hard problem (i.e., it's a constructive proof, not just an existence proof), the author should presumably be able to reduce some other problems to this one (2-MAXSAT) and solve them as a demonstration.
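One concrete way to run such a demonstration is the textbook edge-gadget reduction from MAX-CUT to 2-MAXSAT. A sketch (the signed-integer literal encoding and the brute-force verifier are my own choices, for illustration only):

```python
from itertools import product

def maxcut_to_2maxsat(edges):
    """For each edge (u, v) emit clauses (u v v) and (~u v ~v).
    A cut edge satisfies both clauses, an uncut edge exactly one, so
    max-sat value = |E| + max-cut value."""
    clauses = []
    for u, v in edges:
        clauses.append((u, v))
        clauses.append((-u, -v))
    return clauses

def brute_force_maxsat(num_vars, clauses):
    """Exhaustive ground truth for checking any claimed solver."""
    return max(
        sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

# Triangle on vertices 1, 2, 3: max cut = 2, so max-sat = 3 + 2 = 5.
edges = [(1, 2), (2, 3), (1, 3)]
print(brute_force_maxsat(3, maxcut_to_2maxsat(edges)))  # 5
```

Feeding large random graphs through this reduction and into the paper's algorithm would be a cheap way to stress-test the P=NP claim.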

tromp|2 years ago

I'd be more inclined to believe their P=NP claim if they used it to grab all of Satoshi's bitcoins. That should be the first order of business for anyone finding an efficient algorithm for the ECDLP (Elliptic Curve Discrete Log Problem), as implied by their constructive proof.

scythe|2 years ago

Such a reduction is known to exist, but that doesn't mean it's necessarily easy to implement. It would suffice to find some large instances of 2-MAXSAT and solve those.

chriskanan|2 years ago

Usually when I see these things, I check whether the author has the right background. Yangjun Chen is a full professor at the University of Manitoba, but he doesn't seem to work in theoretical computer science.

It looks like the paper was previously published in a relatively low impact AI conference last year. It seems like it should be in FOCS, STOC, or a prestigious math journal to have significant credibility.

pclmulqdq|2 years ago

Many of those journals don't take P=NP proofs because they often come from quacks. Also, I wouldn't be surprised to see the start of a full proof coming from a non-theorist. Theorists have failed to make headway on P=NP for decades, so it would be reasonable to assume that some insight from a different place would be required.

js8|2 years ago

Well, this is somewhat heartbreaking for me. I haven't read the paper, but the result sounds very plausible to me.

I am also an amateur working on P=NP. Last week, I think I also proved that P=NP, but with a different method, and was about to seek publication.

My result seems very similar to his, yet very different. I can prove that the class of SAT instances which is the intersection of 2SAT and XORSAT is NP-complete by reduction to 3-SAT. Then I follow the approach of Krom's 1967 paper on 2-SAT and prove that a certain polynomial-sized logic (which corresponds to the intersection) is refutation complete. So you can essentially generate all formulas in that logic, and if you don't find a contradiction, the instance is satisfiable.
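For readers unfamiliar with Krom's method: plain 2-SAT can be decided by computing the resolution closure, because resolving two 2-clauses yields another clause of at most two literals, so the closure stays polynomial in size. A minimal sketch of that polynomial-time part (my own illustration of the classical technique, not js8's construction):

```python
def two_sat_satisfiable(clauses):
    """Decide 2-SAT by resolution closure. Literals are nonzero ints;
    -v negates v. Only clauses of <= 2 literals ever arise, so the
    closure has polynomially many clauses and the loop terminates.
    The formula is unsatisfiable iff the empty clause is derivable."""
    closure = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in closure:
            for c2 in closure:
                for lit in c1:
                    if -lit in c2:
                        resolvent = (c1 - {lit}) | (c2 - {-lit})
                        if not resolvent:
                            return False  # empty clause: contradiction
                        if not any(-l in resolvent for l in resolvent):
                            new.add(resolvent)  # skip tautologies
        if new <= closure:
            return True  # fixpoint with no contradiction
        closure |= new

# Satisfiable: x1 = x2 = True works.
print(two_sat_satisfiable([(1, 2), (-1, 2), (1, -2)]))  # True
# Unsatisfiable: unit clauses (x1) and (~x1).
print(two_sat_satisfiable([(1,), (-1,)]))  # False
```

Note this decides satisfiability only; the hard part of any P=NP claim is that maximizing satisfied 2-clauses does not reduce to this.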

I also did some preliminary testing of my method and was able to factor small integers with it. However, there was a bug.

So, to sum up, I am not surprised that P=NP with a constructive and efficient algorithm. Take it for what you want. The future is gonna be interesting (crypto DOOMSDAY).

FartyMcFarter|2 years ago

I don't understand. You said that you thought you proved P=NP, but then it turned out you hadn't.

How does this help to support your belief that P=NP has been solved by someone else? Surely it wouldn't surprise you if it turns out they were as wrong as you were before?

PS: Also, reducing 2SAT to 3SAT doesn't help prove that P=NP. The opposite reduction would, if you were able to do it in polynomial time. But maybe I misunderstood something about what you attempted.

theGnuMe|2 years ago

What's your take on survey propagation? I would think that any P=NP proof would take that into account or be similar.

MPSimmons|2 years ago

>By the MAXSAT problem, we are given a set V of m variables and a collection C of n clauses over V. We will seek a truth assignment to maximize the number of satisfied clauses. This problem is NP-hard even for its restricted version, the 2-MAXSAT problem by which every clause contains at most 2 literals. In this paper, we discuss a polynomial time algorithm to solve this problem. Its time complexity is bounded by O(n²m³). Hence, we provide a proof of P = NP.

Talk about burying the lede.

jojohohanon|2 years ago

The first step is to encode some decent password-cracking challenge as an NP-complete problem: choose 3-SAT or something similarly well understood. Then encode that 3-SAT problem into the paper's solution domain.

If you get a password in polynomial time you are golden.

e79|2 years ago

The contest approach wouldn't necessarily be rigorous, because it doesn't formally prove that all 2-MAXSAT problems can be solved by this algorithm, just that one or more cherry-picked instances can. I think the paper really just needs to present actual proofs for the propositions it makes (as others have pointed out).

haliskerbas|2 years ago

One of you is going to turn this into a puzzle and ask me in my next interview. I won't know the answer, but please don't laugh at me when you realize!

geophile|2 years ago

P=NP alert

YeGoblynQueenne|2 years ago

Donald Knuth thinks P=NP, but even if we find a proof it won't be as useful as we think:

https://youtu.be/XDTOs8MgQfg

In any case, that someone says they proved P=NP is not such a big red flag as people seem to think.

pclmulqdq|2 years ago

I could believe that an O(n^5) algorithm, like the one presented here, could establish P==NP while still maintaining the status quo, where effectively P!=NP for all practical problems. That is honestly what matters most of the time.

Still, there's almost certainly a mistake in this paper.

comfypotato|2 years ago

Which means something’s probably wrong with the proof, right?

marcodiego|2 years ago

"Hence, we provide a proof of P = NP."

Now map a big, critical, and difficult problem to the 2-MAXSAT problem, solve it in polynomial time, get rich, and then come back to argue about whether it is really a proof of P = NP.

local_crmdgeon|2 years ago

Polynomial time can still be truly enormous

hota_mazi|2 years ago

> Hence, we provide a proof of P = NP.

Is this serious?