top | item 39185953

nkh | 2 years ago

Why would it need to replace them? From the article, they claim they have found a way to reduce the upper bound faster when searching large integer problems. I don't see how that affects the current searching process. In all of these solvers you can enter an upper bound yourself if you have knowledge of the problem and know a previous solution. So if this is just a programmatic way of reducing the upper bound, it should fit right in with current approaches. What am I missing?


whatyesaid|2 years ago

It's a research paper. You can write a theoretical paper and let others apply it practically: others can figure out the practical aspects and report benchmark results, or build further on the theory.

This paper only has 2 authors. The other solvers are probably applying technique-specific tricks and speedups, and you're working with approximate optimization; it's not that easy to move everything over.

hdesh|2 years ago

> This paper only has 2 authors.

So? I don't get the relevance of the author count.

unnah|2 years ago

I don't think they're talking about a bound for the optimum objective value, but a theoretical upper bound for a covering radius related to a convex body and a lattice. The bound would be useful in a lattice-based algorithm for integer linear programming. I don't think there exists an implementation of a lattice algorithm that is practical for non-toy integer linear programming problems, let alone one that is competitive with commercial ILP solvers.

math_dandy|2 years ago

Every time an integer-feasible point is found during the iterative process these algorithms use (branch and bound), you get a new upper bound on the global minimum. It's not clear to me how these dynamically generated upper bounds, highly specific to the particular problem, relate to the upper bounds of a more general nature that R&R produce.
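To make the incumbent-as-upper-bound idea concrete, here is a minimal branch-and-bound sketch for a made-up min-cost covering problem (pick a cheapest subset of items whose values reach a target). The problem, numbers, and function names are all hypothetical illustrations, not anything from the paper or from a real solver; the point is just that every integer-feasible point tightens the upper bound, and that a user-supplied bound (as mentioned upthread) can seed the search:

```python
def branch_and_bound(costs, values, target, incumbent=float("inf")):
    """Minimize total cost of a subset whose total value reaches `target`.

    `incumbent` is the best known upper bound on the optimum. It can be
    seeded from a previously known solution, and every integer-feasible
    point found during the search replaces it with a tighter bound.
    """
    n = len(costs)
    best = incumbent

    def recurse(i, cost, value):
        nonlocal best
        if cost >= best:      # prune: this branch cannot beat the upper bound
            return
        if value >= target:   # integer-feasible point -> new upper bound
            best = cost
            return
        if i == n:            # exhausted all items without reaching the target
            return
        recurse(i + 1, cost + costs[i], value + values[i])  # take item i
        recurse(i + 1, cost, value)                         # skip item i

    recurse(0, 0, 0)
    return best

# Toy instance: the optimum takes items 0 and 2 (cost 4 + 2 = 6, value 3 + 2 = 5).
print(branch_and_bound([4, 3, 2, 5], [3, 2, 2, 4], 5))               # -> 6
print(branch_and_bound([4, 3, 2, 5], [3, 2, 2, 4], 5, incumbent=10))  # seeded bound, same optimum
```

A tighter seed prunes more of the tree up front, which is exactly why solvers let you pass in a known solution's objective as a starting bound.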

nkh|2 years ago

> upper bounds of a more general nature that R&R produce

If it's an upper bound, it should be pretty easy to plug into the existing machinery under the hood in these solvers. Can you provide any insight into how the R&R "upper bound" is different and "more general in nature"?

luca3v|2 years ago

They prove a new upper bound to a combinatorial quantity that controls the worst-case running time of an algorithm of Dadush, not an upper bound to the optimal value of a given ILP instance.

If they wanted to see their ideas work in practice, they could implement Dadush's algorithm in light of these new bounds, but this would be unlikely to outperform something like CPLEX or Gurobi with all their heuristics and engineering optimizations developed over decades.

Otherwise, and this is the sense of the quoted sentence, they could go deep into the bowels of CPLEX or Gurobi to see if their ideas could yield some new speed-up on top of all the existing tricks, but this is not something that makes sense for the authors to do, though maybe someone else should.

raverbashing|2 years ago

Honestly?

The search for the 'exactly optimal solution' is way overrated

I think you can get a moderately efficient solution using heuristics at 1/10 of the time or less

Not to mention developer time and the effort of figuring out which constraints make your problem infeasible, especially as the constraints get more complicated because you want to make everything linear.
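As a rough illustration of the trade-off, here is a greedy heuristic for the same kind of toy min-cost covering problem (made-up numbers, hypothetical function name): it runs in a single sorted pass rather than searching a tree, and on this instance lands at cost 7 versus the true optimum of 6:

```python
def greedy_cover(costs, values, target):
    """Greedy heuristic: repeatedly pick the item with the best
    cost-per-value ratio until the target value is reached."""
    items = sorted(range(len(costs)), key=lambda i: costs[i] / values[i])
    total_cost = total_value = 0
    for i in items:
        if total_value >= target:
            break
        total_cost += costs[i]
        total_value += values[i]
    return total_cost if total_value >= target else float("inf")

# Same toy instance as above: greedy finds cost 7, while the optimum is 6.
print(greedy_cover([4, 3, 2, 5], [3, 2, 2, 4], 5))  # -> 7
```

Whether being ~15% off the optimum is acceptable is exactly the judgment call being debated here.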

yawgmoth|2 years ago

I agree, especially when considering that a model is also not reality.

However, what folks often do is find a solution to the linear (LP) relaxation quickly, then optimize toward the integer solution; the gap between the two gives you a criterion for choosing when to terminate.
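A minimal sketch of that gap-based termination, again on a hypothetical min-cost covering instance (all numbers and names are illustrative): the LP relaxation lets items be taken fractionally, so a greedy pass by cost/value ratio gives its optimum, which serves as a lower bound against the best integer solution found so far.

```python
def lp_relaxation_bound(costs, values, target):
    """Lower bound from the LP relaxation of a min-cost covering problem:
    minimize sum(c_i * x_i) s.t. sum(v_i * x_i) >= target, 0 <= x_i <= 1.
    Items may be taken fractionally, so take them greedily in order of
    cost/value ratio, with a fraction of the last one."""
    items = sorted(range(len(costs)), key=lambda i: costs[i] / values[i])
    cost = value = 0.0
    for i in items:
        if value >= target:
            break
        take = min(1.0, (target - value) / values[i])
        cost += take * costs[i]
        value += take * values[i]
    return cost

# Hypothetical numbers: best integer solution found so far is the upper bound.
incumbent = 6.0
lb = lp_relaxation_bound([4, 3, 2, 5], [3, 2, 2, 4], 5)  # -> 5.75
gap = (incumbent - lb) / incumbent  # relative optimality gap, ~4.2%
# A solver would terminate early once gap <= some tolerance, e.g. 0.05.
```

This is the same mechanism behind the "MIP gap" tolerance that commercial solvers expose as a stopping criterion.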

7thaccount|2 years ago

The vast majority of the United States power grid (many thousands of power plants) is optimized in auctions every hour for the next day and every 5 minutes on the operating day. Finding the globally optimal solution is pretty important both for fairness and for not wasting billions of dollars each year. I'd agree with you for a lot of problems, but keep in mind there are plenty where they need full optimality, or to be within a tiny percentage of it.