top | item 30644284

cleancoder0 | 4 years ago

LKH does support a lot of the things mentioned, but for practical usage it does not work well. It's nice to leave it running and see what can be accomplished, but asking it to give you back something in 1 second, with a lot of constraints, gives back solutions that are not feasible.

Even in the basic TSP there is a lot of structure to exploit.

For example, the reason why the minimum spanning tree works is because the algorithm makes use of the relationships between vertices. Similar techniques use alpha-nearness, Steiner trees, and direct modifications of the distance matrix to create relaxations of the TSP and improve the performance of local search (I believe most are implemented in LKH).
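To make the MST connection concrete, here is a minimal sketch of the classic 1-tree lower bound, which is the starting point for alpha-nearness. All names are illustrative; this assumes a symmetric distance matrix and omits the Lagrangian refinement LKH actually uses.

```python
def prim_mst_cost(dist, nodes):
    """Cost of a minimum spanning tree over `nodes` (Prim's algorithm)."""
    nodes = list(nodes)
    in_tree = {nodes[0]}
    best = {v: dist[nodes[0]][v] for v in nodes[1:]}
    cost = 0.0
    while best:
        v = min(best, key=best.get)      # cheapest vertex to attach next
        cost += best.pop(v)
        in_tree.add(v)
        for u in best:                   # relax edges from the new vertex
            if dist[v][u] < best[u]:
                best[u] = dist[v][u]
    return cost

def one_tree_lower_bound(dist):
    """MST over nodes 1..n-1 plus the two cheapest edges from node 0.
    Every tour is a 1-tree, so this is a valid lower bound on tour length."""
    n = len(dist)
    mst = prim_mst_cost(dist, range(1, n))
    two_cheapest = sorted(dist[0][v] for v in range(1, n))[:2]
    return mst + sum(two_cheapest)
```

On a unit square the bound is tight: the MST over three corners plus the two unit edges at the fourth corner already equals the optimal tour length of 4.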

I am obviously not expecting NNs to be capable of doing something like that currently but I'm hoping they might be able to discover interesting instance patterns for something more constrained.

yobbo | 4 years ago

> asking it to give you back something in 1 second, with a lot of constraints, gives back solutions that are not feasible.

Try to limit the search to only feasible solutions.

> the algorithm makes use of the relationship between vertices

But these do not stay the same between problem instances; anything you learn from solving one problem is not helpful when solving the next problem.

cleancoder0 | 4 years ago

> But these do not stay the same between problem instances; anything you learn from solving one problem is not helpful when solving the next problem.

But nothing in ML stays the same between instances. The reason ML works is that there are redundancies in the training set. I am pretty sure that, distribution-wise, the set of TSP instances still has a lot of redundancy.

You would want your model to learn to execute something like MST, or to approximate alpha-nearness, or to remap the instance into a relaxation that, when solved by a simpler algorithm, yields a solution that, when mapped back to the original instance, is feasible and optimal.
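The "solve a relaxation, then map back to a feasible tour" pattern already exists in classical form: the double-tree heuristic solves the MST relaxation and recovers a tour from a preorder walk. A minimal sketch (reusing Prim's algorithm; with the triangle inequality the tour is at most 2x optimal):

```python
def mst_adjacency(dist):
    """Prim's algorithm; returns the MST as an adjacency list."""
    n = len(dist)
    adj = {v: [] for v in range(n)}
    best = {v: (dist[0][v], 0) for v in range(1, n)}  # (cost, parent)
    while best:
        v = min(best, key=lambda u: best[u][0])
        _, parent = best.pop(v)
        adj[parent].append(v)
        adj[v].append(parent)
        for u in best:
            if dist[v][u] < best[u][0]:
                best[u] = (dist[v][u], v)
    return adj

def double_tree_tour(dist):
    """Preorder walk of the MST, skipping repeats, gives a feasible tour."""
    adj = mst_adjacency(dist)
    tour, seen, stack = [], set(), [0]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        tour.append(v)
        stack.extend(reversed(adj[v]))   # visit children in MST order
    return tour
```

A learned model playing this role would replace the fixed MST relaxation with one tuned to the instance distribution, which is exactly the kind of redundancy exploitation described above.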