cleancoder0 | 4 years ago
In the basic TSP there is a lot of structure in the data.
For example, the reason the minimum spanning tree bound works is that the algorithm makes use of the relationship between vertices. Similar techniques use alpha-nearness, Steiner trees, and direct modifications of the distance matrix to create relaxations of the TSP and improve the performance of local search (I believe most of these are implemented in LKH).
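As a concrete illustration of such a relaxation: deleting any one edge from a tour leaves a spanning tree, so the MST weight lower-bounds the optimal tour length. A minimal sketch in Python (the 5-city Euclidean instance is invented for illustration):

```python
import math

# Hypothetical 5-city Euclidean instance (coordinates invented for illustration).
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_weight(points):
    """Total weight of a minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n  # cheapest known edge connecting each vertex to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

# Removing one edge from any tour yields a spanning tree, so
# mst_weight(cities) <= length of the optimal tour.
print(mst_weight(cities))
```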
I am obviously not expecting NNs to be capable of something like that currently, but I'm hoping they might be able to discover interesting instance patterns for something more constrained.
yobbo | 4 years ago
Try to limit the search to only feasible solutions.
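For the TSP, local search moves like 2-opt have exactly this property: every intermediate state is itself a feasible tour. A minimal sketch of the simplest such move (not LKH's k-opt machinery; names are my own):

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, tour):
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Reverse tour segments while doing so shortens the tour.
    Every intermediate state is a feasible tour, so the search never
    leaves the feasible region."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip edge pairs that share a vertex (reversing them is a no-op).
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b),(c,d) with (a,c),(b,d) if that is shorter.
                if (dist(points[a], points[c]) + dist(points[b], points[d])
                        < dist(points[a], points[b]) + dist(points[c], points[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
tour = two_opt(pts, list(range(30)))
```

The result is always a permutation of the cities, by construction; infeasible states are never even representable.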
> the algorithm makes use of the relationship between vertices
But these do not stay the same between problem instances; anything you learn from solving one problem is not helpful when solving the next.
cleancoder0 | 4 years ago
But nothing in ML stays the same between instances. The reason ML works is that there are redundancies in the training set. I am pretty sure that, distribution-wise, the set of TSP instances still has a lot of redundancies.
You would want your model to learn to execute something like MST, to approximate alpha-nearness, or to remap the instance into a relaxation that, when solved by a simpler algorithm, yields a solution that is feasible and optimal once mapped back to the original.
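For concreteness, the alpha-nearness values used in LKH are derived from the Held-Karp 1-tree relaxation: an MST over all vertices but one, plus that vertex's two cheapest edges. A minimal sketch of the bound itself (coordinates invented; helper names are my own, and real LKH additionally ascends on vertex penalties):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_weight(points):
    """Total MST weight via Prim's algorithm."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

def one_tree_bound(points):
    """Held-Karp 1-tree lower bound: a tour restricted to vertices
    1..n-1 is a spanning tree of them, and the tour uses exactly two
    edges at vertex 0, so MST(rest) + two cheapest spokes <= optimum."""
    rest = points[1:]
    spokes = sorted(dist(points[0], p) for p in rest)
    return mst_weight(rest) + spokes[0] + spokes[1]

# Hypothetical instance; the 1-tree bound is at least the plain MST bound.
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]
print(one_tree_bound(cities))
```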