cleancoder0 | 4 years ago

> But these do not stay the same between problem instances; anything you learn from solving one problem is not helpful when solving the next problem.

But nothing in ML stays the same between instances. The reason ML works is that there are redundancies in the training set, and I am pretty sure that, distribution-wise, the set of TSP instances still has a lot of redundancies.

You would want your model to learn to execute something like an MST computation, to approximate alpha-nearness, or to remap the instance into a relaxation that, when solved by a simpler algorithm, yields a solution that maps back to a feasible and optimal solution of the original instance.
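To make the MST idea concrete: the classic MST-based heuristic (a 2-approximation for metric TSP) builds a minimum spanning tree and then takes a preorder walk of it as the tour. A minimal self-contained sketch in plain Python (function names are illustrative, not from any library):

```python
import math

def mst_tour(points):
    """2-approximation for metric TSP: build an MST with
    Prim's algorithm, then take a preorder walk of the tree."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # Prim's algorithm: grow the MST outward from vertex 0.
    parent = {0: None}
    best = {v: (dist(0, v), 0) for v in range(1, n)}  # v -> (cost, attach point)
    while best:
        v = min(best, key=lambda u: best[u][0])
        _, p = best.pop(v)
        parent[v] = p
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)

    # Preorder (depth-first) walk of the MST gives the tour order.
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

The point of the comment is that a learned model would ideally internalize exactly this kind of reusable subroutine, rather than memorizing per-instance solutions.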
