(no title)
cleancoder0 | 4 years ago
There are a lot of companies that want to provide an Uber/Lyft-like service for their own product. So you have a bunch of smaller routing problems that you want to solve as well as possible in ~1 second.
A lot of small companies with delivery fleets want to optimize them: pest control, Christmas tree delivery, cleaning, technical service, construction (coordinating teams that build multiple things at multiple locations at the same time), etc.
On the other hand, and not related to TSP, the whole energy market in the US is very amenable to LP/ILP optimization and has a lot of customers (charging home batteries and car batteries when prices are low, discharging when prices are high, etc.).
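To make that concrete, here is roughly the kind of LP I mean for a home battery doing price arbitrage. It's a minimal sketch: the prices, capacity, power limit and efficiency are made up, and a real product would add load forecasts, solar, degradation costs and so on.

    import numpy as np
    from scipy.optimize import linprog

    prices = np.array([0.08, 0.06, 0.05, 0.07, 0.12, 0.20, 0.25, 0.18])  # $/kWh, hypothetical day-ahead prices
    T = len(prices)
    cap, rate, eta = 10.0, 3.0, 0.9          # kWh capacity, per-hour power limit, charging efficiency

    # Variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
    # Objective: minimize sum(price * charge) - sum(price * discharge)
    c = np.concatenate([prices, -prices])

    # State of charge after hour t: soc_t = sum_{s<=t} (eta * charge_s - discharge_s)
    # Keep 0 <= soc_t <= cap for every hour via two blocks of inequalities.
    L = np.tril(np.ones((T, T)))             # cumulative-sum matrix
    A_soc = np.hstack([eta * L, -L])         # soc = A_soc @ x
    A_ub = np.vstack([A_soc, -A_soc])
    b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])

    bounds = [(0, rate)] * (2 * T)           # per-hour charge/discharge limits

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    charge, discharge = res.x[:T], res.x[T:]
    print("profit:", round(-res.fun, 3))
    print("charge:   ", np.round(charge, 2))
    print("discharge:", np.round(discharge, 2))

Variants of the same structure cover EV charging, demand response, thermal storage, etc., and a solver cracks problems like this in milliseconds.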
I'll admit that the scientific field of discrete optimization is littered with genetic algorithms, ant colonies and other "no free lunch" optimization algorithms that make very little sense from a progress perspective, so it does feel like the golden era was from the 70s to the early 90s. I do not have a PhD but somehow ended up doing machine learning and discrete optimization for most of my career.
feanaro | 4 years ago
cleancoder0 | 4 years ago
If you look at the roots of discrete optimization, i.e. all of the approaches used in a solver like Concorde (developed in the open), there is nowhere near the same amount of development and paradigm shifting happening in ant colonies, genetic algorithms, genetic programming, tabu search, simulated annealing and the like.
E.g., finding an efficient representation of time windows + pickup-and-delivery + breaks + flexible start times that lets you efficiently update a solution and tell whether it is still feasible after the change, and what the new cost is, is more progress than tweaking some recombination patterns in your genetic algorithm so that it improves on the instance set you are optimizing for (basically overfitting to the data).
Here's an example of a paper that lists various update/feasibility/cost diff algorithms and their time complexity for a bunch of subproblems on a list-of-location-ids representation. Any genetic algorithm that wants to be fast will need to deal with that too.
https://www.researchgate.net/profile/Thibaut-Vidal/publicati...
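For the time-window part specifically, the trick is roughly this (my paraphrase of the concatenation formulas from that line of work, so treat the details as approximate): store, for every route segment, its duration, its accumulated time-window violation, and its earliest/latest feasible start. Then any two segments combine in O(1), so a local-search move (relocate, swap, 2-opt*, ...) can be evaluated from precomputed prefix/suffix segments instead of re-simulating the whole route.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        D: float   # total duration (travel + service + waiting)
        TW: float  # accumulated time-window violation ("time warp")
        E: float   # earliest start of the segment achieving duration D
        L: float   # latest such start

    def node_segment(service: float, tw_open: float, tw_close: float) -> Segment:
        return Segment(D=service, TW=0.0, E=tw_open, L=tw_close)

    def concat(s1: Segment, s2: Segment, travel: float) -> Segment:
        """Concatenate two segments separated by `travel` time units."""
        delta = s1.D - s1.TW + travel
        wait = max(s2.E - delta - s1.L, 0.0)    # forced waiting before s2
        warp = max(s1.E + delta - s2.L, 0.0)    # violation if we arrive too late
        return Segment(
            D=s1.D + s2.D + travel + wait,
            TW=s1.TW + s2.TW + warp,
            E=max(s2.E - delta, s1.E) - wait,
            L=min(s2.L - delta, s1.L) + warp,
        )

    # Tiny made-up example: visit a (window [0, 10]) then b (window [20, 30]),
    # 5 time units of travel apart, zero service times.
    a, b = node_segment(0, 0, 10), node_segment(0, 20, 30)
    route = concat(a, b, travel=5)
    print(route)   # feasible iff route.TW == 0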
That's why I think graph NNs might let us find a way to remap our simple representation to something more efficient: something that is easier to work with and to which local changes can be applied without much effort.
For example, what if you could transform a TSP instance with time windows, by adding new vertices or changing the distance matrix, so that the time windows are eliminated entirely while you can still apply very efficient local changes that bring you close to the optimum fast (and do the same for pickup-and-delivery, flexible start times, etc.)? Similarly, an integer linear programming solver is used, but the way the constraints of your instance are defined is hard to work with, and there is a local pattern you are not seeing that would allow a simplification.
There have been attempts to learn the exploration strategies of ILP solvers with ML, but none of them made leaps forward (unlike AlphaFold, AlphaZero, AlphaGo, or even AlphaCode, the competitive-programming code generator). The biggest reason for that is that the current principled algorithms (30 to 60 years old) are insanely good on the fundamental problems.
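For context, the "exploration strategy" is the part of branch-and-bound that decides which node to expand next and which fractional variable to branch on. Here is a toy sketch (made-up knapsack instance, naive best-bound node selection plus most-fractional branching) just to show where a learned policy would plug in; real solvers like SCIP, CPLEX and Gurobi add presolve, cutting planes, primal heuristics and much more.

    import heapq
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([8.0, 11.0, 6.0, 4.0])        # maximize c @ x
    A = np.array([[5.0, 7.0, 4.0, 3.0]])       # subject to A @ x <= b, x in {0, 1}
    b = np.array([14.0])

    best_val, best_x = -np.inf, None
    root_bounds = [(0.0, 1.0)] * len(c)
    heap = [(0.0, 0, root_bounds)]             # (negated LP bound of parent, tiebreak, bounds)
    counter = 1

    while heap:
        _, _, bounds = heapq.heappop(heap)
        res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:
            continue                           # LP relaxation infeasible: prune
        lp_val, x = -res.fun, res.x
        if lp_val <= best_val + 1e-9:
            continue                           # bound can't beat incumbent: prune
        frac = np.abs(x - np.round(x))
        if frac.max() < 1e-6:
            best_val, best_x = lp_val, np.round(x)   # integral: new incumbent
            continue
        # --- the "exploration strategy": most-fractional branching rule ---
        i = int(np.argmax(frac))
        lo, hi = bounds[i]
        for child_bound in [(lo, np.floor(x[i])), (np.ceil(x[i]), hi)]:
            child = list(bounds)
            child[i] = child_bound
            heapq.heappush(heap, (-lp_val, counter, child))
            counter += 1

    print("optimum:", best_val, "x =", best_x)

Swapping the most-fractional rule for something smarter (pseudo-cost, strong branching, or a learned policy) is exactly the kind of change those ML papers target, and the baseline rules are already very hard to beat.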
I remember reading about a new set of constraints in nurse rostering (nurse scheduling); once researchers applied the principled methods, all of the instances of interest got solved to proven optimality. The number of genetic algorithms, ant colonies and who knows what else that had been applied to those instances in the meantime was ridiculous and unnecessary.