top | item 37435201

sfcarrot | 2 years ago

- The general idea of providing a guiding prompt + scoring to get better objective values is interesting. Though I doubt how this scales, since it requires a lot of guiding/customization for different/larger problems, but I'd love to think further on it.
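
The guiding-prompt + scoring loop can be sketched roughly as follows. This is a toy, not the paper's method: `propose` is a hypothetical stand-in for the LLM call (here just a random perturbation of the best solution), and the objective is a one-dimensional toy function.

```python
import random

def score(x):
    # toy objective to maximize: -(x - 3)^2, optimum at x = 3
    return -(x - 3.0) ** 2

def build_meta_prompt(history):
    # in the real method, this text (plus a task description) would be
    # sent to the LLM so it can condition on previously scored solutions
    lines = [f"solution: {x:.3f}, score: {s:.3f}" for x, s in history]
    return "Propose a better solution.\n" + "\n".join(lines)

def propose(history, rng):
    # hypothetical stand-in for the LLM: perturb the current best solution
    best_x, _ = max(history, key=lambda p: p[1])
    return best_x + rng.gauss(0.0, 0.5)

def optimize(steps=50, seed=0):
    rng = random.Random(seed)
    history = [(0.0, score(0.0))]
    for _ in range(steps):
        _prompt = build_meta_prompt(history[-10:])  # recent scored pairs
        x = propose(history, rng)
        history.append((x, score(x)))
    return max(history, key=lambda p: p[1])

best_x, best_s = optimize()
```

The customization burden shows up in `build_meta_prompt` and `score`: both are problem-specific, which is why I doubt this scales without per-problem effort.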

- Maybe let the LLM explain its thinking process/logic to help improve existing algorithms (rather than using it as a standalone optimizer). I once did that for an allocation problem, and it was able to produce a basic algorithm for a feasible solution.

- An essential topic in optimization is proving optimality; having AI provide insights on proofs could also be cool.

- The authors compared their algorithm with heuristics on randomly generated TSP instances (why not TSPLIB?); the claim is that the LLM can do better than heuristics on small problems. They showed an interesting metric on the number of successes, suggesting we might need to sample multiple LLM runs to get a good result.
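
The multi-run point has a simple back-of-the-envelope form: if one stochastic run succeeds with probability p, best-of-k succeeds with probability 1 - (1 - p)^k. A sketch, with a fake `run_once` standing in for one LLM run (the 30% success rate is made up, not from the paper):

```python
import random

def run_once(rng):
    # stand-in for one stochastic LLM run; succeeds 30% of the time
    return rng.random() < 0.3

def success_rate(n_runs=10_000, seed=1):
    # empirical per-run success probability over many independent runs
    rng = random.Random(seed)
    return sum(run_once(rng) for _ in range(n_runs)) / n_runs

def best_of_k(p, k):
    # probability that at least one of k independent runs succeeds
    return 1 - (1 - p) ** k

p = success_rate()
```

Even a modest per-run success rate compounds quickly: at p = 0.3, five samples already succeed over 80% of the time, which is presumably why reporting number-of-successes matters.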

- One big question I did not find an answer to is how they replicate runs, given the stochastic nature of LLMs. Even at zero temperature, an LLM is only relatively less random. This extends to many LLM-application papers, so there must be papers discussing it.
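
To make the zero-temperature point concrete: in a toy decoder, temperature 0 reduces to greedy argmax, which is deterministic here. Real serving stacks can still vary across runs (batching, floating-point reduction order on GPUs), which is exactly the replication problem. A minimal sketch, assuming nothing about any particular model:

```python
import math
import random

def sample_token(logits, temperature, rng):
    # temperature 0: greedy argmax, deterministic in this toy code
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # otherwise: softmax over temperature-scaled logits, then sample
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1
```

In this idealized code, runs replicate at temperature 0 (or any temperature with a fixed seed); the gap between this and production LLM behavior is the part the papers rarely address.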
