galimaufry | 4 years ago
I think this looks like a bigger problem specifically because you are in AutoML.
Suppose you are training a GAN. There's notoriously a certain amount of luck involved in traditional GAN training, because you need the adversary and the generator to balance each other just right. So people try many times until they succeed. They probably weren't even recording each attempt, so they don't report how many times they had to run before getting good results.
From an AutoML point of view, this is BS work - the training procedure cannot be automated, and (apart from using the actual seeds) the work cannot be reproduced.
But from the point of view of everyone else, maybe it is fine. They get a generator model at the end, it works, other people can run it.
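The "try many seeds, keep the lucky one" workflow could at least be logged. A minimal sketch of that idea, where `train_gan` is a hypothetical stand-in for a real training run (here it just returns a seed-dependent score to mimic the luck factor):

```python
import random

def train_gan(seed):
    # Hypothetical stand-in for a real GAN training run: in practice this
    # would train a generator/discriminator pair; here it just returns a
    # fake "quality" score that varies with the seed, mimicking the luck.
    rng = random.Random(seed)
    return rng.uniform(0.0, 1.0)

def run_attempts(seeds, threshold=0.9):
    # Record every attempt (seed -> score) instead of discarding failures,
    # so "how many tries before a good run" is itself reproducible.
    log = {seed: train_gan(seed) for seed in seeds}
    good = [s for s, score in log.items() if score >= threshold]
    return log, good

log, good = run_attempts(range(20))
print(f"{len(good)} of {len(log)} runs cleared the threshold")
```

Even this much, a seed-to-outcome log, would let someone report the failure rate alongside the lucky checkpoint.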
pnt12 | 4 years ago
I think from a practical perspective, it is fine. You want results and you have a black box algorithm that produces them, fine.
From an academic perspective, AI research is a mess. The reason you try something is not a logical theory, but a "hunch", or replicating a similar algorithm applied in a parallel area. If it doesn't work, you change some parameters and run it a few more times. Still not working? Maybe you extend the network to include some more inputs and hope for better results.
I did my thesis in machine learning and was very disappointed with the state of the field.
caddemon | 4 years ago