hapanin|4 years ago
I'm currently working as a machine learning engineer, but doing a lot more engineering than learning. I'm also applying to PhD programs focusing on knowledge graphs and common-sense reasoning, but I'm willing to learn any other field. If I don't get in, some independent research is still a good career move.
I have ideas of my own (goal-oriented question asking), but I'm happy to run support on other people's stuff.
fxtentacle|4 years ago
1. I have a complete optical flow training dataset here. It is built specifically to highlight the shortcomings of some state-of-the-art AI algorithms and (as expected) they fail on it. We used it internally to fix those issues in our own proprietary AI. To turn this dataset into a publication, someone would now need to get access to, or re-implement, most of the top algorithms on other benchmarks like Sintel and run them against the new dataset. Otherwise there would be no comparison scores and, hence, no use for anyone to evaluate their algorithm on the new dataset.
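For whoever runs that comparison: the standard score on benchmarks like Sintel is average endpoint error (EPE). A minimal sketch of the scoring loop, with made-up toy flow fields standing in for real method outputs, might look like:

```python
import numpy as np

def endpoint_error(flow_pred, flow_gt):
    """Average endpoint error (EPE): mean Euclidean distance between
    predicted and ground-truth per-pixel flow vectors, shape (H, W, 2)."""
    diff = flow_pred - flow_gt
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())

# Toy comparison: each "method" produces a predicted flow field.
H, W = 4, 4
flow_gt = np.zeros((H, W, 2))
methods = {
    "perfect": flow_gt.copy(),
    "offset_by_one_px": flow_gt + np.array([1.0, 0.0]),
}
for name, pred in methods.items():
    print(f"{name}: EPE = {endpoint_error(pred, flow_gt):.2f}")
# → perfect: EPE = 0.00
# → offset_by_one_px: EPE = 1.00
```

Running every published method through a loop like this against the new dataset is what produces the comparison table a dataset paper needs.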
2. By participating in the gocoder Bomberland AI competition, I noticed that most state-of-the-art RL algorithms fail badly in an environment with a non-deterministic enemy. It would probably be very useful to package that environment as an OpenAI Gym Python package and then do a proper best-of-3 evaluation of common algorithms on it. It'll mostly be failures, which is a great backdrop against which to propose a small tweak to DQN that makes things work: estimate the variance in addition to the mean, so that you can act on the 30th percentile of the expected state-action value.
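The action-selection half of that tweak could be sketched like this, assuming the network outputs a Gaussian (mean, variance) per action, in which case the 30th percentile is mean + z·sigma with z = Φ⁻¹(0.30) ≈ −0.524; the function name and toy numbers are my own illustration:

```python
import numpy as np
from statistics import NormalDist

def pessimistic_action(q_mean, q_var, quantile=0.30):
    """Pick the action maximizing the chosen quantile of the predicted
    state-action value, given per-action mean and variance estimates."""
    z = NormalDist().inv_cdf(quantile)  # ≈ -0.524 for the 30th percentile
    q_quantile = q_mean + z * np.sqrt(q_var)
    return int(np.argmax(q_quantile))

# Toy example: action 0 has the higher mean but much higher variance,
# so the pessimistic 30th-percentile criterion prefers action 1.
q_mean = np.array([1.0, 0.8])
q_var = np.array([4.0, 0.01])
print(pessimistic_action(q_mean, q_var))  # → 1
```

Against a non-deterministic enemy, this penalizes high-variance moves that a plain greedy-on-the-mean DQN would happily take.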
Both dataset papers and AI algorithm ranking papers tend to get cited a lot. But they require a lot of effort to produce.