For online education I prefer courses like this, where actual university lectures and assignments are published, over what you can find on Coursera and the like. There is still a big gap in quality and depth.
RL is a good theoretical solution for personalization: given a user state, select the action that maximizes a long-term reward (e.g. revenue or engagement). Building the implementations is tricky because, unlike Go/Chess/Atari, it's hard to simulate humans. So you have to train the agents offline on batches of data (i.e. using historic data from the agent's past actions). This is challenging because you don't get as many chances to try different hyperparameters. It's starting to be used more in industry, though.
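To make the offline setting concrete: with logged action propensities you can estimate how a different policy would have performed without deploying it, e.g. via inverse-propensity weighting. Everything below (states, actions, numbers) is invented for illustration.

```python
# Hypothetical logged interactions from the old (logging) policy:
# (user_state, action_taken, prob_of_that_action_under_logging_policy, reward)
logs = [
    ("new_user",  "promo_A", 0.5, 1.0),
    ("new_user",  "promo_B", 0.5, 0.0),
    ("returning", "promo_A", 0.8, 0.0),
    ("returning", "promo_B", 0.2, 2.0),
]

def new_policy_prob(state, action):
    # Target policy we would like to evaluate: always show promo_B.
    return 1.0 if action == "promo_B" else 0.0

# Inverse-propensity estimate of the new policy's average reward,
# computed purely from historic data -- no live experiment needed.
estimate = sum(new_policy_prob(s, a) / p * r for s, a, p, r in logs) / len(logs)
print(estimate)  # 2.5 on this toy data
```

The reweighting corrects for the fact that the logging policy chose some actions more often than the new policy would.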
I used to be a bit more excited about RL. I mean, it's still definitely something I have to learn, but one aspect of it _seems_ lacking to me and is messing with my motivation to learn it. I'm sure someone will happily explain all the ways I am ignorant.
It seems like there is a lot of emphasis on "direct RL" or whatever, where they don't really think about the model much; I guess it's often implicit inside the policy or something?
But it seems to me as someone who has just started learning about robotics, that I absolutely need to first verify that I have an accurate model of the environment which I can inspect. It seems like a lot of RL approaches might not even be able to supply that.
I mean, what I am stuck on as far as creating a robot (or virtual robot) is having a vision system that does all of the hard things I want. I feel like if I can detect edges, surfaces, and shapes in 3D, object parts and whole objects, with orientation etc., in a way I can display and manipulate, that level of understanding will give me a firm base to build the rest of the learning and planning on.
I know all of that is very hard. It seems like they must have tried that for a while and then kind of gave up and headed down the current direction of RL? Or just decided it wasn't important. I still think it's important.
You do not necessarily need to fully know the environment you are in, but you do need to be able to evaluate how good your available actions are in terms of a utility function. That's how an RL algorithm can learn that going through a wall is a bad decision (reward(ahead) <= 0) and then choose something else, such as turning left or right (reward(left) > 0, reward(right) > 0).
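A toy version of that point: the agent needs no map of the world, only a utility estimate per available action. The states, actions, and reward numbers here are invented for illustration.

```python
# The agent ranks actions by estimated utility, without any world model.
def reward(state, action):
    if state == "facing_wall" and action == "ahead":
        return -1.0   # walking into the wall is penalized
    return 0.5        # turning keeps the robot moving

actions = ["ahead", "left", "right"]
best = max(actions, key=lambda a: reward("facing_wall", a))
print(best)  # a turn, never "ahead"
```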
I think the main problem with RL is deciding whether a utility function, as precise as it may be, can fully capture all the nuances of an environment. Another problem is adapting to the environment: having new actions added dynamically to your model and having it converge as quickly as possible.
One thing to keep in mind about direct (learn the policy/behavior) versus indirect (learn the model and then simulate behaviors on the model to choose the best) is that sometimes it's much easier to find a good enough policy than it is to learn an accurate enough model for simulation. Driving is a good example of this. Most of the time all you need to do is stay in your lane and obey the rules for intersections. A simulation of a driving environment, on the other hand, is quite difficult.
I watched this course and David Silver is a great lecturer, better than anybody else I’ve seen actually. I hope he does more publicly viewable courses in the future.
On one hand you're right: methods like Q-learning are model-free and do not necessarily encode much about state dynamics. The Q-function is a function of state, and while it may not say much about the model, it does encode the most important aspect of the model in terms of solving the task. Namely, it predicts the accumulated reward conditional on subsequent actions. That makes it a somewhat narrow representation of state on its own. But if you consider an environment that has many reward signals, and you learn a Q-function for each, this ensemble of Q-functions can constitute a rich representation of state. Depending on what the reward functions are, the associated Q-functions may be sufficient to construct a full model. So I guess my point is that the learned quantities in RL encode key aspects of state, and when you expand beyond the single-task/single-reward RL setting, the lines between value and model can become blurred.
I've read a bit about genetic algorithms and evolutionary computation at some point. Apparently they achieve good results because they can find discrete solutions to complex, well-defined problems.
Reinforcement learning is something I know even less about. But from what I gathered it is also most successful in well-defined problems and systems (such as games).
So my question is: how do they relate? Is there overlap, and what are the most significant conceptual differences?
A big conceptual point in RL is the focus on the Bellman equation: the value of a state equals the immediate reward plus the discounted value of what follows. If you know the value of every state, you can just always move to the highest-value one.
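In symbols (standard textbook notation, nothing specific to this thread), that statement is the Bellman optimality equation:

```latex
% value of a state = best immediate reward
% plus discounted expected value of the successor state
V^{*}(s) = \max_{a}\Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big]
```

with discount factor $\gamma$ and transition probabilities $P(s' \mid s, a)$; "always move to the highest value" is exactly the greedy policy with respect to $V^{*}$.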
Well-known methods like Q-learning are basically just iterative, approximate methods for solving the Bellman equation, i.e. finding a measure of value for every state of the world such that the Bellman equation is satisfied.
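As a sketch of that iterative idea, here is minimal tabular Q-learning on an invented four-state chain (the environment and all constants are made up, not from the lecture): each update nudges Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a').

```python
import random

# Invented four-state chain: states 0..3, actions move left (-1) or
# right (+1); reaching state 3 pays reward 1 and ends the episode.
GOAL = 3
ACTIONS = [-1, +1]
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(300):                        # episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:           # explore
            a = random.choice(ACTIONS)
        else:                               # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Nudge Q(s, a) toward the Bellman target.
        target = r + gamma * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, "right" beats "left" in every state on the way to the goal.
```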
policy optimization methods don’t do this, but there are still mathematical connections back to the Bellman equation (there is a duality relationship between value functions and policies).
I would say this focus is a big part of what makes the field of RL unique.
Hmm... One way that I look at it is evolutionary computation is an optimization strategy. It's characterized by tracking a population of candidates, discarding the lowest scoring, mutating the survivors, and cross-combining elements from multiple candidates.
RL is an optimization domain. It's the name of the problem, not the solution. You can straightforwardly use evolutionary algorithms on RL problems. However, a lot of the recent success in RL has come from using deep learning to try to solve various RL problems, not from trying evolutionary computation.
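The population loop described above fits in a few lines. The fitness function and all constants below are invented for illustration; real evolutionary-computation libraries add selection schemes, encodings, and so on.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy objective, maximized at x = 3

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(20)]

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                    # discard the lowest scoring
    children = []
    while len(children) < 10:
        a, b = random.sample(survivors, 2)  # pick two parents
        child = (a + b) / 2.0               # cross-combine their elements
        child += random.gauss(0.0, 0.1)     # mutate
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(best)  # close to 3
```

Note that nothing here uses gradients or the structure of the problem — only fitness scores, which is what makes the approach so general.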
Genetic algorithms randomly change things and then test to see if they're better. Reinforcement learning does analysis on past observations and then makes deliberate improvements.
I'm taking an adaptation of this class. My professor is simply reusing Silver's slides, so I'm watching the original lecture instead. Highly recommend!
That's one of RL's traditional formulations, yes. Bandit problems are another. They've been generalized together into POMDPs (partially observable Markov decision processes).
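For concreteness, a minimal bandit agent looks like this: no state, just repeated action choice under uncertainty. The arm payout probabilities are invented for illustration.

```python
import random

true_payout = [0.3, 0.5, 0.7]   # per-arm success probabilities, unknown to the agent
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]        # running estimate of each arm's mean reward
eps = 0.1

random.seed(0)
for _ in range(5000):
    if random.random() < eps:
        arm = random.randrange(3)        # explore a random arm
    else:
        arm = values.index(max(values))  # exploit the current best guess
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best_arm = values.index(max(values))
print(best_arm)  # the highest-paying arm
```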
jointpdf|5 years ago
https://youtube.com/playlist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ...
bearzoo|5 years ago
In case you haven't seen or read the following: https://bair.berkeley.edu/blog/2019/12/12/mbpo/
ddon|5 years ago
https://www.youtube.com/watch?v=WXuK6gekU1Y