It does a pretty good job of maintaining system responsiveness and keeping latency in check when there's sustained memory pressure; at least, much better than the simpler hysteresis loops that are commonly used for this sort of thing.
I'll be the first to admit that I hastily presumed you were conflating the control theory PID acronym with "process ID", then I looked at the code and had a genuine WT-actual-glorious-F moment.
It's an OK explanation of a PID controller. They're easy to code; tuning is the hard part. There are lots of approaches to tuning. This author offers a simple one.
I note that he cites "James Clerk Maxwell: “On Governors”. Proceedings of the Royal Society, #100, 1868".[1]
Yes, 1868. Yes, that Maxwell, of Maxwell's Equations. That paper is worth reading. Right there, linear control theory was born. And that's about where the state of the art was until 1950 or so.
Modern control theory is closely tied to machine learning. Except that the control theory people want solutions that don't suddenly do funny stuff for some inputs. The math is way beyond me. Even control theory PhDs have trouble now.
Modern control theory (not counting RL, which is rarely, if ever, used on live controllers) is essentially two camps: PID with tuning and MPC (model predictive control). There is some optimal control theory not in either of these camps, but it’s not the norm.
The latter is often studied under the umbrella of optimization theory (convex optimization, mostly), but most aspects of it are well understood for most systems we care about (with some notable exceptions, of course). I wouldn’t quite say that “even control theory PhDs have trouble now”, as there is plenty of work being done in the field and much of it is quite useful (but they certainly have many topics to choose from for their theses!) :)
I think there are some important steps between Maxwell and 1950 that are worthy of mention. Black's paper on negative feedback amplifiers, for one; the invention of the PID controller itself, for another...
One of my coworkers (colmmacc@ on twitter) has been giving talks [1][2] for years on how PID loops and control systems theory apply to software, and I’m totally convinced that it’s an under-mined vein of excellent thought. I’m working on applying this to a piece of software I’m building and I’m very excited.
He’s not wrong, but the fundamental problem is that control theory has a very limited view of “correct behavior”: stability around a desired point or trajectory. If you can define your desired system state as a fixed or time varying value that can be determined ahead of time, and you would like to prove that your system stays close to that point for some definition of “close”, then this tool may be for you.
But if you can’t, it’s much harder to see how to apply it. And a lot of software problems can’t be defined in such strictly mathematical terms.
Also, if your system is linear or approximately linear then we have good tools. If your system is really honestly nonlinear it gets more dicey. You have to either hope that somebody has already derived a controller for systems of the class you are using, or you have to do novel numerical analysis in order to derive a controller that will stabilize it.
I know little about web services orchestration, but I feel that a lot of algorithms from control theory / signal processing / optimization / continuous math / etc will apply to software contexts if one “squints right”. And such solutions will often end up being more robust and better behaved (and arguably more intuitive) — an ounce of math often saves a thousand lines of clumsy code. As Peter Naur argued, writing software is primarily about building a functional theory of the domain.
I do have all the material from "PID Without a PhD" in there -- just watch the three videos with "PID" in the title.
I also recommend Brian Douglas's channel: https://www.youtube.com/user/ControlLectures if for no other reason than because I haven't posted a new video for three years -- but then, he's been posting on some Matlab channel, so you'll want to start with his and then move to that one.
What would you recommend if the feedback is noisy enough that a derivative term would be mostly noise, but using a simple FIR filter to remove the noise introduces a delay which makes the system less responsive?
This is a well-explained and concise guide to the theory and the implementation behind it.
But like many things, there's no replacement for getting your hands on a real life system and gaining real life intuition.
The best way to learn this? Get or make a test bench with a safe-ish motor-and-encoder system. Play with the values and watch how the system performs. Increasing the P gain makes the system "stiffer", but at the expense of stability. Put your hand on the flywheel and "fight it" safely; you can feel the I (integral) term winding up against your steady-state errors. It's quite magical.
As you start to chase the highest performance for your system, you start to think about alternative control strategies. Eventually you will dream up the concept of a feedforward control loop. Try to automate the picking of PID values and you start to learn about the Ziegler–Nichols tuning method. Gain scheduling. Nonlinear control theory. It never stops.
(If you do any of this, please make sure the motor/flywheel won't kill you if it goes unstable, and please wire an E-stop that's easy to reach when things go wrong.)
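The Ziegler–Nichols step is mechanical enough to write down: crank P (with I and D off) until the loop oscillates with steady amplitude, record that ultimate gain Ku and the oscillation period Tu, and apply the classic rules. A sketch in Python (the classic rules give famously aggressive gains, so treat the result as a starting point for hand-tuning, not an endpoint):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic closed-loop Ziegler-Nichols tuning rules.

    ku: ultimate gain -- the P-only gain at which the loop first
        oscillates with constant amplitude
    tu: period of that oscillation, in seconds
    Returns (kp, ki, kd) for a parallel-form PID.
    """
    kp = 0.6 * ku
    ti = 0.5 * tu      # integral time constant
    td = 0.125 * tu    # derivative time constant
    return kp, kp / ti, kp * td
```

So if your bench rig starts oscillating at a P gain of 10 with a 2-second period, `ziegler_nichols_pid(10.0, 2.0)` hands you roughly (6.0, 6.0, 1.5) to start from.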
This is how I teach middle school and high school students basic controls. They implement and tune a proportional controller, then the derivative part, and then I give them a feedforward model for the plant (usually a brushed DC motor). The PD controller and feedforward are very easy to write and quite approachable once you see how simple they are, and it can be very exciting to see a motor “do as you command” before your eyes.
I avoid integral control because it’s less effective and much harder to deal with than good feedforward. Most of the mechanisms we use (in the FIRST Robotics context) have a feedforward model and good system-identification tools so this works out well.
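For concreteness, the feedforward model in question is usually just three terms. A sketch of the common kS/kV/kA form for a brushed DC motor (the constants in the example are invented; in practice they come from system identification):

```python
def motor_feedforward(ks, kv, ka, velocity, acceleration=0.0):
    """Voltage to feed a brushed DC motor open-loop:
    ks breaks static friction, kv cancels back-EMF (which grows
    with speed), ka supplies the accelerating torque. A small PD
    loop on top then only has to correct the model's leftovers."""
    sign = (velocity > 0) - (velocity < 0)  # -1, 0, or +1
    return ks * sign + kv * velocity + ka * acceleration
```

With made-up constants kS=0.5 V, kV=2.0 V·s/m, kA=0.1 V·s²/m, asking for 3 m/s and 1 m/s² yields 6.6 V before any feedback correction at all.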
Real life systems don't have to be physical. You can do a lot by playing around with digital PID controllers, graphing the sensor value over time and seeing how the graph changes as a response to changes in the gains. The benefit of this approach is that anyone with some programming skills can do it right away without any hardware.
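To make that concrete, here is the kind of toy you can play with: a first-order plant (a crude model of, say, a heater) driven by a PI loop, all in plain Python. Everything here, the plant time constant and the gains, is invented for illustration; graph `traj` and watch how the shape changes as you move the gains:

```python
def simulate(kp, ki, setpoint=50.0, steps=2000, dt=0.1):
    """Drive a first-order plant x' = (u - x) / tau with a PI loop
    and return the trajectory of the "sensor value" over time."""
    tau = 5.0                      # plant time constant (made up)
    x, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt     # accumulate error for the I term
        u = kp * error + ki * integral
        x += (u - x) / tau * dt    # Euler step of the plant
        history.append(x)
    return history

traj = simulate(kp=2.0, ki=0.5)
```

With these gains the response settles cleanly on the setpoint; try other values and watch the trajectory change character, no hardware required.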
I designed control systems for brushless servo motors and more recently did the power control for very power hungry ASICs of a very well known company.
Tuning is important but when you come to design a control system, you need to look at the overall system - and this is something textbooks don't really tell you. Control systems are all about reacting quickly enough and accurately enough. Delay and poor incoming/outgoing signal quality are the enemies of the control system:
1. The delay from sensor to controller (e.g. SPI bus incurred delay between your gyro sensor and your processor)
2. The bandwidth of each component. If your gyro sensor has only 10 Hz of bandwidth, it'd be difficult to control anything reasonably fast. Note that bandwidth is usually quoted as the frequency of 3 dB attenuation (half the power, roughly 70% of the amplitude) - that's not enough; you also need to understand the phase distortion. In other words, as you approach the bandwidth limit, does your signal get distorted in time? Because that's very bad for a control system.
3. The resolution of your actuator - if you end up using an on-off actuator, it'd be impossible to do any precision control. Do you want to position a motor accurately? Make sure you use more than 8-bit PWM.
4. Static/dynamic response: things are usually not nicely linear. Things at standstill take a different amount of effort to move than once they are in motion.
My experience has shown time and time again that if you get the system fundamentals correct, implementing and tuning the control loop is reasonably trivial. The secret is having a system that's an order of magnitude faster (higher bandwidth) than what you're trying to control.
You really need to think about control systems in the frequency domain: "what I'm controlling requires a bandwidth of X, I need to sample at n * X, my processor is this fast, my output signal is that accurate, this is the end-to-end delay".
Someone once said "any non-linear system becomes linear when you sample it quickly enough". I don't know who said it or whether it's even remotely true for all parts of life - for control systems though, it's definitely a rule to go by.
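One way to put rough numbers on the "delay is the enemy" point: a pure transport delay of T seconds subtracts 360·f·T degrees of phase at frequency f, straight off your phase margin at the loop's gain crossover. The numbers below are made up for illustration:

```python
def phase_loss_deg(delay_s, crossover_hz):
    """Phase margin eaten by a pure transport delay at the loop's
    gain-crossover frequency: a delay of T seconds contributes a
    phase lag of 360 * f * T degrees at frequency f."""
    return 360.0 * crossover_hz * delay_s

# e.g. a 2 ms sensor-to-actuator delay in a loop crossing over at
# 50 Hz costs 36 degrees of phase margin.
loss = phase_loss_deg(0.002, 50.0)
```

Two milliseconds sounds harmless until you see it as 36 degrees at a 50 Hz crossover; this is why the order-of-magnitude-faster rule of thumb above works so well.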
It's called "PID Without a PhD" but it still assumes at least an undergrad level training.
I wish it had been around when I was studying EE-2-Control. That was not one of my favourite courses, but it is one of the bits of theory that I have reached for the most since, and this guide has been a very useful reminder of much of it.
'The "PID" in "PID Control" stands for "Proportional, Integral, Derivative". These three terms describe the basic elements of a PID controller.'
Classic case of a really simple concept being obfuscated by jargon. If you've done any sort of game development and implemented smooth movement and animations, you've probably accidentally implemented a PID controller.
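For anyone who hasn't seen one written out, the whole algorithm fits in a few lines. A minimal sketch in Python (positional form, with output clamping; the gains used below are made-up illustration values):

```python
class PID:
    """Minimal positional-form PID controller with output clamping."""

    def __init__(self, kp, ki, kd, out_min=-float("inf"), out_max=float("inf")):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # P: proportional to the current error
        p = self.kp * error
        # I: accumulates past error (removes steady-state offset)
        self.integral += error * dt
        i = self.ki * self.integral
        # D: rate of change of error (damps the response)
        d = 0.0
        if self.prev_error is not None and dt > 0:
            d = self.kd * (error - self.prev_error) / dt
        self.prev_error = error
        return min(max(p + i + d, self.out_min), self.out_max)
```

Call `update(setpoint, measurement, dt)` once per loop iteration and feed the result to your actuator. Note that this sketch clamps only the output; a production controller would also limit the integral term to avoid windup.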
My prof spent 8 whole months teaching us systems modeling and PID mathematics only to tell us there's no reasonable way to set your PID values... You gotta guess and check!
There are various structured ways to set PID gains.
The mentioned use of optimal control is one, although if you go there you need to be careful about setting your costs -- LQR and other optimal schemes assume a perfect model of the plant; the higher you set Q the higher the probability that your system will ring or go unstable.
There are various robust control methods, none of which I've used in a Good Long While, because for the most part swept-sine measurements work nicely.
The one I've used most involves taking swept-sine measurements to get the plant response in Bode plot form, then using either Bode or Nyquist plots (or both) to tune my PID (or whatever controller I'm using).
For a large class of industrial problems, swept-sine measurements won't do, either because measurements need to be undertaken on production lines that are in operation, and the operators get cranky about things like noticeable sinusoidal variations in the product (think aluminum foil or paper), or because even when operated within safe limits, large machines undergoing swept-sine measurements can be downright scary. In such cases one usually ends up using random excitation or steps (often called "bump testing" if you're in an oil refinery or a paper plant) and some sort of system identification step like ARMA.
If you do end up doing testing followed by system ID, you'll most likely get an approximate plant transfer function -- so it's wise to either use a grain of salt when doing your optimal design, or to use some robust design method or other.
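As a sketch of how simple the system-ID step can be for a first-order-ish plant: estimate the gain from the settled value of a bump test, and the time constant from the 63.2% crossing. (Real tools fit ARMA-type models properly; the synthetic data below stands in for a logged bump response.)

```python
import math

def fit_first_order(times, outputs, step_size):
    """Crude first-order fit of a step ("bump") test: steady-state
    gain K from the settled value, time constant tau from the time
    the response first reaches 63.2% of it."""
    y_final = outputs[-1]
    k = y_final / step_size
    target = 0.632 * y_final
    for t, y in zip(times, outputs):
        if y >= target:
            return k, t
    raise ValueError("response never reached 63.2% of its final value")

# Synthetic bump test: true K = 2.0, true tau = 3.0 s
times = [i * 0.01 for i in range(3001)]
outputs = [2.0 * (1.0 - math.exp(-t / 3.0)) for t in times]
k_est, tau_est = fit_first_order(times, outputs, step_size=1.0)
```

The recovered transfer function is approximate, which is exactly why the grain of salt (or a robust design method) is warranted downstream.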
What? If you have a good model of your system then you can (relatively easily) turn it into an optimization problem and use an LQR to choose gains for you. It’s true, this method still gives you knobs you might have to tune (Q and R), but these are easily conceptualized because they slide the cost of state excursions and control effort along a Pareto boundary. That means you can get optimal gains just by saying how much you penalize a state excursion vs. how much you penalize control effort (e.g. distance from your reference point vs. fuel use, or something like that).
It’s also certainly true that an LQR won’t help you if you have no a priori knowledge of your system, but for many of the mechanisms we need to control this isn’t a problem.
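A toy version for the one-dimensional case shows how little machinery this needs: iterate the discrete Riccati equation to a fixed point and read off the gain. (Scalar only, with invented weights; for real state vectors you'd reach for a proper DARE solver such as scipy's.)

```python
def scalar_dlqr(a, b, q, r, iters=1000):
    """LQR gain for the scalar plant x[k+1] = a*x[k] + b*u[k],
    minimizing the sum of q*x^2 + r*u^2. Iterates the discrete-time
    Riccati recursion until it settles, then returns K for u = -K*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)
```

Raising q relative to r buys tighter regulation at the price of more control effort: exactly the Pareto slider described above. For a marginally stable plant (a = b = 1) with q = r = 1, the gain works out to about 0.618, leaving a comfortably stable closed loop.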
Huh? They must have been joking. There are many cases with models that are good enough that you can calculate the gains depending on the desired response. I have done it many times.
This appears to be a really nice series of articles, but wow, what an annoying format. Just give me a .PDF or single .html page, don't make me click 'Next' thirty times. :(
He's not even doing that to show ads. He's just doing it because, hey, it's not like he wants to read it.
In some cases, especially with hardware, it is essential that PID tuning be done safely.
I've implemented this reinforcement learning algorithm in C++ for safely tuning a PID controller on a hardware system with successful results i.e. has been successfully deployed at customer sites for in-situ tuning.
I assembled a controller for my smoker from a cheap PID from aliexpress, a relay and a fan. It works like a charm and it's an order of magnitude cheaper than buying a complete solution.
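One detail worth knowing if you copy this build: a relay is an on-off actuator, so these cheap PID boxes drive it with slow "time-proportioning" PWM, i.e. the controller output becomes seconds-on per fixed window. A sketch of the idea (the 10 s window is a made-up but typical value; the smoker's thermal mass averages it out):

```python
def relay_on_time(duty_fraction, window_s=10.0):
    """Time-proportioning output: turn a PID output in [0, 1] into
    seconds of relay-ON per control window. A slow window spares the
    relay contacts while the thermal mass smooths the on/off cycling."""
    duty = min(max(duty_fraction, 0.0), 1.0)
    return duty * window_s
```

So a PID output of 35% becomes 3.5 seconds of heat per 10-second window, and outputs beyond the 0-100% range are simply saturated.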
This guide is good at going in depth without being too heavy on the jargon. I've found that it's still a bit much for younger students learning about the concept and related math for the first time. George Gillard's intro to PID [1] has been a staple of VEX robotics instruction because it fills this niche -- it's an excellent resource for teaching high school students and younger.
I read this in a magazine years ago; super helpful. I learned the most when I sat down and worked through it by hand. In practice, I've never needed D; PI has been enough to do the job.
In the typical pedagogy, D is only necessary if you feel the PI isn’t responding fast enough (for rapid control). D, being very sensitive to high frequencies, helps with that.
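The usual compromise (and a partial answer to the noise question upthread) is to low-pass the derivative term itself with a cheap first-order filter rather than a long FIR: you trade a little bandwidth on D for a lot of noise rejection, with far less added delay. A sketch (signal and constants invented for illustration):

```python
def filtered_derivative(samples, dt, tau):
    """Finite-difference derivative passed through a first-order
    low-pass with time constant tau -- the usual 'derivative filter'
    found in industrial PID implementations."""
    alpha = dt / (tau + dt)        # first-order smoothing factor
    d_filt, out = 0.0, []
    prev = samples[0]
    for x in samples[1:]:
        raw = (x - prev) / dt      # raw, noisy finite difference
        d_filt += alpha * (raw - d_filt)
        out.append(d_filt)
        prev = x
    return out
```

On a clean ramp the estimate converges to the true slope after a few time constants, while high-frequency noise in `samples` gets attenuated by the same mechanism instead of being amplified by the differencing.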
I'm currently in Georgia Tech's OMSCS program and taking Sebastian Thrun's AI for Robotics class on Udacity (free to access). I thought the lesson on the PID controller was really solid. Here's the first lecture.
https://www.youtube.com/watch?v=-8w0prceask&feature=emb_logo
[1] https://www.maths.ed.ac.uk/~v1ranick/papers/maxwell1.pdf
1: https://www.infoq.com/presentations/pid-loops/
2: https://m.youtube.com/watch?v=O8xLxNje30M
http://brettbeauregard.com/blog/2011/04/improving-the-beginn...
B. Adjust P for 1/4-wave damping, adjust D to improve response, adjust I to correct offset.
Repeat B. Repeat B. Repeat B. ...
https://github.com/befelix/SafeOpt
http://papers.nips.cc/paper/6692-safe-model-based-reinforcem...
I wrote a PID-based fan-control script for my NAS. It uses the temperature of the hottest drive to determine whether it needs to spin up the fans or not. Works very, very well.
The end result is an extremely quiet NAS even with 24 drives.
https://github.com/louwrentius/storagefancontrol
1. http://georgegillard.com/documents/2-introduction-to-pid-con...