friendzis | 13 days ago
Deming requires quite a bit of knowledge and understanding of failure/success modes. The core tenet of Deming is that every output is the result of some process, and therefore output is controlled by controlling* the process itself: look at your process and tackle failure modes in priority order.
Drucker, on the other hand, puts the process under the fog of war and essentially says: apply pressure to the process outputs and let the process adjust itself. It requires much less understanding of the underlying processes to apply sensibly.
* - Process control in Deming is mostly about variability.
asplake|13 days ago
Worth adding that Deming (after Shewhart) recognised two kinds of variation: special cause (specific to the work item in question) and common cause (an artifact of the process itself). That knowledge work involves a lot more of the former than manufacturing does is no excuse for inattention to the latter.
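To make the distinction concrete, here is a minimal sketch (my own, not from the thread) of an individuals (XmR) control chart, the Shewhart-style tool behind it: common-cause variation is estimated from the average moving range, and points falling outside the 3-sigma limits are flagged as candidate special causes.

```python
import statistics

def xmr_limits(xs):
    """Center line and 3-sigma limits for an individuals chart.

    Sigma is estimated as mean moving range / d2, where d2 = 1.128
    is the standard XmR constant for subgroups of size 2.
    """
    center = statistics.fmean(xs)
    mr_bar = statistics.fmean(abs(b - a) for a, b in zip(xs, xs[1:]))
    sigma = mr_bar / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

def special_causes(xs):
    """Indices of points outside the control limits."""
    _, lo, hi = xmr_limits(xs)
    return [i for i, x in enumerate(xs) if x < lo or x > hi]

# Ten hypothetical cycle-time measurements with one disruption at index 6:
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 14.5, 10.0, 9.9, 10.1]
print(special_causes(data))  # -> [6]
```

Note that estimating sigma from the moving range (rather than the overall standard deviation) is what keeps an outlier from inflating the limits and masking itself; everything inside the band is common cause and only the process, not the individual data points, can be blamed for it.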
esafak|13 days ago
Think Stats: Probability and Statistics for Programmers (https://allendowney.github.io/ThinkStats/)
Computer Age Statistical Inference (https://hastie.su.domains/CASI/)
Statistical Rethinking (https://xcelab.net/rm/)
An Introduction to Statistical Learning (https://www.statlearning.com/)
Obviously maths is going to be involved to do the subject justice. These recommendations are more about applied statistics, but that's the foundation. From there it is a small transition to statistical process control.