I think current-generation AI (deep CNNs, not symbol-based reasoning) is enough to be a paradigm-changer, because it lets us record empirical knowledge.
Computers are good at recording structured knowledge - who owns what, which merchandise goes where, the things that have been recorded on stone tablets since time immemorial. But human experts also have a different type of knowledge: empirical, accumulated through years of experience. Artisan crafters "know" how good a material is by touch, smell, and sight, without always being able to say WHY or HOW this wood is better than that wood for this table. This is why apprenticeship with a master was a key part of developing an artisan - you'd absorb SOME of this empirical knowledge from a master who took a lifetime to develop it.
Humanity took a huge step forward by converting a lot of empirical knowledge into structured knowledge through models (mathematical and otherwise) and books. Instead of an apprenticeship with a mason, a builder now goes to school and learns how to structurally design a building based on construction codes. This allows huge scaling of knowledge, at the expense of missing subtle details that are not modelled (spherical cows in a vacuum, right?).
Reading this article, it just occurred to me that deep learning may be the tool to record this type of empirical knowledge. And it's going to scale out - in a way the human version never did - because digital copying is cheap, and the knowledge doesn't die with the artisan who developed it - it only gets more accurate. The models we build of reality will get more and more accurate - the spherical cows will grow legs in the air, not in a vacuum.
Humanity will start doing more and more things without understanding why, just because "the computer said so", and things will work out when the computer says they will. Black boxes will explode in usage, and God help those who end up on the side where the computer says no.
> Humanity will start doing more and more things without understanding why, just because "the computer said so", and things will work out when the computer says they will. Black boxes will explode in usage, and God help those who end up on the side where the computer says no.
This paragraph scares me. I'm not convinced that technology becoming good enough that it no longer needs to be understood to be used is a good thing in most cases. Especially when it's a black box.
> Humanity will start doing more and more things without understanding why, just because "the computer said so", and things will work out when the computer says they will. Black boxes will explode in usage, and God help those who end up on the side where the computer says no.
We have been using objects of the kind described in your last paragraph, called compilers, since the 1950s, and with the growing number of portability-focused, high-performance DSLs and frameworks like TensorFlow and oneAPI, we are only going further in this direction. Yet 70 years after the advent of compilers, there are still people who know how to open up the machine, improve it, and fix it, and there probably always will be.
I don't see how machine learning, at least in its current non-AGI state, will be any different. It's just that your average end-user will have no idea how to "open up the machine", but that's also true for compiler technology today.
Your first paragraph is spot on. Some people are hell-bent on developing AGI (and these people should exist), and some are SOTA-chasing by tweaking hyperparameters to the extreme and overfitting the data (that work is _mostly_ useless), but most people do not realize that the amount of AI we have right now, right this day, is enough to bring in fundamental and permanent paradigm changes in as many fields as you can think of.
And you are onto something when you say that "empirical records" can be learned by ML models, and that a huge amount of grunt work is required for that. Every Tom, Dick, and Harry can overfit to MNIST today; it used to be hard once. And note the amount of grunt work it took to build MNIST - thousands of human hours. Building datasets does not pay you back instantly, but it benefits everyone. I know people who are paying out of their own pockets to create niche datasets and make them available under the MIT License.
I hope more and more companies and people do that for really niche fields.
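To see how low the bar for "overfitting MNIST" has become, here's a minimal sketch. It uses scikit-learn's bundled 8x8 digits dataset as a small stand-in for real MNIST (downloading the actual dataset needs extra setup), so the exact accuracy is illustrative, not a benchmark:

```python
# Minimal sketch: a linear classifier on scikit-learn's small 8x8 digits
# dataset, used here as a stand-in for MNIST.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 pixel features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

A few lines and no domain knowledge gets well over 90% on held-out digits; the hard, valuable part was always assembling the labeled dataset in the first place.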
And I don't completely agree about not understanding what DL models do while relying on them. We have started to understand much of it. ML interpretability is a field without much success so far, but I am hopeful. We didn't know jack about how very deep CNNs work, but that changed with the Zeiler-Fergus paper[0], and later with things like Grad-CAM[1]. Now we are trying to fully understand latent representations. We (even I) can create GAN-generated pictures with different hair color, different kinds of glasses, etc. from scratch.
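The core of Grad-CAM is surprisingly small. A toy NumPy sketch of just that step, with made-up activations and gradients (in real use both come from the last conv layer of a trained CNN, via a backward pass on the target class score):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for what a real CNN would provide: feature maps A_k from the
# last conv layer, and dY/dA_k, gradients of the target class score.
activations = rng.random((8, 7, 7))        # 8 channels, 7x7 spatial grid
gradients = rng.standard_normal((8, 7, 7))

# Grad-CAM: weight each channel by its global-average-pooled gradient,
# sum the weighted maps, then ReLU to keep positively contributing regions.
weights = gradients.mean(axis=(1, 2))                              # alpha_k
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
cam /= cam.max() + 1e-8   # normalize to [0, 1] for display as a heatmap
```

The resulting 7x7 map is upsampled onto the input image to show which regions drove the prediction.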
I read no more than two days ago that Microsoft and Peking University researchers found a way to identify "knowledge neurons" in unsupervised pretrained embeddings in NLP, and they can edit facts with that[2].
So, I am optimistic about our "interpretation" future.
> New AI tool calculates materials’ stress and strain based on photos
Surely not - more accurately:
New AI tool intuits materials’ stress and strain based on photos
or
New AI tool guesses materials’ stress and strain based on photos
An experienced engineer could probably also guess roughly what shape the stress and strain gradients might take in a shape, but you wouldn't call such a guess 'calculation'.
It guesses poorly too, based on the examples. The crack growth example is almost laughable - these are nowhere close to where real cracks would form. Cracks start at the inside corners of brittle joints.
The bar stretch is also completely wrong, there are no stress concentrations at the top and bottom; it should be uniform stress or concentrated strain in the center depending on what they're attempting to show.
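For reference, the uniform-stress case described above is one line of arithmetic. A quick sketch with hypothetical numbers (the load, cross-section, and steel modulus are made up for illustration):

```python
# Uniaxial bar in tension: away from the grips (Saint-Venant's principle),
# stress is simply force over cross-sectional area, uniform along the bar.
force = 10_000.0                 # N, hypothetical axial load
width, thickness = 0.05, 0.01    # m, hypothetical cross-section
area = width * thickness         # m^2

stress = force / area            # Pa: uniform, no concentrations
strain = stress / 200e9          # elastic strain, assuming steel (E ~ 200 GPa)
print(f"stress = {stress / 1e6:.1f} MPa, strain = {strain:.2e}")
```

Any heatmap showing hot spots at the ends of a plain stretched bar is contradicting this basic result.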
Thank you for bringing sanity to the comment section. I'm getting really weary of all the "AI" snake oil (I'm a data scientist). There's no way that this will generalize to:
1. materials not in the training set
2. stresses substantially different from those in the training set
So basically, it just memorizes a few patterns and can interpolate between the ones it's seen before. Big deal.
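The interpolation-versus-extrapolation point is easy to demonstrate on a toy problem. A small sketch (my own illustration, not related to the paper's model): fit a network to y = x² on [0, 1], then query it well outside the training range:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))   # training inputs confined to [0, 1]
y = (X ** 2).ravel()

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(X, y)

inside = model.predict([[0.5]])[0]   # interpolation: close to the true 0.25
outside = model.predict([[3.0]])[0]  # extrapolation: typically far from the true 9.0
print(inside, outside)
```

Inside the training range the fit is excellent; outside it, a ReLU network just continues linearly, so the prediction at x = 3 is usually nowhere near 9. The same failure mode applies to unseen materials and load cases.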
A good rule of thumb for detecting this variety of bullshit is this: given infinite time and resources, would an intelligent enough human be able to perform the task given the input? In this case, the answer is probably no.
This can be useful if you use it to automate review of regular inspection photos - you might be able to get the computer to recognize strain that would be hard to detect without a strain gauge.
True, nothing stops me from wrapping a brick in a T-shirt and fooling the algorithm. Might be useful as a higher-level classifier or bucketing heuristic.
Sorry, but I don't want a black box for stress testing bridges - or any other infrastructure for that matter. Do black boxes have their place in design? Sure. In validation? Absolutely not.
Then don't use it for that! Why are there so many naysayers dumping on interesting things? Engineers aren't idiots who are going to blindly rely on unknown techniques. Even traditional FEA is backed up by hand calculations.
It's not even really a black box. It has learned a subset of Abaqus's functionality, so its scope is known.
It doesn't appear to be stress testing anything; it's [edit: removed "calculating"] estimating where a structure or material is stressed. Stress testing is a physical process, and the article gives no indication that stress testing could or would be replaced by this.
Edit: further elaboration from the article: "product designers, for example, could test the viability of their ideas before passing the project along to an engineering team".
This would seem to clearly indicate that the intended application is faster iteration in design, not a replacement for a rigorous engineering process that would include appropriate testing.
True, it can be quite concerning. I know that PDEs have issues too, but they have served very well over the years and will continue to do so. I think there is no shortcut to a solid understanding of the principles at play.
That's not the application. ML-accelerated modeling allows for near-instantaneous iteration. It dramatically speeds up creative engineering and scientific work. You take the final product and run it through a classical simulator as a final QC - though in my experience (in a different domain, but similar principle) the ML model outputs tend to be smooth/continuous and about 98% accurate by MSE. Of course you need to carefully train your models to span the input space, but for finite element/finite difference modeling this is relatively straightforward.
>The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
This is pretty fresh tech, but industry is already using it. We are doing something roughly similar where I work. Instead of running compute-intensive finite element/finite difference simulations of physically dependent systems, a neural network (typically something structured like a transformer) is trained to output the results - up to 6 orders of magnitude faster in our applications.
This allows modeling-dependent science and engineering solutions to be iterated on in real time - you can see the results of your edits as you manipulate your models. And the results, at least in our applications, reach roughly 98% accuracy by MSE. It isn't surprising in hindsight - deep neural networks are universal function approximators, and finite-element modeling is as close to pure mathematics as you get in an industry setting. It feels like a perfect use case for deep neural nets.
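The surrogate-modeling pattern described above can be sketched in miniature: sample an expensive solver offline, train a small net on its outputs, then query the net interactively. Everything here is hypothetical - the "solver" is just a cheap analytic stand-in, where a real workflow would call an FE/FD code:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_solver(params):
    """Stand-in for an expensive FE/FD simulation (here: an analytic function)."""
    x, y = params
    return np.sin(3 * x) * np.cos(2 * y) + 0.5 * x * y

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))            # sample the design space
Y = np.array([slow_solver(p) for p in X])         # offline solver runs

# Train the surrogate once, offline...
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(X, Y)

# ...then iterate interactively: surrogate queries are vectorized and cheap.
queries = rng.uniform(-1, 1, size=(1000, 2))
pred = surrogate.predict(queries)
truth = np.array([slow_solver(p) for p in queries])
print("mean abs error:", np.mean(np.abs(pred - truth)))
```

Note the caveat from the comment above applies here too: the training samples must span the input space you intend to explore, since accuracy collapses outside it.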
That sounds like a hellish dystopia.
[0]: https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
[1]: https://arxiv.org/pdf/1610.02391.pdf
[2]: https://arxiv.org/pdf/2104.08696.pdf
Seeing the pictures in the article, the model produces very coarse results from very simple inputs.