
AI and Machine Learning – The Basics

193 points | sblank | 3 years ago | steveblank.com

51 comments

[+] brutus1213|3 years ago|reply
Pretty high-level and broad (which I thought was good). Audience is clearly real-world decision makers as opposed to techies like the HN crowd. Some issues:

1) NLP performance is characterized as better than that of vision systems. I don't think that's true.

2) Some minor facts are not right. E.g. OpenAI did GPT-3, not Google

3) I expected the set of exemplars for technology to be better researched. Siri and Alexa are NOT recommendation engines. Similarly, lacked the best of class examples on many fronts. This was the biggest issue in the paper.

[+] sblank|3 years ago|reply
Help make the paper better. What are some better exemplars for technology?
[+] mark_l_watson|3 years ago|reply
That is really a good writeup, especially for providing background material for new people in the field (I have been in this field since 1982, experienced AI winters and boom times).

One thing missing in the article is the exponential growth rate of progress. The rate of progress is something I try to explain to non-tech friends. I love seeing difficult problems solved and then simply become new engineering tools to build with.

Transformer models like GPT-3 and Copilot have quickly transformed my workflow, and GPT-3 especially has expanded the range of things I can do.

[+] ZeroGravitas|3 years ago|reply
Is it helpful to anthropomorphise like this:

> An AI can see and understand what it sees. It can identify and detect an object or a feature in an image or video. It can even identify faces.

Why do we switch to this kind of language when we wouldn't for a standard algorithm?

You might say casually "the algorithm has a bug so it doesn't recognize barcodes with a 0 in them", or "this door opens when it recognizes a human", but it's still very clear that the algorithm isn't thinking. AI people, by contrast, seem to intentionally blur this line to get headlines and attention.

Another example:

> It’s taken decades but as of today, on its simplest implementations, machine learning applications can do some tasks better and/or faster than humans

You wouldn't say this about a pocket calculator, even though it's true. Why say it here?

[+] mdp2021|3 years ago|reply
> You wouldn't say [a pocket calculator can calculate better and faster than humans]

Would. ("Better" is legitimate for "more precisely".)

In fact, on inquiry, «An AI can see and understand what it sees. It can identify and detect» can be read, with the usual due interpretation of text, as a typical imprecision for «An AI can see and "understand" what it sees: it can identify and detect» (the full stop stands for a colon), where the rhetorical 'understand' is explained immediately after. Moreover, the term "understand" is not necessarily bound to Intelligence: it means "to have entered into a relation, to create a relation" (that 'under-' is a case of 'inter-'), hence "to approach" - so "understand" is generally legitimate for a progressive (and only progressive) definition of a concept. The use of "understand" for Intelligence must be an elision of "properly, duly understand", proportional to what is achievable by a human. Which also implies that a more limited entity can "understand" to the best of its nature.

That noted: there is little problem with the «barcode» example, since there «it's still very clear».

Surely, if _some_ «people seem to intentionally blur this line», we can censure them.

But surely again, the formulation «whereas AI people seem to intentionally blur this line» is just plain offensive, and the addition «to get headlines and attention» goes well beyond offensive. You have to add the quantifier, "whereas _some_ AI people" - otherwise typicality is implied (a qualification, beyond mere statistics).

[+] seibelj|3 years ago|reply
I would still say AI is supremely hyped, and its usage remains in niches. Somehow it still takes thousands of engineers to run Twitter. Not sure how recommendation-engine improvements are going to turn the world upside down. I've been promised that self-driving cars are coming in 6 months for 15 years now, in a perpetually shifting window.
[+] ChefboyOG|3 years ago|reply
In the course of a normal day, an average person might interact with a dozen different ML-powered apps just using their iPhone.

- Uber/Google Maps/Waze: ETA prediction

- Gmail: Smart Compose & spam filtering

- Instagram/Snapchat/Any camera app: Computer vision

- Siri/Google Assistant: Speech-to-text

- FaceID: Facial recognition

- Facebook/Netflix/All content aggregators: Recommendation engines

- Any banking app: Fraud detection

ML's use is extremely widespread at this point. The above list is just a tiny snapshot. "AI" is a term thrown around by marketers and hypemen all the time, no arguments there, but ML's usage is anything but niche these days.
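To make one of those entries concrete: classic spam filters were among the earliest of these deployments. A toy sketch of the naive Bayes idea behind them (the training data here is made up for illustration; real filters train on millions of labeled emails and use far more features):

```python
import math
from collections import Counter

# Toy labeled corpus (hypothetical examples, illustrative only).
spam = ["win cash now", "free cash prize", "claim your prize now"]
ham = ["meeting moved to noon", "lunch at noon tomorrow", "see you at the meeting"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, class_docs, all_docs):
    # log P(class) + sum of log P(word | class), with add-one smoothing
    # so unseen words don't zero out the whole product.
    lp = math.log(class_docs / all_docs)
    n = sum(counts.values())
    for w in msg.split():
        lp += math.log((counts[w] + 1) / (n + len(vocab)))
    return lp

def classify(msg):
    total = len(spam) + len(ham)
    s = log_prob(msg, spam_counts, len(spam), total)
    h = log_prob(msg, ham_counts, len(ham), total)
    return "spam" if s > h else "ham"

print(classify("free cash"))      # words seen mostly in spam dominate
print(classify("meeting at noon"))
```

The same "count evidence per class, pick the likeliest class" pattern also underlies simple fraud-detection baselines from the list above.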

[+] mattlondon|3 years ago|reply
I mostly agree with you - there is a lot of "something something machine learning!" going on without any concrete use-cases it enables that weren't possible before.

I think it will just be another tool available, and we'll slowly see it creep into places without making earth-shattering improvements. An example that comes to mind is the IDE auto-complete that GitHub and others are doing - better than what we had before, but not exactly revolutionary.

Apart from within academia and niche places like DeepMind, I get the feeling that ML roles and positions will become the new DBA roles of the past - ultimately super-dull jobs where you are just shuffling around storage space, migrating data, or provisioning new tables/models, without actually adding any business value or getting involved in any of the user-facing work.

It is already starting to get heavily commoditized with downloadable pretrained models etc. I see a lot of interns who want to come solve cancer with machine learning. I hope there are not too many youngsters betting their careers on making it big in machine learning (unless as researchers), because I genuinely feel like machine learning will just be some library/black box that 99.9% of the time is a downloadable pre-trained model you add, like you would if you needed to add OCR support to your product.

I am sure there will be some exciting new things that we'll see of course, but I think they'll be step-wise improvements, rather than huge leaps that open up entirely new worlds of opportunities. E.g. CNNs made image recognition much better (and anyone can now download a really powerful pre-trained model and beat state-of-the-art from just a few years prior etc), but it only made image recognition better - we could do it before, just not as well.

[+] bitsnbytes|3 years ago|reply
AI has definitely been bastardized and hyped by the marketing departments. Interestingly enough, within the last month IBM dropped the "AI" from one of their products.

The sales/PR guy either wasn't sure why or wasn't willing to disclose it. I wonder if it had to do with unions. That's total speculation, though, based on an experience with gov't and unions approximately 20 years ago, where the union was very concerned about computers and software I had written taking over union jobs.

[+] euphetar|3 years ago|reply
Rule of thumb: if you see a galaxy-brain style graphic next to anything "AI" or "Machine Learning" then the content is probably not worth your time.

This article seems to be an exception to the rule

[+] petilon|3 years ago|reply
So Artificial Intelligence is a superset of Machine Learning. What are some AI algorithms still in use that are not Machine Learning? Is there anything?
[+] paradite|3 years ago|reply
- Sufficiently complex "if-else" statements (hand-built decision trees)

- Expert system / Rule-based system (CLIPS https://clipsrules.net/)

- Knowledge-based system

- Rational agent planning that doesn't involve ML (search, heuristics, MDP, POMDP)

- NLP that doesn't involve ML (Markov model, etc)

If you Google AI taxonomy you can get a good understanding of it.
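To illustrate the expert-system entry: the core of a CLIPS-style rule-based system is forward chaining - repeatedly firing rules whose conditions match the current fact base until nothing new can be derived. A toy sketch (the facts and rules here are made-up examples, not from any real system):

```python
# Each rule: (set of condition facts, fact to conclude).
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts, rules):
    """Fire rules until a fixed point: no learning, just rule matching."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}, rules)
print(derived)  # now also contains "is_bird" and "is_penguin"
```

Note how the second rule can only fire after the first one has added "is_bird" - chained inference with no training step anywhere, which is exactly what distinguishes these systems from ML.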

[+] jrumbut|3 years ago|reply
When I took an AI course in 2009 it was largely about game playing algorithms that use graph traversal (possibly guided by heuristics).

Stuff like this: https://en.m.wikipedia.org/wiki/A*_search_algorithm

Such algorithms are still a viable approach in many situations (and I'm sure A* is used in production somewhere today).
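A minimal A* sketch for the curious, on a toy 2D grid with the Manhattan-distance heuristic (the grid here is a made-up example; real uses plug in domain-specific graphs and heuristics):

```python
import heapq

def astar(grid, start, goal):
    """Shortest path length on a grid of 0 (free) / 1 (wall), or None."""
    def h(p):  # Manhattan distance: admissible for 4-way movement
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # entries: (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter route
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the wall forces a detour
```

With h = 0 this degenerates into plain Dijkstra; the heuristic is what makes it "guided" graph traversal.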

[+] redytedy|3 years ago|reply
Other replies are missing an explicit call-out to Reinforcement Learning. You can USE ML for RL, but the field itself is considered separate from ML and under AI in general.
[+] watersb|3 years ago|reply
Production-rule systems can do inference that isn't from unsupervised neural net training. It's just matching up the rules.

Probably not considered AI these days.

[+] isolli|3 years ago|reply
Optical character recognition, applied to scanning documents or automatically reading license plates and postal addresses.
[+] optimalsolver|3 years ago|reply
Symbolic regression/program synthesis/genetic programming.
[+] RandyRanderson|3 years ago|reply
I went to this site and all I saw was a Blank page.
[+] mdp2021|3 years ago|reply
You probably meant, a bit obliquely, to greet Mr. Blank (HN ID sblank) and subtly thank him for compiling that document and for being so kind as to immediately think of this community and submit the information.

https://steveblank.com/about/

We share in that gratitude.

[+] sn41|3 years ago|reply
Must be from the school that believes that a "Tabula Rasa" is sufficient as the basics.
[+] mywaifuismeta|3 years ago|reply
Oh, THAT Steve Blank? Not someone I expected to ever write about AI!
[+] kamroot|3 years ago|reply
Disclaimer: I have started reading this but haven't finished it yet. That said, it looks thorough, well written, and well worth my time to finish. A bit of feedback to the OP about the page, though - the left and right content areas introduce clutter, IMHO, and make it hard to remain focused. But thank you for putting together this resource.
[+] hoseja|3 years ago|reply
Frankly, I was turned off after the first couple of paragraphs because it looked like some sort of government presentation for boomer generals. I'll check it out further, though.