
The Big Oops: Anatomy of a Thirty-Five-Year Mistake [video]

192 points | doruk101 | 7 months ago | youtube.com

91 comments


abetusk|7 months ago

I found this talk to be great. It goes through the history of OOP and how some of the ideas for the more modern ECS were embedded in the culture at the formation of OOP in the 1960s to 1980s but somehow weren't adopted.

It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.

One thing that really jumped out at me was his quote [0]:

> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]

I understand the context but this, in general, is abysmally bad advice. I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.

[0] https://www.youtube.com/watch?v=wo84LFzx5nI&t=8284s

dkbrk|7 months ago

> I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.

I don't agree. While starting with the simplest case and expanding out is a valid problem-solving technique, it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case. It's a bit paradoxical, but a problem that would be completely intractable if attacked directly can be trivial if approached with a sufficiently powerful abstraction. And our problem-solving abilities grow with our toolbox of ever more powerful and general abstractions.

Also, it's a general principle in engineering that the initial design decisions, the foundational assumptions underlying everything, are themselves the least expensive part of the process but have an outsized influence on the entire rest of the project. The civil engineer who, halfway through the construction of his bridge, discovers a flaw in his design is having a very bad day (and likely year). With software things are more flexible, so we can build our solution incrementally from a simpler case and swap bits out as our understanding of the problem changes; but even there, if we discover something wrong with our fundamental architectural decisions, with how we model the problem domain, we can't fix it just by rewriting some modules. That's something that can only be fixed by a complete rewrite, possibly even in a different language.

So while I don't agree with your absolute statement in general, I think it is especially wrong given the context of language design and system architecture. Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.

Mathnerd314|7 months ago

So, this is pretty difficult to test in a real-world environment, but I did a little LLM experiment. Two prompts, (A) "Implement a consensus algorithm for 3 nodes with 1 failure allowed." vs. (B) "Write a provably optimal distributed algorithm for Byzantine agreement in asynchronous networks with at least 1/3 malicious nodes". Prompt A generates a simple majority-vote approach and says "This code does not handle 'Byzantine' failures where nodes can act maliciously or send contradictory information." Prompt B generates "This is the simplified core consensus logic of the Practical Byzantine Fault Tolerance (PBFT) algorithm".

I would say, if you have to design a good consensus algorithm, PBFT is a much better starting point, and can indeed be scaled down. If you have to run something tomorrow, the majority-vote code probably runs as-is, but doesn't help you with the literature at all. It's essentially the iron triangle - good vs. cheap. In the talk the speaker was clearly aiming for quality above all else.
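For concreteness, prompt A's "simple majority-vote approach" presumably looks something like the sketch below (the function name and the crash-as-None convention are my own illustration, not the LLM's actual output):

```python
from collections import Counter

def majority_vote(values):
    """Crash-fault-tolerant 'consensus' for 3 nodes tolerating 1 failure:
    take the majority among the values that actually arrived. This does
    NOT handle Byzantine nodes that lie or send conflicting values."""
    received = [v for v in values if v is not None]  # None = crashed node
    if len(received) < 2:  # need a majority of the 3 nodes
        raise RuntimeError("no quorum")
    value, count = Counter(received).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority")
    return value

print(majority_vote(["a", "a", None]))  # one node crashed -> "a"
```

A Byzantine node can break this trivially by reporting different values to different peers, which is exactly the gap PBFT's prepare/commit phases close.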

hgs3|7 months ago

> It goes through the history of OOP

Unfortunately, the "history" omits prototype-based OO (Self, Io, Lua, etc.) which doesn't suffer from many of the "issues" cited by the speaker.

vanderZwan|7 months ago

> I understand the context but this, in general, is abysmally bad advice.

The context, for the record, is inventing good general software architectures (and by extension generalized programming paradigms) for everyone to use. I agree with you that it's bad advice in general, but in this context it absolutely makes sense to me. The hard problems are more likely to expose, up front, all the walls you'd bump into if you started from the oversimplified ones, so they are much better use cases for battle-testing ideas about what good architectures or programming paradigms are.

adamrezich|7 months ago

I know a two-and-a-half hour video is a hard sell for most people, but I found this talk to be absolutely fascinating. It's not yet another tired “let's all shit on OOP just for the sake of it”-type thing—instead, it's basically nothing but solid historical information (presented with evidence!) as to how “OOP”, as we now know it, came to be. The specific context in which these various decisions were made is something that nobody ever cares to teach, such that it's basically long-since forgotten today—yet here it is, in an easily-digestible format!

Jtsummers|7 months ago

Amusingly, an hour into the video he complains about information being hidden behind hours of video. It would be a better paper, but apparently he hasn't written or put one out there. Probably a 20-30 minute read instead of 2.5 hours (or 1.25 since I'm running it at double speed).

chapliboy|7 months ago

I thought it was very interesting about how Alan Kay and Bjarne Stroustrup may have been applying wisdom from their old fields of expertise and how that affected their philosophy.

There is an appeal to building complexity through emergence, where you design several small self-contained pieces that have rich interactions with each other, and through those rich interactions you can accomplish more complex things. It's how the universe seems to work. But I also think that the kinds of tools we have make designing things like this largely impossible. Emergence tends to result in things that we don't expect, and for precise computation and engineering, it feels like we are not close to accomplishing this.

So the idea that we need a sense of 'omniscience' for designing programs on individual systems feels like it is the right way to go.

sebastos|7 months ago

Another angle I was thinking about, re the need for omniscience: Physical systems seem compelled to play by these object oriented rules, where encapsulation is the norm, and information must be transmitted, and locality dominates. But if we are to try to emulate that ethos in our computer programs, one thing the OOP paradigm seems to gloss over is that you aren't allowed to _only_ write the 'atoms' of that universe - we also have to write the 'laws of physics' themselves (if you follow the analogy). And what is more global and all-touching than the laws of physics?

So if you look at it through that lens, the need for a little omniscience seems natural. The mistake was in thinking that the program was identified with the objects that the laws govern, when really you have to cover those AND the laws themselves.

Fellshard|7 months ago

The universe may work this way, but we're not God, and modes of computation that work like this will still inevitably be impossible for us to predict or comprehend. This may be interesting if you're trying to run simulations (remember the point about SIMULA?) but it's not something you could use to accomplish specific ends, I expect.

crabmusket|7 months ago

I had no idea Thief, one of my favourite games, was built with an ECS-like architecture. Two articles with more interesting details about Thief (I especially love its "temporal" CSG world model):

https://nothings.org/gamedev/thief_rendering.html

https://www.gamedeveloper.com/design/postmortem-i-thief-the-...

I skipped a fair chunk of the middle of this video as I really wanted to get to the Sketchpad discussion, which I found very valuable (starting around 1:10).

I think Casey was fairly balanced, and emphasized near the end of the talk that some of the things under the OOP umbrella aren't necessarily bad, just overused. For example, actors communicating with message passing could be a great way to model distributed systems. Just not, maybe, a game or editor. Along similar lines, I love this old post "reconstructing" OOP ideas with a much simpler take similar to what Casey advocates for:

https://gamedev.net/blogs/entry/2265481-oop-is-dead-long-liv...
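The core of that "simpler take" can be sketched in a few lines: components are plain data keyed by entity id, and systems are free functions that iterate over whichever entities have the components they need (a toy illustration of the ECS idea, not the linked post's actual code):

```python
# Components live in flat tables keyed by entity id - no class hierarchy.
positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)

def spawn(eid, pos, vel=None):
    # An entity is just an id; it "is" whatever components it has.
    positions[eid] = pos
    if vel is not None:
        velocities[eid] = vel

def movement_system(dt):
    # Only entities with BOTH components participate.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, (0.0, 0.0), (1.0, 0.0))  # moving entity
spawn(2, (5.0, 5.0))              # static scenery: position only
movement_system(1.0)
print(positions[1])  # (1.0, 0.0)
```

Adding a new behavior means adding a table and a function, not rearranging an inheritance tree.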

But I of course enjoyed him calling out the absolutely dire state of OOP education/tutorials. I satirized this on my own blog ages ago:

https://crabmusket.net/how-i-learned-oop/

In that post I referenced Sandi Metz as an antidote to awful OOP education. I may just have to include Casey as well.

SanJacobs|7 months ago

Waaait, but I thought OOP was carefully crafted to "scale with big teams", and that's why it works so... ahem... "well". Turns out it was just memetic spillover from the creators' previous work?

Jtsummers|7 months ago

And we absolutely needed 30-45 minutes to learn that that wasn't why it was created. The first part is a history of OOP languages to debunk something I'd never heard even claimed until I watched this video. The history was interesting, but also wrong in a few places. It was amusing to hear him talk about Arpanet being used in the 90s, though.

usefulcat|7 months ago

I'm quite interested in this talk. Haven't finished yet, just watched the first part last night.

But I gotta say, I find the graphical background (the blurry text around the edge of the screen that's constantly moving and changing) supremely annoying, not to mention completely unnecessary.

Dear presenters and conference producers: please, please don't do that.

eviks|7 months ago

Yes, visual noise is bad and an attention hijack. Zooming in helps a bit, though precise zooming isn't always available...

hmry|7 months ago

Wow, definitely much more than I expected from the title. Really enjoyed the surprise mini-talk about the origin of entity-component-system in the Q&A section as well.

cma|7 months ago

One thing he left out, I think, between 1960s Sutherland and 1990s Looking Glass, was column-oriented databases (70s and 80s).

That might have been important for the performance aspects that drove the resurgence in ECS, though I know he's focused more in this talk on how ECS also improves the structure for understanding and implementing complex systems: in the 70s and early 80s memory latency probably hadn't begun diverging from instruction rate to such an extreme degree, but in disks it was always a big issue.

Also would like to hear more about Thinglab and if it had some good stuff to it.

anonnon|7 months ago

For those unaware, Casey Muratori started a project called Handmade Hero in 2014 to build a complete game from scratch while livestreaming the entire process, with the goal of showing people not just how, but why, rolling your own engine (hence the "handmade" part) is better than relying on Unity, Unreal, or some other leaky abstraction. He even solicited pre-orders for the finished product, IIRC.

Ten years later, he has no game, only a rudimentary, tile-based dungeon-crawler engine, and reams of code he's written and re-written (as API sands shifted beneath his feet), and the project seems to be permanently on hiatus now. Thus, Casey inadvertently proved himself wrong, and the conventional wisdom (use an existing engine) correct.

As far as OOP goes, 45 years has shown that it makes developers highly productive, and ultimately, as the saying goes, "real heroes (handmade or otherwise) ship." Casey's company was founded 20 years ago, and he's never shipped a software product.

He complains often about software getting slower, which I agree with. Yet how many mainstays of Windows 95/98 desktop software were written in a significantly OO style using C++ with MFC?

jayshua|7 months ago

I think it's important to note a couple of things about this.

First, Casey offers refunds on the handmade website for anyone who purchased the pre-order. Second, the pre-orders were primarily purchased by people who wanted to get the in-progress source code of the project, not people who just wanted to get the finished game. I'm not aware of anyone who purchased the pre-order solely to get the finished game itself. (Though it's certainly possible that there were some people.) Whether that makes a difference is up to the reader I suppose, since the original versions of the site didn't say anything about how likely the project was to finish and did state that the pre-order was for both the source-code and the finished game.

Third, the ten-year timeline (I believe the live streams only spanned 8 years) should be taken with the note that this was live streaming for just one hour per day on weekdays, or for two hours two or three times a week later in the project. There's roughly 1000 hours of video content, not including the Q&As at the end of every video. The 1000 hours includes instructional content and whiteboard explanations in addition to the actual coding, which was done while explaining the code itself as it was written. (Also, he wrote literally everything from scratch, something which he stated multiple times probably doesn't make sense in a real project.)

Taking into account the non-coding content, and the slower rate of coding while explaining what is being written, I'd estimate somewhere between 2-4 months of actual (40hr/week) work was completed, which includes both a software and a hardware renderer. No idea how accurate that estimate is, but it's definitely far less than 10 years and doesn't seem very indicative that the coding style he was trying to teach is untenable for game projects. (To be clear, it might be untenable. I don't know. I just don't see how the results of the Handmade Hero project specifically are indicative either way.)

lproven|7 months ago

> As far as OOP goes, 45 years has shown that it makes developers highly productive

[[citation needed]]

As an external industry observer, I've seen many claims, but no actual direct evidence.

harrison_clarke|7 months ago

i think it's kinda funny, because Unity is very clearly inspired by some of casey's work

the big one is immediate mode UIs, which casey popularized back in 2005. Unity's editor uses it to this day, and if you do editor scripting, you'll be using it. for in-game UI, they switched to a component-based one, which also somewhat aligns with casey's opinions. and they shipped DOTS, which aligns even more with what he's saying
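for anyone unfamiliar, the immediate-mode idea can be sketched in a few lines: there are no retained widget objects - the UI is re-declared every frame, and each call both lays out a widget and reports interaction (a toy illustration; real IMGUI libraries also track ids, focus, and layout):

```python
class UI:
    """Tiny immediate-mode sketch: state lives in the caller's code,
    not in widget objects."""
    def __init__(self):
        self.mouse = None    # (x, y) of a click this frame, or None
        self.cursor_y = 0    # vertical layout cursor

    def begin_frame(self, click=None):
        self.mouse = click
        self.cursor_y = 0

    def button(self, label):
        # Lay out the button, then immediately answer "was it clicked?"
        top, height = self.cursor_y, 20
        self.cursor_y += height
        return (self.mouse is not None
                and top <= self.mouse[1] < top + height)

ui = UI()
ui.begin_frame(click=(5, 25))  # click lands inside the second button
if ui.button("Save"):
    print("save")
if ui.button("Load"):
    print("load")              # prints "load"
```

because the whole UI is rebuilt each frame, showing or hiding a widget is just an if statement, which is a big part of the appeal for editor tooling.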

i think his lack of shipping is mostly because he switched to teaching and has absolutely no pressure to ship, rather than his approach being bad

krapp|7 months ago

I can see the argument for using a custom engine if you have specific design goals that can't be met by existing engines, but that seems like an edge case. I think 99% of game concepts can probably be done in Unity, Godot, or Unreal.

Meanwhile you could probably surpass Handmade Hero with any off the shelf engine with a tutorial and a few hours' work, or even a project template from an asset store. The biggest problem I have with Handmade Hero is that because Casey is putting so much effort into the coding and architecture up front, the game itself isn't interesting. It's supposed to be a "AAA game" but it's little more than a tech demo.

And that's why you use off the shelf engines - they allow you to put effort into designing the game rather than reinventing the wheel.

MintPaw|7 months ago

Great video, I knew like 0.1% of those things before watching.

ozgrakkurt|7 months ago

Well researched, a lot of effort put into it and well presented.

Thought he was just producing filler content on youtube but this really shows how magical it can be to put real effort into something.

romaniv|7 months ago

This video contains many serious misrepresentations. For example, it claims that Alan Kay only started talking about message-passing in 2003 and that this was a kind of backpedaling due to the failures of the inheritance-based OOP model. That is a laughable claim. Kay had given detailed talks discussing the issues of OOP, dynamic composition and message-passing in the mid-80s. Some of those talks are on YouTube:

https://www.youtube.com/watch?v=QjJaFG63Hlo

Also, earlier versions of Smalltalk did not have inheritance. Kay talks about this in his 1993 article on the history of the language:

https://worrydream.com/EarlyHistoryOfSmalltalk/

Dismissing all of this as insignificant quibbles is ludicrous.

Mathnerd314|7 months ago

The dates are the dates of the sources, he says in the talk he wasn't going to try to infer the dates these ideas were invented. Also he barely talked about Alan Kay.

Byte2Pixel|7 months ago

What's laughable is not understanding how citations work. The year is not when message-passing was invented or used.

Sirjazzfeet|7 months ago

Great talk, it reminds me when Bob Ross reflected on his career as one big mistake.

swyx|7 months ago

i'm very curious how this conference came together. how does a conference in a small town in sweden, run by relative unknowns, gather big names like this? is it tied to some elite community that only insiders know?

igouy|7 months ago

There is a transcript:

    click Casey's links more

    click Show transcript

lproven|7 months ago

I think you mean the machine-generated one on Youtube? It took me a few minutes of searching from your slightly scant description.

Is there any way to extract just that text into a document I can read?

zozbot234|7 months ago

A "compile-time hierarchy of encapsulation that matches the domain model"? Don't we all call that Typestate these days? Change my mind: Typestate - and Generic Typestate even more so - is just good old OOP wearing a trenchcoat.
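For reference, a minimal typestate sketch: each state gets its own type, and operations valid only in one state exist only on that state's type (typestate is really a static-typing idea; this Python version, with hypothetical file types, only illustrates the shape):

```python
class ClosedFile:
    """State 'closed': you cannot read; you can only open."""
    def __init__(self, name):
        self.name = name
    def open(self):
        return OpenFile(self.name)   # state change = a value of a new type

class OpenFile:
    """State 'open': reading is allowed; re-opening is not even defined."""
    def __init__(self, name):
        self.name = name
    def read(self):
        return f"contents of {self.name}"
    def close(self):
        return ClosedFile(self.name)

f = ClosedFile("log.txt").open()
print(f.read())
# ClosedFile has no .read(), so "read before open" is a type error in a
# statically checked language - an interface enforcing its invariants,
# which is the comment's point about classic OOP.
```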