2D games had so many tricks that they often looked better than their 3D replacements for quite some time. If you could do every pixel by hand for a few fixed views, you often ended up with more detail than the low polygon counts available early on could manage.
Games in general suffer from the fact that more technology does not necessarily lead to better visual - or narrative - outcomes. There was a period when every game engine producer was very excited about their engine's ability to render hair, and I don't know of many games where the quality of the hair rendering has been a factor in the graphics. "More realism" runs up against the hard truth that these games are being projected on a 2D screen and can't be realistic. And even if they could be, there still needs to be a justification for why that is better - it isn't self-evident that more realistic graphics are a better outcome. The real world is pretty boring; that's why people are playing games.
Hand-painted/drawn backgrounds were unfairly good back in the day. Something like Planescape: Torment looks better than its age would suggest because there's really very little to render in real time. Conversely, something comparable like SimGolf looks much worse despite coming out later, because it couldn't leverage anything pre-rendered.
Playing through Final Fantasy VII as a kid (albeit emulated) was a great introduction to that dichotomy. The game mixes both 2D and 3D art styles to varying degrees of success, but there's definitely a reason why the game's art direction is so iconic. Looking back on it, all I see is a watercolor, hand-painted haze (as opposed to games on the same system that haven't aged as well, like Metal Gear Solid).
One of the worst examples of a 2D-to-3D transition is Europa Universalis II to III. The 3D map got messy and hard to read, and it lagged a lot even on fast computers.
I tried the D2 open beta, and the monsters definitely stood out way less. For 3D to be as clear, each monster would need a thick 2D border.
On the other hand, the main thing I notice in the first Diablo 2 shots (having had my attention drawn to the rendering -- I'm not sure what the article means by "pay attention, how the pales (?) don't cover the same floor-pixels all the time") is how the torch appears to be casting shadows perpendicular to itself. And how as you walk by the torch, the direction of your shadow doesn't change at all.
"Fake perspective" was problematic in early flight simulators as well, where texture mapping was used. The classic example is a long runway.
If the runway was a long polygon and you had a long texture map (with a dashed centerline, for example), the standard code to map the bitmap onto the projected runway polygon would not in fact apply perspective to the bitmap: the intervals between the dashes would be spaced at a fixed interval on screen rather than foreshortened with distance.
The solution was to break up the runway into a whole bunch of shorter polygons — each with a bitmap of perhaps one stripe/dash.
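To make the difference concrete, here's a small sketch (my own illustration, not code from any actual flight simulator) comparing screen-space (affine) interpolation of a texture coordinate with perspective-correct interpolation, for a runway receding from depth 1 to depth 10:

```python
def affine_u(t, u0, u1):
    # Interpolate the texture coordinate linearly in screen space -
    # this is what cheap texture mappers did, and why the dashes came
    # out evenly spaced instead of foreshortened.
    return u0 + t * (u1 - u0)

def perspective_u(t, u0, u1, z0, z1):
    # Perspective-correct: interpolate u/z and 1/z linearly across the
    # screen, then divide. Splitting the runway into many short
    # polygons is a piecewise-affine approximation of exactly this.
    inv_z = (1 - t) / z0 + t / z1
    u_over_z = (1 - t) * (u0 / z0) + t * (u1 / z1)
    return u_over_z / inv_z

# Runway stretching from depth 1 (near) to depth 10 (far); t is the
# fraction of the way up the on-screen runway image.
for t in (0.25, 0.5, 0.75):
    print(f"t={t}: affine={affine_u(t, 0, 1):.3f}  "
          f"correct={perspective_u(t, 0, 1, 1, 10):.3f}")
```

Halfway up the on-screen runway, the affine mapper is halfway through the texture, while the correct answer is only about a tenth of the way in: most of the texture should be squeezed into the distant rows.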
Nowadays, I make a distinction between maintainable programs and hackable ones, the latter of course being, IMO, way better. What I mean by that is essentially that the ratio of by-design features to code complexity is considerably higher in the hackable program than in the maintainable one (but there is no method for writing a hackable program; you either have the sensibility for it or you don't, from my own observation - I'll be happy to discuss the nuances of that statement).
I don't know about today, but Blizzard used to write its games the hackable way - which is both a feat and rare. Take WoW for example: do you know how the 3D engine basically works? It's so simple and so powerful, you are not going to believe it. WoW is made of levels that are connected by portals. That's it. A level will have portals that get you to other levels, which in turn have portals leading to still other levels. For example, when you are "outdoor" (in quotes because the engine doesn't actually know about outdoors; it knows levels!), you may find a portal that leads you into a tavern. Once you are in the tavern, the program can ignore the rest of the world and focus all the computer's resources on the tavern, just like that, by design. So you'll have taverns as detailed as anything you can see in the exteriors. Better, you can have landscape in each and every level. (Note that when the player is in the air, say on a gryphon, some portals are ignored.)
This is both incredibly simple and powerful. Remember the game ran on early-2000s PCs. Imagine what you can do with that. Caves? Done. Houses that lead into catacombs? Done. Incredibly detailed cities? Add walls and portals, done. Texture streaming? Well, d-o-n-e. Buildings that lead to indoor landscapes? Done.
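The portal idea described above can be sketched in a few lines. Everything here - the names, the `Level` structure, the flying-mount portal skip - is my own hypothetical illustration, not Blizzard's actual engine code:

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    name: str
    portals: list = field(default_factory=list)  # (portal_name, Level) pairs

def visible_levels(current, depth=1, skip=()):
    """Levels the renderer must consider: the current one plus anything
    reachable through a portal, up to `depth` hops. Every other level in
    the world is ignored outright - that's the whole trick."""
    seen = {current.name}
    frontier = [current]
    for _ in range(depth):
        nxt = []
        for lvl in frontier:
            for pname, other in lvl.portals:
                if pname in skip or other.name in seen:
                    continue  # e.g. skip indoor portals while on a gryphon
                seen.add(other.name)
                nxt.append(other)
        frontier = nxt
    return seen

world = Level("elwynn")
tavern = Level("tavern")
cellar = Level("cellar")
world.portals.append(("tavern_door", tavern))
tavern.portals.append(("cellar_stairs", cellar))
tavern.portals.append(("tavern_door", world))

print(visible_levels(tavern))  # contains tavern, elwynn, cellar
print(visible_levels(world, skip={"tavern_door"}))  # just elwynn, as when flying
```

The point is that the budgeting falls out of the data: stand in the cellar and the renderer never even looks at the outdoor level, so every level can be as detailed as the hardware allows.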
I wouldn't make a game engine differently today.
As a comparison, I'm working on one of the most played games in the world at the moment, and it is a maintainable program. You have no idea what a pain it is, by comparison, just to add simple features. Oh yeah, the code is readable. You can fix bugs. But features? Just forget it; it takes a shitload of money to get one done.
If I may ask, I would love to hear your take on the fuzzy, squishy reasons why hackable turns into maintainable from a management perspective.
I think I understand what you mean about the two. I know embarrassingly little about electronics despite being interested in it for a long time, and have boggled at how much a circuit schematic can differ from a PCB layout. On a schematic you can just go "and here are 48 data lines and we'll just draw some dots to illustrate that they yeet all the way over there," but on a PCB you not only have to route them (possibly across multiple PCB layers), but maybe some of the lines have to have squiggly messes in them so they're all exactly the same length and the electrical impulses arrive at the correct nanosecond. The good electronic engineers are the ones who've learned to fluently translate between schematic and PCB layout in their heads, in exactly the same way someone might fluently translate between languages to the point that they forget which language they heard something in; their memory just encodes the meaning directly because the neural routing is that deep/strong/integrated.
A good programmer has learned to understand/accept/integrate(/resign themselves to) the fact that the "focal point" of almost every program is in the programmer's head, not on the screen, and that whatever's onscreen is instead ultimately just a giant pile of little pinball-paddle nudges that hopefully prompt the reader to go "OH" and have the program structure "pop" into focus. Once you have that, you're good to go: that distinct mental model is detached from this class structure or that naming convention or whatever pattern, and thus not bound by the mnemonic/interpretative/structural limitations of the proverbial puzzle pieces that constitute the methodology; it's the superset of all of that and all the machinery that hasn't been set into stone yet.
It is so, so weird how programming slams right up against the edge of design/engineering/bigger-picture-greater-than-the-sum-of-the-parts vs Management™.
I remember the D2 perspective mode was a neat trick, but it made the game kind of fuzzy and broke core graphics features, such as my character turning green when poisoned.
The green was a palette trick. The Direct3D mode used high-color rendering, so that trick wasn't available there. It did draw a green light halo around the character's feet instead, though.
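For the curious, a palette trick works roughly like this (a hypothetical sketch of the general technique, not Diablo II's actual code): in 256-color mode each sprite pixel stores an index into a color table, so tinting the whole character green is just a matter of swapping table entries, with zero per-pixel work.

```python
# Each sprite pixel stores an index, never a color.
sprite = [17, 18, 17, 18]

# Palettes map index -> RGB. Swapping the table recolors every pixel
# that uses those indices at once. (Values here are made up.)
normal_palette = {17: (180, 140, 100), 18: (140, 100, 70)}
poison_palette = {17: (100, 180, 100), 18: (70, 140, 70)}

def render(indexed_pixels, palette):
    # In high-color modes, pixels store RGB directly, so there is no
    # table to swap - which is why the Direct3D path lost the effect.
    return [palette[i] for i in indexed_pixels]

normal = render(sprite, normal_palette)
poisoned = render(sprite, poison_palette)  # same sprite data, new colors
```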
"Moon Patrol" seemed like such an entertaining game at the time, but it grew stale very quickly. The parallax effect became really popular on home consoles like the NES just a few years later.
This article is all about rendering, but gameplay is what makes "Don't Starve" stand out, IMO.
I really love the idea of Don’t Starve, but I felt like I had to read a guide to get better at it, so I lost interest. I adore the artwork, especially.
At the same time, the main game loop of Don’t Starve seems like you have to memorize a bunch of survival tips and constantly read the wiki to survive, or you’re doomed after the first week for sure, and probably before.
Ah yes, such great tricks to fool the eye. Like programming tricks to fit into small memory footprints, I suspect the "how" of these things will get lost over time.
Contemporary with these game developments was an excellent series called "Graphics Gems" which highlighted different ways to achieve various cool effects. Back in those days I felt a real sense of accomplishment when I got something to both work and "look right" (your eye is a very harsh critic!), and its a bit sad for me that you just call the right Unity of Vulcan API and voila, effect you were looking for on your scene appears.
> its a bit sad for me that you just call the right Unity of Vulcan API and voila, effect you were looking for on your scene appears.
You are free to not call those APIs. You could constrain yourself. I did this and I am having way more fun than if I tried to use the full capabilities of a modern GPU+software stack.
> you just call the right Unity of Vulcan API and voila, effect you were looking for on your scene appears.
Nitpick:
Vulkan (with a K) is a low-level graphics API, operating on a level that's comparable to OpenGL or DirectX, or perhaps even a bit lower. It's not going to implement any kind of interesting, nontrivial graphical effects for you -- doing that is going to take work, just like it would with older graphics APIs.
With a higher-level framework like Unity, on the other hand... that's a fair comment.
Yeah it does feel like this kind of clever graphics trickery is becoming a lost artform. Of course, niche circles can continue it. Retro console/PC programming circles, demoscene, etc. But I'm not sure how long those circles will last once the generations that grew up with that hardware die out.
Fake 3D racing games did it worse, as you just had a trig-shifted road and some background scrolling right and left. When I played Road Rash 3D on the PSX, the change was like night and day.
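The "trig shifting road" trick can be sketched like this (my own toy version, not code from any real racing game): draw the road one scanline at a time, widening it toward the bottom of the screen and shifting its center with a sine curve to fake a bend, with no actual 3D anywhere.

```python
import math

def road_scanlines(screen_h=8, screen_w=40, curve=6.0):
    """Return (left, right) road edges per scanline, horizon at the top."""
    lines = []
    for y in range(screen_h):          # y=0 horizon, y=screen_h-1 bottom
        depth = 1.0 - y / screen_h     # higher rows are "farther away"
        # Road gets wider as it nears the viewer...
        width = int((1.0 - depth) * screen_w * 0.8) + 2
        # ...and its center sways with a sine to suggest a curve.
        center = screen_w // 2 + int(math.sin(depth * math.pi) * curve)
        lines.append((center - width // 2, center + width // 2))
    return lines

# Crude ASCII rendering of the fake road.
for left, right in road_scanlines():
    print(" " * left + "#" * (right - left))
```

Scrolling the background sideways in step with the sine term completes the illusion; nothing is ever projected, which is why switching to true 3D (as on the PSX) felt like night and day.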
Although some of them did it really well, like Lotus Turbo Challenge I and II on the Mega Drive.
I love you mentioning Moon Patrol. It was one of my first games back in the day on the Atari 1040ST. Spent hours in front of it, just me, a friend, and a joystick...
https://simonschreibt.de/gat/dont-starve-diablo-parallax-7/
Relevant thread https://news.ycombinator.com/item?id=13301280 (but from 2017, can't find a 2021 discussion)