item 2835383

Games company claims their graphics are 100,000x better

385 points | trog | 14 years ago | ausgamers.com

139 comments

[+] coffeemug|14 years ago|reply
I think what they're doing is great, but I see two problems with their presentation. First, computer rendering techniques are extremely well understood and well researched. We've picked the low-hanging fruit, much of the high-hanging fruit, and everything in between. There is no "groundbreaking new technology" to be invented. They're converting polygons into voxels (although each voxel is probably a sphere, for cheaper computation) and using software ray tracing to render in real time. Since ray tracing is trivially parallelizable, the multicore hardware is just about there now: a 12-core machine will deliver roughly 20 FPS. The reason they can get away with an incredible amount of detail is that the cost of ray tracing diffuse objects is fairly independent of the number of visible polygons in the scene.
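The "trivially parallelizable" point is easy to see in code: every pixel's ray is computed independently of every other pixel's, so the render loop splits across cores with no synchronisation at all. A minimal single-sphere sketch (purely illustrative; the sphere position, camera, and light direction are assumptions, not anything from the demo):

```python
import math

def trace_pixel(x, y, width, height):
    """Trace one camera ray through pixel (x, y) at a unit sphere
    centred at (0, 0, -3); return a diffuse shade in [0, 1]."""
    # Pinhole camera at the origin looking down -z.
    u = (x + 0.5) / width * 2 - 1
    v = (y + 0.5) / height * 2 - 1
    norm = math.sqrt(u * u + v * v + 1)
    dx, dy, dz = u / norm, v / norm, -1 / norm

    # Ray-sphere intersection (centre (0,0,-3), radius 1): with the ray
    # origin at (0,0,0) the quadratic is t^2 + b*t + c = 0 where
    # b = 6*dz and c = |centre|^2 - r^2 = 8.
    b, c = 6 * dz, 8.0
    disc = b * b - 4 * c
    if disc < 0:
        return 0.0                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2       # nearest hit
    ny = t * dy                          # y-component of the unit normal
    return max(0.0, ny)                  # Lambert term, light from above

def render(width, height):
    # Each call below is independent of the others, which is why
    # ray tracing distributes across any number of cores so easily.
    return [trace_pixel(x, y, width, height)
            for y in range(height) for x in range(width)]

image = render(16, 16)
```

With a process pool, the same comprehension splits into per-row jobs unchanged, which is the whole substance of the parallelism argument.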

The second problem is that a 10^4x improvement in level of detail does not mean a 10^4x more aesthetically pleasing image (or, in fact, a more aesthetically pleasing image at all). Ray tracing gets very expensive the moment you start adding multiple lights, specular materials, partially translucent materials, etc. It is very, very difficult to do that in real time even with standard geometry, let alone with 10^4x more polygons. This is why their level doesn't look nearly as good as modern games despite the higher polygon count (compare it to the Unreal demo: http://www.youtube.com/watch?v=ttx959sUORY). They only use diffuse lighting and a few lights. In terms of aesthetic appeal of a rendered image, lighting and textures are everything.

Furthermore, one of the biggest influences on how aesthetically pleasing a rendered image looks is global illumination. That is also extremely difficult to do in real time with ray tracing, but it is possible on GPU hardware with tricks. The trouble is, those tricks look much better than raw polygons do.

Again, I love what they're doing. Real-time ray-tracing is without a doubt the future of graphics, but it would be nice if they were a little less sensational about the technology, and more open about the limitations and open issues.

[+] cgart|14 years ago|reply
This video is not that fresh; I saw it last year, and the way this guy speaks hasn't changed since then (he talks like a salesman) ;)

Their technique is based on point-cloud rendering. My supervisor proposed back in 2004 how this could be done on a standard PC; see his PhD thesis [1].

The technique is usable for static objects as well as dynamic ones; however, dynamic scenes require additional acceleration structures (besides the octrees) that can change along with the deformation of the object. To clarify several comments made here:

- this is a rasterization technique

- they use acceleration data structures like octrees, kd-trees, BVHs, ... (they don't say exactly which)

- the "aesthetics" of the graphics actually depend only on the artists, not on the technology itself, so that's not a good point

[1] M. Wand: Point-Based Multi-Resolution Rendering. PhD Thesis, Wilhelm Schickard Institute for Computer Science, Graphical-Interactive Systems (WSI/GRIS), University of Tübingen, 2004.

[+] tintin|14 years ago|reply
And there are no moving objects in this demo. Voxel animations are much harder than poly animations.

Then there is memory. The elephant looks great, no doubt about that, but I think you will need a lot of space for it. On a PC this might work; I'm not sure it can be used on consoles.

[+] speleding|14 years ago|reply
> They're converting polygons into voxels

Are you sure about that? He mentions atoms, but then the video also mentions that they're using procedurally generated graphics, which is something entirely different. Also, voxels would not work on the scale they are demonstrating, you'd need way too much memory.
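The memory objection is easy to make concrete. A back-of-envelope estimate (the per-atom size, island dimensions, and shell depth are assumptions; only the "64 atoms per cubic millimetre" density figure comes from the video):

```python
atoms_per_mm3 = 64                 # density claimed in the video
bytes_per_atom = 4                 # packed colour + normal (assumed)
island_m2 = 1000 * 1000            # a 1 km x 1 km island (assumed)
shell_mm = 100                     # store only a 10 cm surface shell

mm3 = island_m2 * 1000 * 1000 * shell_mm      # 10^6 mm^2 per m^2
total_bytes = mm3 * atoms_per_mm3 * bytes_per_atom
print(total_bytes / 2**40, "TiB")             # tens of thousands of TiB
```

Even this conservative surface shell comes out around 23,000 TiB stored naively, which is why heavy instancing and compression (and hence all the repeated objects) are unavoidable.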

[+] dkersten|14 years ago|reply
> In terms of aesthetic appeal of a rendered image, lighting and textures are everything.

A thousand times this!

[+] prawn|14 years ago|reply
John Carmack's response:

"Re Euclideon, no chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."

https://twitter.com/#!/ID_AA_Carmack/statuses/98127398683422...

[+] jarin|14 years ago|reply
I trust Carmack's response over pretty much anyone's in matters of rendering engines. The guy is a god of coding them.
[+] 6ren|14 years ago|reply
His latest tweet on it is interesting:

> @Foggen insufficient information to say if it is tracing or splatting.

They state their method is based on a well-known technique used in engineering and medical visualization, and splatting seems to be used there. I'm still not 100% clear on what splatting is, but there's a bunch of papers applying that term to voxels, e.g. http://graphics.cs.cmu.edu/projects/objewa/
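For what it's worth, the core of splatting is simple to sketch: project each point into the image and let a z-buffer keep the nearest point per pixel, just as rasterization does for triangle fragments. A minimal version (one-pixel splats and an assumed pinhole projection; real splatters draw filtered disks):

```python
def splat(points, width, height, focal=1.0):
    """points: (x, y, z, colour) tuples in camera space, visible when z < 0.
    Returns a per-pixel colour grid (None = background)."""
    zbuf = [[float("inf")] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, colour in points:
        if z >= 0:
            continue                           # behind the camera
        # Perspective projection onto an image plane at distance `focal`.
        sx = int((x * focal / -z + 1) * width / 2)
        sy = int((y * focal / -z + 1) * height / 2)
        if 0 <= sx < width and 0 <= sy < height and -z < zbuf[sy][sx]:
            zbuf[sy][sx] = -z                  # nearer point wins the pixel
            image[sy][sx] = colour
    return image

# Two points project to the same pixel; the nearer (red) one wins.
img = splat([(0.0, 0.0, -5.0, "blue"), (0.0, 0.0, -2.0, "red")], 8, 8)
```

Carmack's tracing-versus-splatting question is then whether they walk rays through the scene per pixel, or walk the points and scatter them to pixels as above.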

[+] Tomek_|14 years ago|reply
I remember reading years ago that id was considering using voxel graphics for Quake 2, and Carmack had even started working on it, but they abandoned the idea when they saw the industry heading in the "more polygons!" direction (for context: this was when 3dfx released the Voodoo2, nVidia was starting to gain popularity with the Riva TNT2, and the first Unreal was on the horizon).
[+] chipsy|14 years ago|reply
By "production issues" I would think "pipeline" first of all. It doesn't matter that it's possible to convert a high-res mesh to an optimized voxel format if the conversion takes hours; that is too long an iteration time for a single asset.
[+] saulrh|14 years ago|reply
I'll say the same thing now as I said last time these guys released a video: I'll believe it when I see them make a single blade of grass move, or when they place a single dynamic light source and cast a single dynamic shadow. Until then, this technology is awesome, but more or less useless.
[+] nosignal|14 years ago|reply
Even if it's absolutely impossible to animate, this technology wouldn't be "useless". I don't see why it wouldn't be possible to have the static elements of a level (buildings, tree trunks, ground, debris, etc.) rendered using this tech, with polygons used for everything else. It used to be this way in the bad old days (Doom, etc.): polygons for level structure and sprites for animation. This would at least free up the "polygon budget" to improve the features that couldn't be voxel-based.

If anything you could create a pretty interesting art style with hyper-realistic backgrounds and cel-shaded characters, or similar. Even as a replacement for pre-rendered background scenery or out-of-level elements it would be useful.

And (thinking of the Game of Thrones titles story) whatever mojo they've got must surely be able to be applied to other industries, eg. CGI - if Weta or Pixar could improve the poly count of their static backdrops without just buying more rendering farms, that's gotta count for something.

[+] JeanPierre|14 years ago|reply
What is somewhat disappointing is that they don't say whether they have managed to solve these issues, which were clearly raised a year ago. Another major issue is RAM usage, as point-cloud data will require a lot of memory.

Their claim of being 100,000 times better than current technology is also "frustrating", as it is repeated as if you had, for some strange reason, forgotten it in the last 30 seconds. It is also a lie if they cannot use this in a game: I suspect it's not impossible to increase polygon counts to 10-100 times current levels - if not even more - when there are no animations or dynamic light sources around.

[+] palish|14 years ago|reply
Dynamic shadows work in exactly the same way as they do for standard triangle rasterization.

Each light has a shadow map (which can be thought of as "the depth of the scene, from the point of view of the light").

During the final rasterization pass, the shadow map is sampled. If the sampled depth is less than the current fragment's depth as seen from the light, something sits between the fragment and the light, so the fragment is in shadow; otherwise it is lit.

So, nothing has fundamentally changed here. When a particle (voxel) is rasterized, it outputs a depth, just like a triangle outputs a depth.

tl;dr: Dynamic shadows work fine.
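The comparison described above fits in a few lines. A sketch of the standard test (not any particular engine's code; the dict-as-texture lookup and the bias value are simplifications):

```python
def in_shadow(shadow_map, light_uv, fragment_depth_from_light, bias=1e-3):
    """shadow_map maps (u, v) texels to the nearest depth the light sees
    there; a fragment is shadowed when something sits in front of it."""
    nearest = shadow_map.get(light_uv, float("inf"))
    # The small bias avoids "shadow acne" from depth quantisation.
    return fragment_depth_from_light > nearest + bias

shadow_map = {(3, 4): 2.0}                    # light sees depth 2.0 here
behind = in_shadow(shadow_map, (3, 4), 5.0)   # occluder in front: shadowed
surface = in_shadow(shadow_map, (3, 4), 2.0)  # it IS the occluder: lit
```

Nothing in the test cares where the depths came from, which is the point: a voxel splatter only has to write the same depths a triangle rasterizer would.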

[+] hristov|14 years ago|reply
Also, I'll believe it when they actually make a scene with different things in it. Having millions of objects in your scene is not that hard if they are all copies of the same object. The old demonstration showed a bunch of repeated copies of a single object; this one shows a lot of copies as well, though less obviously.
[+] overshard|14 years ago|reply
It's all fun and games until you introduce animation... oh wait, that is fun and games... I just confused myself.
[+] extension|14 years ago|reply
Useless huh? Static geometry and lighting was the state of the art for a while, starting with Quake. It certainly puts some constraints on game mechanics but it's significantly better than useless.
[+] gavanwoolery|14 years ago|reply
As someone who has worked with GPUs and software renderers for over a decade:

I am pretty sure that their tech depends on a few types of repeatable data, which they are able to cache effectively based on rotation -- in other words, they have come up with an efficient way of querying the front-facing voxels in a large data set given the resulting view matrix. Where this falls flat is if the data is not procedural or not diverse: as you can see in the video, the same data is copied over and over. However you compress it, such detail is not free, and I am guessing the technique depends on a good deal of memory/storage to work properly.

I am not so worried about animation or dynamic lights or textures as everyone else is. If they can render it to a buffer and get the normals/depth/UV coordinates, the rest of the rendering can be done in screen-space, including SSAO, deferred lighting, and similar rasterization tricks. Animation can also be rendered on top of the scene, and intersected with the former depth buffer. The only thing I am worried about is the size of the data set and ability to create more diverse landscapes.
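The screen-space argument is concrete: once any renderer, polygon or voxel, has filled a G-buffer of per-pixel normals, a deferred light neither knows nor cares how the geometry was produced. A minimal Lambert pass over such a buffer (the buffer layout and light convention are assumptions for illustration):

```python
def deferred_diffuse(gbuffer_normals, light_dir):
    """gbuffer_normals: rows of per-pixel unit normals (None = background).
    Returns per-pixel diffuse intensity from one directional light."""
    lx, ly, lz = light_dir
    shaded = []
    for row in gbuffer_normals:
        out = []
        for n in row:
            if n is None:
                out.append(0.0)                  # background stays dark
            else:
                nx, ny, nz = n
                out.append(max(0.0, nx * lx + ny * ly + nz * lz))
        shaded.append(out)
    return shaded

# A 1x2 G-buffer: one pixel faces the light, the other faces away.
img = deferred_diffuse([[(0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]],
                       light_dir=(0.0, 1.0, 0.0))
```

SSAO, shadow tests, and additional lights all follow the same pattern: per-pixel passes over the G-buffer, independent of the geometry stage.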

[+] gavanwoolery|14 years ago|reply
Also, for those interested in seeing a LIVE demo of very similar technology, there are many examples, but here is one:

http://voxels.blogspot.com

It is not likely to be the exact same technique, but I am guessing they are using similar methods.

[+] greendestiny|14 years ago|reply
It's no accident that there is that much repetition in the models. It's also no accident that they are all nicely tiled in power-of-two axis-aligned bounding boxes. Clearly these things take up enormous amounts of memory and need to live in some big octree-like hierarchy - so while they can instantiate these pretty impressive leaf nodes, they can't do things like place them on uneven ground.

So much work left to do.

[+] vrode|14 years ago|reply
I'm sorry, but as much as I respect people on both Reddit and Hacker News, I wonder where all the enthusiasm comes from, when:

* demos show nothing new from a technological perspective

* the presenter sounds like a door-to-door salesperson

* as it seems to me, the only purpose of the demo is to generate hype, and somehow (I still don't understand how) they succeeded

Euclideon got financed by the Australian government.

I really hope the board took a critical approach and relied on at least /some/ technical expertise before granting these people A$2m. If they made this decision based on just a demo, I'm moving to Australia at once, where I will invent a technology you have never seen in your whole life before. Ever.

[+] Maxious|14 years ago|reply
The startup grant program was this one: http://www.commercialisationaustralia.gov.au/WhatWeOffer/Ear...

The terms of that program are that you have to match the government funding 1:1, so they must have (or must raise within 2 years) $2mil to put against the government's. They also have to pay it back "on success" (5% of revenue once that reaches 100k total) and are monitored closely for "fast failure" (they are expected to succeed and repay within 2 years, and they have to repay even if they fail after 5 years). Euclideon claims to have had a 2010 funding round, so maybe that's how they got into this program. It also says the program is not to be used to "Prove to the applicant that a certain technological problem can be overcome (R&D projects)", so they must have presented it as a viable product that just needs to be packaged up for sale.

What strikes me most is anything under a Commonwealth funding agreement has to have the words "Funded by Australian Government through the XYZ Program. An Australian Government Initiative" in all their promotional material. Yet the shining star of Commercialisation Australia's portfolio forgot. Ouch.

[+] mambodog|14 years ago|reply
Every time this comes up I like to point people to the Atomontage Engine[1], which takes (what I think is) a more pragmatic approach, combining voxel and polygon graphics. Voxels are used where appropriate (eg. landscape, destructible buildings) and polygons can be used for dynamic objects.

[1] http://www.youtube.com/watch?v=1sfWYUgxGBE

[+] yason|14 years ago|reply
Anytime there's too much bragging--or any bragging at all--before the actual product is finished, my bogus filter lights up. And it's really hard to turn it off later.
[+] AlfaWolph|14 years ago|reply
I thought it was interesting that they predicted some sort of forthcoming schism between 'real' scene objects and 'artificial' ones. At first I thought he was talking about characters and scenes residing on one side of the uncanny valley or the other, which I think is a valid thought. But he wasn't talking about that at all; he posited instead that objects will either be scanned in from the real world and placed in the game, or created by artists. I don't think this will be the case except in games that strive for realism. It will be more like Photoshop, where real scanned-in assets still require artists to perfect and stylize them for your game. Until high-resolution, tactile VR arrives, you're still running into 'Ceci n'est pas une pipe'.
[+] dkersten|14 years ago|reply
IMHO polygon count is not nearly as important as texture, lighting (and therefore also shadows) and animation quality.

Their polygon count is impressive and the object detail looks awesome, but, as other people commented here, I wonder how well it will hold up when dynamic objects, animation and dynamic lighting are added.

[+] chime|14 years ago|reply
I can understand that adding dynamic objects/shadows is difficult and their videos do not show them being capable of doing that yet. However, why can't 90% of the objects be rendered in the new static way like they do (statues, buildings, tree trunks) and dynamic objects be added on top of it using whatever method game devs use right now? I don't really care if the cactus is moving or not but I sure would like to see it in much higher detail.

Why can't we take the good from both and get better results?

[+] llambda|14 years ago|reply
This has been "announced" since 2010, all the while being only "a few months" from release. So far nothing has materialized: vaporware. Also note this is nothing but voxels plus an advanced search algorithm for resource conservation.
[+] jxcole|14 years ago|reply
The most unrealistic things in video games for me are faces. While increasing polygon counts and such will certainly help, I can't help but notice that faces will never cross the uncanny valley unless they can do something about the lighting.

Check out:

http://graphics.ucsd.edu/~henrik/images/subsurf.html

So, unless they can do all this AND ray trace it at the same time, it really won't make my game experience 100,000 times better.

[+] mhd|14 years ago|reply
Faces do get better, and I'd be pretty content with something like Mass Effect 1 (not exactly state of the art anymore), unless we're going for very emotional scenes in close-ups -- where most real-life actors fail, too.

One problem where in my opinion the advances aren't exactly exponential is body language. If I'm looking at the NPC talking to me, it's not just his facial motions, teeth etc. that I'm aware of, it's also the movements of his body - shoulders, arms, etc. This is still pretty bad. Most of the time they're just flailing around in a pretty uncoordinated manner. It gets better for highly "scripted" scenes, but the usual "shrug / fist pump / scratch yourself" animations are bad. They've gotten pretty good at the purely physical parts, i.e. what muscles and body parts have to move if something connected to it moves (skeletal and muscular animation), but there needs to be a better "body language AI".

[+] baddox|14 years ago|reply
These days, I don't think rendering technology is what's holding us back from having realistic faces. I think the art and animation is just really hard to get right.
[+] starwed|14 years ago|reply
Surely a lot of that is due, not to lighting/shadows/rendering, but to the incredible subtlety required of the animation?

HL2 has some pretty amazingly convincing faces, and that's obviously not because it has the best rendering engine.

[+] Havoc|14 years ago|reply
I love how they mock other game-dev companies for using skyboxes (they call them cardboard cut-out buildings in the distance @04m52) and then proceed to do exactly the same thing in their demo. Hell, they even managed to do it wrong (their sky texture isn't stretched and compressed appropriately to hide the fact that it's a cube @2m27).

Too many bold claims and too much deception in that vid. Colour me skeptical.

[+] goalieca|14 years ago|reply
Polygon engines play nicely with physics and animation. I'm trying to figure out how they could build a dynamic world based on particles.
[+] alexscheelmeyer|14 years ago|reply
Regarding the technology used: in this video (http://www.youtube.com/watch?v=JWujsO2V2IA) you can see lots of artifacts, and also talk of point-cloud data, so it is clearly not ray tracing but point-data rendering. All the repetition is due to memory constraints. The point data is probably preprocessed and compressed in numerous ways, which makes animation very difficult. But as others have mentioned, even as a last resort they should be able to use this technology to render terrain/background and then use polygons for moving/animated objects. This would probably also utilize current hardware better, as the polygon pipeline would not just sit there unused.
[+] hartror|14 years ago|reply
Still no dynamic objects in these videos. A limitation they're not discussing or working on?