I didn't see this mentioned in those articles, so I figured I'd mention it: if the level has a lot of geometry, you could also build a BSP tree, and then at runtime you'd just have an O(log n) lookup, plus an O(m log m) run of the same algorithm on the pruned subset of m vertices stored in the BSP node. Each BSP node would contain only the edges which can possibly be seen from a particular area. The tree might be kind of big, though.
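To make that concrete, here's a toy sketch of the lookup half (all names are mine, and the tree is hand-built rather than constructed from real geometry): each internal node splits the plane on a line, each leaf stores its pruned edge list, and locating the query point is an O(depth) walk.

```python
# Hypothetical sketch of the idea above: a 2D BSP whose leaves store only
# the edges potentially visible from that region. Point location is
# O(depth); the per-frame visibility pass then runs on the pruned edge set.

class Node:
    def __init__(self, a, b, c, front, back):
        # Splitting line ax + by = c; 'front' and 'back' are child subtrees.
        self.a, self.b, self.c = a, b, c
        self.front, self.back = front, back

class Leaf:
    def __init__(self, edges):
        self.edges = edges  # edges possibly visible from this region

def locate(node, x, y):
    """Descend to the leaf containing (x, y)."""
    while isinstance(node, Node):
        side = node.a * x + node.b * y - node.c
        node = node.front if side >= 0 else node.back
    return node

# Toy tree: split the plane along the line x = 5.
tree = Node(1, 0, 5,
            Leaf(edges=["right-wall"]),
            Leaf(edges=["left-wall"]))
```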
So it turns out you CAN earn money on non-copyrighted work. $40,000 is not a small amount for most people. Well-deserved for the developer, hats off for the idea, implementation, and Bitcoin as a payment option.
A bit OT but what are some of the most successful open source game titles out there? I'm writing a game server backend right now and didn't find very many projects where I could read good quality game code. (I come from the JS web app world where everything is out in the open so I'm super biased)
Nice article. However I think there's a better algorithm: the one actually used in Doom from Id Software.
That might work for a frustum-like directional light source (maybe), but what about an omnidirectional point light like in the example?
Honestly, I haven't even read it because I'm tired and I'll leave it for tomorrow, but this is the reason I check HN. I've gotten good enough at skimming through an article and its comments to know when a post is worth it, and this looks awesome. Not much of a point here, just happy to see quality material on HN (not a rant, just happy to find a great post).
Really cool tutorial. It also looks like the author has a pledge drive ending today, and he/she is really close to hitting their funding goal. Would be a shame to miss it while being so close, especially for something that looks so promising.
I'm kinda tired of the "cone-of-vision" mechanic and associated visual effect common in modern 2D games. In traditional 2D games, being able to see things that are not visible to the player's avatar is, IMO, a significant part of the charm. Not being restricted to a realistic viewpoint lets you become the larger-than-life hero of cartoons and cheesy action movies that does impossible things like leaping into a room full of bad guys and shooting them all before they have a chance to react, or "sensing" someone behind him and dodging the arrow at the last second. When you make a 2D game that tries to "realistically" limit the player's view, you're getting the downsides of both 2D and 3D games. It says to me, "I really wanted to make an FPS but I don't know how."
You seem to have an unnecessarily narrow view of games and how they're constructed.
Well, it's not like the unlit parts must be completely black.
This is interesting (although this sort of effect has never mystified me) - I can't help but think about how it can be optimised. The steps gone through here are things I never would have done, and the final solution is about where I would start...
I'm downvoting your comment because on technical posts the top-rated comment is damn near always of zero value and only serves to make the commenter sound smarter than the post author. The more in-depth and complicated the post, the more of those kinds of comments appear.
dllu|12 years ago
1) a tutorial on the same topic by Red Blob Games: http://www.redblobgames.com/articles/visibility/
2) a public domain Javascript visibility polygon library that runs in O(n log n), which is actually used in the Nothing to Hide game: https://code.google.com/p/visibility-polygon-js/
3) a blog post by author of said library discussing the algorithm: http://byronknoll.blogspot.ca/2013/05/on-log-n.html
Shooting a ray to every vertex and then naively computing intersections takes O(n^2), since there are n rays and each must be tested against up to O(n) segments. It is interesting to read and think about O(n log n) algorithms for computing the visibility polygon.
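The naive approach can be sketched roughly like this (the segment format and function names are my own): one ray per unique vertex, each ray intersected against every segment, which is where the O(n^2) comes from.

```python
import math

def ray_segment_t(ox, oy, dx, dy, ax, ay, bx, by):
    """Return the ray parameter t >= 0 where the ray hits segment AB, or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray is parallel to the segment
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    return t if t >= 0 and 0 <= u <= 1 else None

def visibility_points(light, segments):
    """Naive O(n^2) visibility: segments are (ax, ay, bx, by) tuples."""
    lx, ly = light
    hits = []
    # One ray toward every unique endpoint: n rays...
    for vx, vy in {p for seg in segments for p in (seg[:2], seg[2:])}:
        dx, dy = vx - lx, vy - ly
        # ...each tested against all n segments, hence O(n^2) overall.
        ts = [t for seg in segments
              if (t := ray_segment_t(lx, ly, dx, dy, *seg)) is not None]
        if ts:
            t = min(ts)  # nearest hit along this ray
            hits.append((lx + dx * t, ly + dy * t))
    # Sort hit points by angle to form the visibility polygon outline.
    hits.sort(key=lambda p: math.atan2(p[1] - ly, p[0] - lx))
    return hits
```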
a1k0n|12 years ago
tjaerv|12 years ago
https://back.nothingtohide.cc/
M4v3R|12 years ago
jianshen|12 years ago
TazeTSchnitzel|12 years ago
highCs|12 years ago
First the scene must be put into a 2D BSP (which requires slicing some edges). After that, the BSP must be traversed front-to-back, and that gives the edges in the right order. Just keep track of which angles are occluded while traversing. During the process, when an edge is processed, cast/create the shadow accordingly. The algorithm stops when the field of vision is fully occluded - and goes no farther.
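A rough sketch of that occlusion bookkeeping (my own toy version, not Doom's code: edges arrive already sorted front-to-back as angle spans rather than from a real BSP, and wraparound at 360 degrees is ignored):

```python
def process_front_to_back(edge_angle_spans, full_span=(0.0, 360.0)):
    """Clip each edge's angle span against the not-yet-occluded intervals."""
    visible = [full_span]                    # angle intervals not yet occluded
    drawn = []
    for name, lo, hi in edge_angle_spans:    # edges in front-to-back order
        new_visible = []
        for vlo, vhi in visible:
            if hi <= vlo or lo >= vhi:       # no overlap: interval survives
                new_visible.append((vlo, vhi))
                continue
            # The overlapping part of this edge is actually visible.
            drawn.append((name, max(lo, vlo), min(hi, vhi)))
            if vlo < lo:                     # left remainder stays visible
                new_visible.append((vlo, lo))
            if hi < vhi:                     # right remainder stays visible
                new_visible.append((hi, vhi))
        visible = new_visible
        if not visible:                      # field of vision fully occluded
            break                            # stop early, as described above
    return drawn
```

The early `break` is the "and goes no farther" part: anything behind a fully occluded field of vision is never touched.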
jheriko|12 years ago
You can certainly walk 'outwards' from a node, but then you are missing the real benefit of the BSP for this problem, imo... which is that you just carved the world into convex volumes.
Convex polygons are easier to work with. They allow applying many more algorithms and make others trivial enough to re-invent on the fly as needed... Instead of walking a structure at all, you could keep a list of which edges are visible at each leaf node - if you have 'loads' of memory (e.g. > 1 MB) then you might be able to store the edges themselves rather than indexes and mitigate memory lookup expenses as well (which are so far away from anything in a browser... but yeah).
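One example of why convex regions make things trivial, as the parent says: testing whether a point lies inside a convex leaf reduces to checking it sits on the same side of every edge (a sketch, with a counter-clockwise vertex order assumed).

```python
def inside_convex(poly, x, y):
    """poly: CCW list of (x, y) vertices of a convex polygon."""
    for i in range(len(poly)):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % len(poly)]
        # Cross product of edge AB with A->P; negative means P is outside.
        if (bx - ax) * (y - ay) - (by - ay) * (x - ax) < 0:
            return False
    return True
```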
shurcooL|12 years ago
I had fun implementing this effect a long time ago, in the pre-graphics-shader era [1].
[1] https://github.com/shurcooL/eX0#screenshots
Trufa|12 years ago
Osmium|12 years ago
(Note: I have no connection to this developer. Just thought it was worth mentioning.)
DanAndersen|12 years ago
liabru|12 years ago
I prefer using the technique shown in this classic article on 2D soft shadows: http://archive.gamedev.net/archive/reference/programming/fea...
The basic idea is to project the geometry, rather than ray cast. I've implemented this sort of technique in the past and it works well.
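In outline (not the article's exact construction), the projection amounts to extruding each occluding edge's endpoints away from the light, giving one shadow quad per edge with no ray casting at all:

```python
def shadow_quad(light, a, b, extrude=1000.0):
    """Return the 4 corners of the shadow cast by edge AB from a point light."""
    lx, ly = light
    def pushed(px, py):
        # Project the point away from the light by a large fixed distance.
        dx, dy = px - lx, py - ly
        n = (dx * dx + dy * dy) ** 0.5
        return (px + dx / n * extrude, py + dy / n * extrude)
    # Quad: the edge endpoints plus their projections away from the light.
    return [a, b, pushed(*b), pushed(*a)]
```

In a real renderer `extrude` would be chosen to reach past the screen bounds (or done in a vertex shader); the fixed 1000.0 here is just for illustration.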
kevingadd|12 years ago
https://github.com/sq/Illuminant
It's hardware-accelerated and has a few other useful features; we're using it in Escape Goat 2 and it works on Windows/Linux/Mac on pretty much any D3D9-spec hardware. You can see some of the levels using it here: https://www.youtube.com/watch?v=-9s1PyotS18#t=1m20s
For EG2 we have a pair of separate lighting environments; one for the foreground plane and one for the background plane. Most light sources exist on both planes but have different parameters in order to create a depth effect; some light sources are only on the background plane. We apply a customized ramp texture to the light sources in order to replace the smooth attenuation with quantized attenuation, and we don't use soft shadows (though we do render lighting at a lower resolution and upscale it, because that softens things a little.) We also do a simple HDR camera model by sampling lighting intensity across the scene's foreground/background planes and using that to adjust the range of the final lightmap so that we can automatically tune exposure/white point etc to highlight certain parts of a level.
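The quantized-attenuation trick could look something like this (the ramp values are made up, and EG2 presumably does the lookup in a texture on the GPU; this is just the shape of the idea):

```python
RAMP = [1.0, 0.6, 0.3, 0.0]  # hypothetical 4-entry ramp "texture"

def attenuation(dist, radius):
    """Quantized falloff: nearest ramp entry instead of smooth attenuation."""
    t = min(dist / radius, 1.0)  # normalized distance in [0, 1]
    return RAMP[min(int(t * len(RAMP)), len(RAMP) - 1)]
```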
I also have a similar pure-software-rendering lighting library from around ~8 years ago available here, in C++:
https://github.com/kg/Fury2/blob/master/libraries/SoftFX/mod...
That one's garbage, but may still be of some use.
I'll note that the author's approach (needing extra intersection points at +- 0.00001 radian) is a little perplexing. I'm not sure why that is needed; I've never needed it.
A better approach for 'fuzzy' shadows is to model the penumbra; you can render it as a single triangle that borders on the actual shadow umbra and then make it smooth by using a simple lookup texture. This technique is described here:
http://archive.gamedev.net/archive/reference/programming/fea...
Another great option for 2D lighting is to associate normal maps with your sprites and do per-pixel lighting against each light source using the normal map. This can produce some really great results. I think that SpriteLamp, http://snakehillgames.com/spritelamp/ does this; the Fury2 library I linked above also did per-pixel sprite lighting with normal maps. The upcoming game Full Bore (http://www.wholehog-games.com/fullbore/) is using per-pixel lighting on its sprites and environment art, and I believe Spelunky for PC/XBLA uses it on environment art.
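The normal-map idea boils down to a diffuse N·L term per texel. A CPU-side sketch (real implementations do this in a fragment shader; the 0..255 channel encoding and the fixed light height are my assumptions):

```python
def shade(texel_rgb, pixel, light_pos, light_z=50.0):
    """Diffuse lighting for one sprite texel using its normal-map value."""
    # Decode a normal-map texel (0..255 per channel) into a normal vector.
    nx, ny, nz = ((c / 255.0) * 2.0 - 1.0 for c in texel_rgb)
    # Direction from this pixel to the light (light floats above the 2D plane).
    lx, ly = light_pos[0] - pixel[0], light_pos[1] - pixel[1]
    lz = light_z
    n = (lx * lx + ly * ly + lz * lz) ** 0.5
    lx, ly, lz = lx / n, ly / n, lz / n
    return max(0.0, nx * lx + ny * ly + nz * lz)  # clamped N . L diffuse term
```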
ANTSANTS|12 years ago
No criticism is intended towards OP's game; the "un-stealth game" is a different and interesting idea. I'm just throwing it out there for those dreaming up yet another top-down cone-of-vision stealth shooter to chew on.
kevingadd|12 years ago
Try out Mark of the Ninja and tell me they were 'just trying to make an FPS'.
Picking an appropriate perspective for a game involves a lot of tradeoffs. While certain mechanics are most appropriate for certain perspectives (top-down 2d, isometric, first-person 3d...) a general approach to 'visibility' is not one of them. Visibility (or an approximation of it) applies to AI in almost every action game type, regardless of perspective. Whether they can see you or not is an essential consideration for creating convincing enemies, and AI allies need to know whether they can see foes.
You can also extensively use vision cone mechanics without having anything to do with an FPS. Classic games of the Rogue variety (e.g. NetHack, Angband...) all leverage vision cones and line-of-sight despite the fact that they predated 3D games of any kind.
ahoge|12 years ago
Having some dynamic lighting does not mean that you must make it part of your game's mechanics. It's perfectly fine to do that kind of thing just for polish.
jheriko|12 years ago
Just blindly tracing to all of the verts 'feels wrong' - it's the sort of thing that is intuitively unreasonable with large data sets and is quite obviously going to be constrained by available horsepower at some size.
Precomputed visibility data is the obvious optimisation and removes the O(n) behaviour: not only do you not have to test every point, but your raycast itself need not test every edge. I wrote something ages ago about generating a PVS in 2D as an example of how to approach the problem in 3D...
(http://jheriko-rtw.blogspot.co.uk/2011/05/visibility-determi...)
The other thing that bugs me here is the jitter. It is the classic 'lazy' way to soften shadows... A better effect can be had for less: if you offset the light position perpendicular to the direction toward the test vert, in both directions by the same amount, you can generate two ray casts. Triangulating the resulting mesh might be a little painful (unless you cut the world into convex chunks - then it is nearly trivial), but you get the outline for both edges of the shadow and can then use something like vertex-interpolated colour to give you the same visual result as infinite raycasts for the price of two...
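The two offset casts described above would start from origins like these (a sketch; `radius` stands in for the light's apparent size, which sets the penumbra width):

```python
def penumbra_origins(light, vert, radius):
    """Two ray origins offset perpendicular to the light->vertex direction."""
    lx, ly = light
    dx, dy = vert[0] - lx, vert[1] - ly
    n = (dx * dx + dy * dy) ** 0.5
    px, py = -dy / n, dx / n  # unit vector perpendicular to the ray direction
    return ((lx + px * radius, ly + py * radius),
            (lx - px * radius, ly - py * radius))
```

Casting one ray from each origin toward the vertex gives the two edges of the shadow; interpolating colour between them fakes the soft falloff.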
thewarrior|12 years ago
TophWells|12 years ago
MasterScrat|12 years ago
http://bit.ly/LZ2dq1
supercoder|12 years ago
unknown|12 years ago
[deleted]
wudf|12 years ago
shultays|12 years ago
Mithaldu|12 years ago
forrestthewoods|12 years ago
This is a nice little post that reasonably describes a technique that has been used by a fair number of professional, commercial products. That anyone would try to downplay it to make themselves feel superior is silly.
To provide some actual value: most of the images in the post are actually interactive. It's pretty cool. I somehow read the whole thing and didn't notice.
User8712|12 years ago
http://i.imgur.com/G5U43ZU.png
Calculate the edge points of the circle, draw a line between them, and then your shadow is simply a rectangle. Overlay the circle over the shadow to hide the line.
This will work with circles and ovals, but not more complex curves.
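Finding those "edge points" is a tangent-point computation: from the light, the circle's silhouette points are where the lines of sight graze the circle. A sketch (function name is mine):

```python
import math

def tangent_points(light, center, r):
    """The two points where sight lines from the light graze the circle."""
    cx, cy = center
    dx, dy = light[0] - cx, light[1] - cy
    d = math.hypot(dx, dy)            # distance from center to the light
    base = math.atan2(dy, dx)         # direction from center toward the light
    beta = math.acos(r / d)           # half-angle subtended by the tangents
    return [(cx + r * math.cos(base + s * beta),
             cy + r * math.sin(base + s * beta)) for s in (1, -1)]
```

The shadow is then the region behind the line joining these two points, extruded away from the light, with the circle drawn over the top to hide the seam, as described above.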