A* is less useful when you're not omniscient, that is, when testing whether a cell is blocked has a sensing cost. I ran into this in a game application. To find out whether a cell is obstructed, I have to do a ray cast at a few points in the cell, which uses resources. A* requires sensing a large number of cells that yield no useful data, and if you have a big, mostly open space with some obstacles, like the real world, it does far too much sensing.
So I ended up with a variant on Pledge's approach to wall-following. Head toward the goal until an obstacle is detected. Then start wall-following, but simultaneously in both the left and right directions. When one of the wall-followers can head toward the goal, do that, and kill off the other wall-follower. So you alternate between heading toward the goal in open space, cheaply, and wall-following.
Searching both left and right simultaneously avoids taking the long way round some obstacles.
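That alternating loop can be sketched on a grid. Everything below is illustrative, not the commenter's actual code: the map is just a set of blocked cells, the leave-the-wall condition (strictly closer to the goal than the hit point, with an open greedy step ahead) is a Bug-algorithm-style simplification, and there is no Pledge angle counter, so a step cap guards against pathological loops.

```python
# Grid directions: 0=E, 1=N, 2=W, 3=S; turning left adds 1 mod 4.
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def greedy_dirs(pos, goal):
    """Axis steps that reduce Manhattan distance, best axis first."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    opts = []
    if dx: opts.append((abs(dx), (1 if dx > 0 else -1, 0)))
    if dy: opts.append((abs(dy), (0, 1 if dy > 0 else -1)))
    opts.sort(reverse=True)
    return [d for _, d in opts]

def wall_step(pos, heading, blocked, order):
    """One wall-following step; `order` encodes the hand rule as turn offsets."""
    for turn in order:
        h = (heading + turn) % 4
        nxt = (pos[0] + DIRS[h][0], pos[1] + DIRS[h][1])
        if nxt not in blocked:
            return nxt, h
    return pos, heading  # boxed in on all four sides

def find_path(start, goal, blocked, max_steps=2000):
    """Alternate cheap greedy goal-seeking with two simultaneous wall-followers."""
    dist = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    path, pos, steps = [start], start, 0
    while pos != goal and steps < max_steps:
        steps += 1
        nxts = [(pos[0] + d[0], pos[1] + d[1]) for d in greedy_dirs(pos, goal)]
        free = [n for n in nxts if n not in blocked]
        if free:                      # open space: head straight for the goal
            pos = free[0]
            path.append(pos)
            continue
        # Obstacle hit: spawn left- and right-turning wall-followers.
        want = DIRS.index(greedy_dirs(pos, goal)[0])
        hit_dist = dist(pos)
        followers = [                 # [position, heading, hand rule, trail]
            [pos, (want + 1) % 4, (-1, 0, 1, 2), []],  # turned left: wall on right
            [pos, (want - 1) % 4, (1, 0, -1, 2), []],  # turned right: wall on left
        ]
        winner = None
        while winner is None and steps < max_steps:
            for f in followers:
                steps += 1
                f[0], f[1] = wall_step(f[0], f[1], blocked, f[2])
                f[3].append(f[0])
                if f[0] == goal:
                    winner = f
                    break
                # Leave the wall once strictly closer than the hit point
                # and the greedy step ahead is open; kill the other follower.
                if dist(f[0]) < hit_dist:
                    g = greedy_dirs(f[0], goal)[0]
                    if (f[0][0] + g[0], f[0][1] + g[1]) not in blocked:
                        winner = f
                        break
        if winner is None:
            return None
        path.extend(winner[3])
        pos = winner[3][-1]
    return path if pos == goal else None
```

On a map with a single vertical wall between start and goal, the two followers trace the wall's top and bottom edges in lockstep until one can resume heading for the goal.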
I'm a bit confused by your comment. When you say that "testing if a cell is blocked has a sensing cost", do you mean the obstacle/wall is not in your graph at the point of search? A properly computed navmesh by definition has the obstacle/wall 'cut out' of the mesh (or otherwise represented in a modifier area). Perhaps I've misunderstood you, but it sounds like you're trying to build the navmesh, at least partially, as you search?
If, rather, you're talking about temporary obstacles like other nav agents that need to be avoided, there are a number of approaches to agent avoidance that work nicely on a subset of a navgraph.
This reminds me of what the Death Stranding team presented at GDC regarding the pathfinding. They had LOTS of new issues for a game due to the unique nature of Death Stranding (obstacles, tripping, balancing) and they do an excellent job at outlining their approach.
I've been working on a video game and actually followed the article to implement A*.
If you have an inaccessible node, A* will indeed scan everything. But to get around this, I only had to cap the number of frontier iterations, which was just a conditional.
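The cap really is just one conditional in the main loop. A minimal sketch (function and parameter names are mine, not from the article):

```python
import heapq

def a_star_capped(start, goal, neighbors, heuristic, max_iterations=10000):
    """A* that gives up after a fixed number of frontier expansions,
    so an unreachable goal can't make the search scan the whole map."""
    frontier = [(heuristic(start, goal), start)]
    came_from = {start: None}
    cost_so_far = {start: 0}
    iterations = 0

    while frontier:
        iterations += 1
        if iterations > max_iterations:
            return None  # give up: goal is likely unreachable
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the path by walking the parent links backward.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt, step_cost in neighbors(current):
            new_cost = cost_so_far[current] + step_cost
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + heuristic(nxt, goal), nxt))
    return None  # frontier exhausted: goal unreachable
```

Note the cap trades completeness for bounded work: a reachable goal that simply needs more expansions than the limit will also come back as `None`.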
I used to be obsessed with the programming game Screeps and read most of these blog posts when they were still published on the author's Stanford pages; there's a lot of great stuff in there.
Screeps is fascinating but I never got motivated to play it. :(
I use the Stanford pages [1] to link to interesting papers and I use Red Blob Games to explore interactive ways of presenting topics. The most recent update to the Stanford pages is from 3 weeks ago, about any-angle pathfinding [2]. But most of what I do these days is on the Red Blob Games site. I probably would've kept using the Stanford pages but they have a 100MB quota limit and I was running out of space…
I have bookmarked many Red Blob Games posts like this one, both for their excellent content and as examples of how to write truly great tutorials: well-organized content, good CSS without over-styling, advanced JavaScript used only in exactly the right places (interactive demos, toggles to customize the content to your use case, e.g. hex vs. square), and never to hijack my scroll bar or for unnecessary flashiness. This site is my go-to for inspiration on how to write a fantastic tutorial.
A few years ago, I wondered whether it was possible to extend A* to pathfinding with momentum. It would require a consistent and admissible heuristic for Newtonian kinematics. It was a little tricky to find, but it turns out that it does exist!
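For intuition, one admissible time heuristic for momentum pathfinding (my formulation, not necessarily the one the commenter found) is the exact minimum time to close the straight-line gap at full acceleration, ignoring obstacles entirely: no real trajectory that has to steer around things can beat it, so it never overestimates.

```python
import math

def kinematic_time_heuristic(distance, speed_toward_goal, max_accel):
    """Lower bound on the time to cover `distance`, starting with
    velocity component `speed_toward_goal` toward the goal (may be
    negative) and accelerating at most `max_accel`. Solves
    d = v*t + a*t^2/2 for t, taking the positive root."""
    v, a, d = speed_toward_goal, max_accel, distance
    if d <= 0:
        return 0.0
    # Quadratic (a/2)*t^2 + v*t - d = 0; the positive root is:
    return (-v + math.sqrt(v * v + 2 * a * d)) / a
```

Starting with velocity pointed away from the goal makes the bound larger, which is exactly the "momentum" effect a plain distance heuristic misses.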
I've come across this link before, and it's actually really useful. I used it to help me implement A* myself for a project I was working on, which worked decently for my rather simple use-case.
Amit, you're an amazing man. I've come across this doing Advent of Code this year. You're the only reason I no longer fear DSA as a self-taught programmer. If you're ever in Bangalore, I owe you a drink.
sebg | 4 years ago
Introduction to the a* Algorithm - https://news.ycombinator.com/item?id=24146045 - August 2020 (1 comment)
Introduction to A* (2014) - https://news.ycombinator.com/item?id=18642462 - December 2018 (14 comments)
Introduction to A* - https://news.ycombinator.com/item?id=16190604 - January 2018 (0 comments)
Introduction to A* algorithm - https://news.ycombinator.com/item?id=10724098 - December 2015 (1 comment)
Introduction to A* - https://news.ycombinator.com/item?id=8059237 - July 2014 (28 comments)
Related threads:
Making of “Introduction to A*” - https://news.ycombinator.com/item?id=8445732 - October 2014 (12 comments)
slingnow | 4 years ago
Animats | 4 years ago
DizzyDoo | 4 years ago
10000truths | 4 years ago
garaetjjte | 4 years ago
cmehdy | 4 years ago
Video (51 mins): https://www.youtube.com/watch?v=yqZE5O8VPAU
slingnow | 4 years ago
viseztrance | 4 years ago
heavenlyblue | 4 years ago
- build a graph of connectivity of all areas of the map OFFLINE
- make sure that graph also has information about disconnected components and never apply A* to points which are disconnected from each other
- do A* on that graph
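The offline step above amounts to labeling connected components once, then rejecting impossible queries in O(1) before ever running A*. A sketch under those assumptions (function names are illustrative):

```python
from collections import deque

def connected_components(passable_cells, neighbors):
    """Offline pass: flood-fill every passable cell with a component id.
    Two cells are mutually reachable iff their labels match."""
    label = {}
    next_id = 0
    for cell in passable_cells:
        if cell in label:
            continue
        # Breadth-first flood fill of one component.
        queue = deque([cell])
        label[cell] = next_id
        while queue:
            cur = queue.popleft()
            for nxt in neighbors(cur):
                if nxt not in label:
                    label[nxt] = next_id
                    queue.append(nxt)
        next_id += 1
    return label

def same_component(label, a, b):
    """O(1) runtime check: only run A* when this returns True."""
    return label.get(a) is not None and label.get(a) == label.get(b)
```

This covers static geometry; if obstacles can appear at runtime, the labels have to be patched or recomputed, which is where it gets harder.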
adamc | 4 years ago
tedivm | 4 years ago
amitp | 4 years ago
[1] http://www-cs-students.stanford.edu/~amitp/gameprog.html may be the oldest surviving game development website, as I started it in either 1994 or 1995. Older than Google or Wikipedia or even Slashdot. [2] http://theory.stanford.edu/~amitp/GameProgramming/Variations...
feoren | 4 years ago
self-symmetric | 4 years ago
The code (and animations) are here: https://github.com/matthew-piziak/spacepath
It shows a spaceship finding time-optimal paths around asteroids, with nothing but A* doing the pathing.
sapein | 4 years ago
two_poles_here | 4 years ago
DoryMinh | 4 years ago
amitp | 4 years ago
wingman-jr | 4 years ago