kcimc | 9 years ago
I helped out with this project, so I'm happy to answer any questions. I was involved from the beginning, but my biggest contribution was on the deep learning side that does the tile matching. I helped build the initial prototype using DIGITS, Caffe, and a bunch of Python. Then Aman Tiwari moved us to a more accurate 34-layer ResNet trained in TensorFlow, and to a more efficient nearest-neighbor lookup using a new implementation of cover tree search.
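
[Editor's note: for readers curious what that pipeline looks like in practice, here is a minimal sketch of the general technique: embed each tile with a pretrained CNN, then answer a click with a nearest-neighbor query in feature space. This is not the project's code; it substitutes torchvision's ResNet-34 and scikit-learn's NearestNeighbors for their TensorFlow model and cover tree index, and the tile paths are hypothetical.]

```python
# Sketch: describe each map tile with a CNN feature vector, then find
# visually similar tiles by nearest-neighbor search in that space.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.neighbors import NearestNeighbors

# Pretrained ResNet-34 with the classification layer removed, so it
# outputs a 512-d descriptor per image instead of class scores.
resnet = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(tile_path: str) -> np.ndarray:
    """Embed one map tile as a 512-d feature vector."""
    img = preprocess(Image.open(tile_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0).numpy()

# Index every tile once, offline...
tile_paths = ["tiles/0.png", "tiles/1.png"]  # hypothetical paths
features = np.stack([describe(p) for p in tile_paths])
index = NearestNeighbors(n_neighbors=2).fit(features)

# ...then a click on a tile becomes a nearest-neighbor query.
dist, idx = index.kneighbors(describe("tiles/0.png").reshape(1, -1))
print([tile_paths[i] for i in idx[0]])
```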

danso | 9 years ago
Since you've opened yourself up to questioning :) ... thanks for sharing, and I apologize for this very layman's question:
I clicked on a swimming pool in New York City. There aren't a ton of them in NYC, but very few of the matches have even a spot of blue in them. I know the algorithm does more than just "look for more blue patches," but if I were to explain this to another layperson, what is the most obvious explanation for results that seem this counterintuitive?
Screenshot of my panel: http://imgur.com/H0wo5jK
http://nyc.terrapattern.com/?_ga=1.84865689.1830936426.14642...

dharma1 | 9 years ago
Good stuff. As far as I can see, the current version searches for tiles similar to the one a user clicks on.
How would I train this with labeled training data and custom images, so that I could search for specific things, either by typing what I'm looking for into a search field or by uploading an image of it? Sort of like Google Images search.
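
[Editor's note: one common way to get there, sketched below, is to fine-tune the same kind of backbone as a classifier over your own labeled tiles, so a typed query maps to a class and an uploaded image can be embedded and matched as before. This is a sketch of the general approach, not anything the Terrapattern authors describe; the labeled_tiles layout and the swimming_pool class name are hypothetical.]

```python
# Sketch: fine-tune a ResNet-34 as a classifier over labeled map tiles,
# then resolve a typed query like "swimming_pool" to a class index.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Labeled tiles arranged as labeled_tiles/<class_name>/<image>.png
transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("labeled_tiles", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = torch.nn.Linear(512, len(dataset.classes))  # new class head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# A text query then just selects the class whose score ranks the tiles;
# an uploaded image can be embedded and matched as in the demo.
query = "swimming_pool"  # must be one of dataset.classes
class_idx = dataset.classes.index(query)
```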

flexie | 9 years ago
How easy would it be to roll it out everywhere else on Earth beyond the current 4 cities?

polartx | 9 years ago
When they are updated, does Terrapattern recognize the change and update the corresponding photos?

Bedon292 | 9 years ago
The imagery in their demo is coming from Google Maps, but if you look at the copyright you can see which companies it comes from. The USDA shoots, if I remember correctly, the whole country every 3 years, and that data is public domain. The commercial providers are constantly shooting new imagery, but it depends on how often Google wants to pay for it. With their purchase of Skybox, though, they will likely start getting imagery updated on an extremely frequent basis.

maxerickson | 9 years ago
http://mvexel.dev.openstreetmap.org/bing/
As reported in a sibling comment, they are using Google imagery.

tudorw | 9 years ago
This is quite fun, thanks for sharing. I would be interested in trying it out with different tile sizes too; I keep trying to pick objects that land on the corners of tiles.
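
[Editor's note: the corner problem tudorw mentions is commonly handled by cutting tiles with overlap, so an object on one tile's corner sits near another tile's center. Below is a tiny illustrative sketch; the tile size, stride, and file names are hypothetical, and the demo's actual tiling is fixed.]

```python
# Sketch: cut a large image into overlapping tiles (stride < tile size).
import os
from PIL import Image

def overlapping_tiles(image_path: str, tile=256, stride=128):
    """Yield (x, y) offsets and tile-sized crops covering the image."""
    img = Image.open(image_path)
    w, h = img.size
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            yield (x, y), img.crop((x, y, x + tile, y + tile))

os.makedirs("tiles", exist_ok=True)
for (x, y), patch in overlapping_tiles("scene.png"):  # hypothetical file
    patch.save(f"tiles/{x}_{y}.png")
```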