Terrapattern is a prototype that provides similar-image search for satellite photos—an open-source, open-ended tool for exploring the unmapped, and mapping the unmappable. Click an interesting spot on a map of New York, San Francisco, Detroit, or Pittsburgh, and Terrapattern will find other locations in that city that look similar.
The Terrapattern tool is ideal for locating specialized nonbuilding structures and other forms of otherwise unremarkable soft infrastructure that aren't usually indicated on maps. For example, one of our friends is using it to find disused swimming pools—for guerrilla skateboarding.
More generally, we hope you can help us understand how the Terrapattern project could be useful to you! We especially invite citizen scientists, data journalists, humanitarian researchers, and other domain experts to tell us how our app is, or could be, of use. For some of the case studies that inspired us, please see our about page. To share some of your own ideas, complete this brief survey.
Behind the scenes, Terrapattern's search is based on two tricks.
The first trick is a deep convolutional neural network (DCNN). We feed the DCNN hundreds of thousands of satellite images that have been categorized in OpenStreetMap, teaching it to predict the category of a place from a satellite photo. In the process, it learns which visual features are important for classifying satellite imagery. After training, we compute descriptions for millions more satellite photos that cover various regions of interest, such as New York City. When we want to find places that are similar to your query, we just find places with similar descriptions.
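The descriptor-matching step can be sketched in a few lines. This is not Terrapattern's actual code: the describe() function, the 128-dimensional descriptor size, and the random projection standing in for the trained DCNN are all hypothetical. The final step, though—ranking stored descriptors by cosine similarity to the query's descriptor—is the same basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained DCNN: in a real system the descriptor would be
# the activation of the network's penultimate layer. Here we fake it with
# a fixed random projection so the sketch runs without a trained model.
PROJECTION = rng.normal(size=(256 * 256, 128))

def describe(tile: np.ndarray) -> np.ndarray:
    """Map a 256x256 grayscale tile to a 128-d, L2-normalized descriptor."""
    v = tile.ravel() @ PROJECTION
    return v / np.linalg.norm(v)

# Precompute descriptors for a small "city" of synthetic tiles.
tiles = [rng.random((256, 256)) for _ in range(100)]
index = np.stack([describe(t) for t in tiles])

def search(query_tile: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k tiles whose descriptors are most similar
    (highest cosine similarity) to the query tile's descriptor."""
    q = describe(query_tile)
    sims = index @ q  # dot product = cosine similarity for unit vectors
    return np.argsort(-sims)[:k]

best = search(tiles[42])
```

Searching with a tile that is already in the index should return that tile first, since its descriptor matches itself perfectly.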
It can take a long time to search all the descriptions, so we have another trick. The SG Trees algorithm precomputes relationships between the descriptions, allowing us to do a search in just a second or two.
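SG Trees itself isn't packaged as a common library, but the pattern—build a spatial structure over the descriptors once, then answer queries by descending it rather than scanning every entry—can be illustrated with SciPy's k-d tree, a simpler exact-search cousin. The data and dimensionality below are made up for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical precomputed descriptors: 100,000 tiles, 32 dimensions each.
descriptors = rng.normal(size=(100_000, 32)).astype(np.float32)

# Build once (the "precompute relationships" step). Queries then walk the
# tree and prune whole subtrees, avoiding a full linear scan.
tree = cKDTree(descriptors)

# Query with a descriptor already in the index; its own entry comes back
# first at distance zero, followed by its nearest neighbors.
dists, idxs = tree.query(descriptors[123], k=10)
```

Approximate nearest-neighbor structures trade a little accuracy for much faster queries, which is how searches over millions of descriptors can return in a second or two.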
We'd love to hear your stories and feedback! If you discover something interesting, send a tweet with the hashtag #terrapattern and the URL of your search. We also invite you to complete this brief visitor survey to tell us what you think!
We suspect that most of the big players in the space of satellite imaging, such as Google, Microsoft, Digital Globe, Planet.com, and others are exploring the opportunities afforded by machine learning—particularly in light of recent and significant advances in convolutional neural networks and other deep learning techniques.
One of the main features which distinguishes the Terrapattern project is our emphasis on allowing our visitors to search, in an open-ended way, for user-defined ("out-of-set") categories. By contrast, most of the systems listed in our reference page are designed to locate and identify specific things-with-names, such as roads, trails, or crosswalks. For more information, please see our about page.
We're adding more soon! But for this alpha prototype, this is the scale we could achieve. Storing the model data for each metro region requires about 10GB of active RAM. (That's RAM—not hard disk.) To store and serve a searchable model for (say) the entire United States would require nearly 2,000 times as much RAM and CPU power as we're currently leasing, as well as a much more sophisticated software architecture. Think of the Terrapattern project as a proof-of-concept probe to test how (or whether) "visual query-by-example" for satellite imagery might become a part of our everyday future. Remember, you saw it here first :)
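The back-of-envelope arithmetic behind those figures is simple; the numbers below come straight from the estimates above (about 10GB per metro region, and roughly 2,000 such regions to cover the United States), not from measurements.

```python
# Rough scaling estimate, using the figures quoted above.
gb_per_region = 10       # active RAM per searchable metro region
regions_for_us = 2_000   # approximate multiple to cover the entire US

total_gb = gb_per_region * regions_for_us
total_tb = total_gb / 1024

print(f"~{total_gb:,} GB of RAM, i.e. about {total_tb:.0f} TB")
```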
There are many regions of the world that might benefit from being studied or mapped with a tool similar to ours. The work involved in doing so, however, is not trivial. "Solutionist" approaches may well do more harm than good. We caution interested readers not to say "OMG let's use this for all the humanitarian problems", and instead consider partnering with or studying the work of an informed organization, such as the Harvard Humanitarian Initiative's Satellite Sentinel Project.
For this alpha prototype, we deliberately sidestepped regions suffering from profound humanitarian or other crises, and instead selected cities primarily for their personal significance. For example, most of our team members currently call Pittsburgh home, and it was easiest to test our tool in familiar territory. We additionally selected New York City, San Francisco and Detroit because so many of our friends and peers live there—especially those exploring new intersections of art, design, journalism, technology, data science and social change. In some cases, at our discretion, we have also presented some cities in response to user requests.
We're a group of new-media artists, creative technologists, and students who are affiliated in various ways with the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University, a lab for experimental research at the intersection of art, science, technology and culture. This project was made possible by a grant in Media Innovation from the Knight Foundation Prototype Fund. For more information, please see our team page.
We first became interested in how satellite imagery could help people make interesting discoveries in early 2009, when we learned how Dr. Sabine Begall discovered that cows tend to align themselves with the earth's magnetic field. We became motivated to make an open-source tool in 2014, when we learned how Wall Street traders were using insights from satellite imagery to game the financial system, while ecological and humanitarian initiatives like MAAP and the Satellite Sentinel Project were using it to make the world a better place. Of course, like a lot of people, we've also been very impressed and inspired by Google's "search-by-image" feature.
Terrapattern by Golan Levin, David Newbury, Kyle McDonald, Irene Alvarado, Aman Tiwari, and Manzil Zaheer is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The Terrapattern code and data files are free software and open source, made available under the MIT License.
Levin, G., Newbury, D., McDonald, K., Alvarado, I., Tiwari, A., and Zaheer, M. "Terrapattern: Open-Ended, Visual Query-By-Example for Satellite Imagery using Deep Learning". http://terrapattern.com, 24 May 2016.