Twenty years ago, when webcams first hit the market, I made up some special slides (yes, 35mm slides!) to illustrate a story I used in several speeches.
  
I began the story by asking a question that has always mystified me: What did my dog do when I wasn't home? Well, I said, I could install a webcam in the hall and watch. But then, maybe I should tie the webcam to the dog's head so I could follow his field of view. In my speech, I would then show a charming picture of my dog with the webcam attached to his head. Everyone would laugh. Even better, let's pass a law requiring all dogs to have GPS-enabled webcams. Then when anything interesting was happening in the world you only needed to tune in to the nearest dog to watch. I then showed another picture of heads of state at a summit meeting with my dog off to the side wearing his webcam.
  
Well, twenty years have passed and we don't need dogs now—we have people with smartphones. And people are everywhere. We've come a long way since some students at Cambridge University installed a webcam in their coffee room. For a while, back then, such public webcams were a fascination. I don't think they are especially popular now—there is too much else to do besides watching nothing happen somewhere else.
  
But when something does happen, all those cameras constitute an evolving capability that we have yet to fully exploit or even understand. Unlike my proposed cam-enabled dogs, people choose their pictures and videos. For better and for worse, we can far outdo my imaginary dogcams.
  
Several billion people with cell phones will be moving around in a world characterized by an evolving visual ubiquity that also includes increasingly dense real-time CCTV surveillance, satellite imagery, and street-view photography. It's amazing how quickly all this has happened. It seems only a few years ago that our main visual connection to far-away places was National Geographic magazine.
  
A hint of what we can do with all this imagery was seen in the recent DARPA balloon challenge. Ten red balloons were put in the air at various random places within the continental United States. A US $40 000 prize would go to the first competitor to locate all the balloons. Amazingly, it took a team from MIT only nine hours. This feat seems extraordinary when you consider that there are about ten million square kilometers in the country. MIT solved two problems: how to incent a lot of people to help, and how to filter all of the input.
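The incentive half of MIT's solution has been widely described as a recursive referral scheme: the finder of a balloon received $2000, the person who recruited the finder $1000, that person's recruiter $500, and so on up the chain. A sketch of that kind of geometric payout (the function name and chain representation here are mine, for illustration):

```python
def referral_payouts(chain_length, finder_reward=2000.0):
    """Payouts up a referral chain in which each level receives half
    of the level below it: the finder gets finder_reward, the finder's
    recruiter half that, and so on."""
    return [finder_reward / 2**level for level in range(chain_length)]

# However long the recruiting chain, the total paid per balloon is
# bounded by the geometric series 2000 * (1 + 1/2 + 1/4 + ...) < 4000,
# so the $40 000 prize covers ten balloons with money to spare.
payouts = referral_payouts(4)
print(payouts)       # [2000.0, 1000.0, 500.0, 250.0]
print(sum(payouts))  # 3750.0
```

The elegance of the scheme is that it rewards not just spotting a balloon but spreading the word, which is exactly what a nine-hour nationwide search requires.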
  
Another example is in the work of the organization Ushahidi, which uses crowdsourcing to aid humanitarian efforts. Following the devastating 2010 earthquake, Ushahidi began an open source project to develop an accurate crisis map of Haiti by compiling and integrating real-time reports from volunteers in the local neighborhoods. Their contributions to the humanitarian efforts were remarkable, and could not have been obtained otherwise.

There are ongoing technological developments that can augment this visual ubiquity. Face recognition, and the ability to track individuals through crowds and across multiple cameras, as in airport security, still need algorithmic development to become more real-time and less labor-intensive. There are also efforts to stitch together the billions of amateur-provided photos on the web to create a worldwide panorama.
  
But after these twenty years I still don’t have a home webcam. I’d like to, but I don’t know what to do with it. I could point it at my driveway, which stretches westward for some distance through trees, but frankly nothing ever happens there—like most of the planet, I imagine. Maybe once a year my camera might see the glint of a deer’s eyes. I know that there are companies that offer cloud-based services to filter out all the nothingness using change-detection algorithms, but I fear that I would be left with only an annual summary that says—to borrow a literary phrase—“All Quiet on the Western Front.” But then, you never know. And maybe my dog does something interesting while I’m gone, too.
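The simplest form of the change detection such services rely on is frame differencing: flag a frame only when enough pixels have changed by enough. A minimal sketch, assuming grayscale frames as NumPy arrays (the thresholds here are illustrative, not any particular vendor's):

```python
import numpy as np

def frame_changed(prev, curr, pixel_thresh=25, fraction_thresh=0.01):
    """Flag a frame as 'interesting' if more than fraction_thresh of
    its pixels changed by more than pixel_thresh gray levels --
    the kind of filter that discards hours of nothing happening."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed_fraction = np.mean(diff > pixel_thresh)
    return bool(changed_fraction > fraction_thresh)

# A static driveway: two nearly identical frames.
quiet = np.full((120, 160), 100, dtype=np.uint8)
print(frame_changed(quiet, quiet.copy()))  # False

# A deer walks through: a bright blob appears in the frame.
deer = quiet.copy()
deer[40:80, 60:100] = 220
print(frame_changed(quiet, deer))  # True
```

In practice the glint of a deer's eyes would trip this filter, and the remaining 364 quiet days would be discarded—which is precisely the annual summary I was afraid of.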