“Technology... is a queer thing. It brings you great gifts with one hand, and it stabs you in the back with the other.” - C.P. Snow
You may recall that a few weeks ago I blogged about an image I had found on the web whose painter I did not know. I bemoaned the fact that I wasn’t able to upload it and get Google to search the web for me to find out what it was (I have since found out what the image was, through some detective work and the help of one of my readers). Well, it looks like I did not have to wait long to see my request made reality! In principle, anyway!
Google has launched a new application for the
Google Android mobile phone that allows you to search for more information about a landmark by taking a picture of it with your Android phone and submitting it to a Google application known as
“Google Goggles”. At this stage, the application can recognise landmarks, works of art, books, wine labels and company logos. In the near future I can see it recognising famous faces, and as we move further into the future other kinds of images will tag along…
The way it works is this: when the user takes a picture of the feature in question, the phone sends it to the Google databases, where elements of the photographed image are compared with features of images already stored there. When a match is made, Google tells the user what they are looking at and provides a list of web references and news stories relating to the identified item. What also helps is that Google can use the user’s location (through the GPS locator in the phone) to aid the identification process (take a picture of a faraway landmark on a poster at your place of residence and see if that confuses the poor dear!).
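The matching step described above can be sketched in miniature. Everything here is invented for illustration - the feature vectors, the tiny landmark "database", and the `identify` function are hypothetical stand-ins, not Google's actual system, which works at a vastly larger scale with far richer visual descriptors:

```python
import math

# A toy "database": each entry has a feature vector (a stand-in for the
# visual descriptors extracted from reference images) plus a location.
LANDMARKS = [
    {"name": "Eiffel Tower", "features": [0.9, 0.1, 0.3],
     "lat": 48.858, "lon": 2.294},
    {"name": "Sydney Opera House", "features": [0.2, 0.8, 0.5],
     "lat": -33.857, "lon": 151.215},
    {"name": "Statue of Liberty", "features": [0.4, 0.4, 0.9],
     "lat": 40.689, "lon": -74.044},
]

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(photo_features, user_lat=None, user_lon=None, radius_deg=1.0):
    """Return the best-matching landmark for a photo's features.

    If the user's GPS position is supplied, only nearby entries are
    considered first - this is the location hint that helps narrow the
    search. If nothing is nearby (say, a poster of a faraway landmark),
    fall back to the whole database.
    """
    candidates = LANDMARKS
    if user_lat is not None and user_lon is not None:
        nearby = [c for c in LANDMARKS
                  if abs(c["lat"] - user_lat) < radius_deg
                  and abs(c["lon"] - user_lon) < radius_deg]
        if nearby:
            candidates = nearby
    return max(candidates, key=lambda c: similarity(photo_features, c["features"]))

# A photo taken near Paris whose features resemble the Eiffel Tower entry:
print(identify([0.85, 0.15, 0.25], user_lat=48.86, user_lon=2.29)["name"])
```

The design point is the two-stage narrowing: the GPS filter cheaply prunes the candidate set before the (much more expensive, in reality) visual comparison runs.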
Google maintains that tens of millions of locations, landmarks, logos, etc. can be recognised. As I pointed out in my earlier blog, searching by an image is so much more convenient in many cases, and an image search on a mobile phone through a captured image can be so much easier than a text search.
The whole concept brings to the fore the developing technology of computer vision (and by extension, of course, robot vision). This is technology still in its infancy, but one can see the tremendous potential of applications such as Google Goggles. We may soon be approaching the time when we simply point a finger at something and, through a special decorative ring-cum-camera-cum-phone, hear through our speaker-enabled sunglasses a description of what we are pointing at…
Google has also started to add real-time results to its search engine, channelling into query results feeds from Facebook, Twitter, MySpace, and other content just added by web users. This means that the person doing the search gets answers to their query on a results page even as the content is being generated on the source website. Once again, Google claims that this is the first time a search engine has integrated real-time web content into a web search results page.
Once again, this latest development raises some questions. How reliable will such search results be, if all sorts of live, real-time results are served up from the myriad of blogs, tweets, and other content, which may be (and often definitely is) quite spurious? How do we tell that rubbish is rubbish? There was much adverse publicity recently about the reliability of information in Wikipedia, because of the malicious feeding of specious or fallacious information into Wikipedia articles by malfeasants and other people with ulterior motives. We live in an age of excess information. Being able to filter this information and derive from it the useful, genuine and reliable bits is quite an art. It will become an even bigger art in the future as we are surrounded by ever more information, which will become increasingly easy to obtain. How do we go about navigating through this dangerous sea of excess data? Will this superabundance of information be a boon or a curse?
What do you think?