If you’re a fan of search engine optimization (SEO), run a blog, have a website, or use the Internet at all, then listen up. The folks at Google are working on an impressive new way to help you find and be found online using images.
Basically, a recent competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has produced a whole new way of viewing (literally) images online. Thanks to a batch of next-generation algorithms and some serious computer programming, Google may soon index your website’s images in an entirely new way.
How Your Website Images Currently Work
Here’s how it probably works for you, more or less. You write a blog post or need to update an image on your website. You go to an image search engine like Google Images, or a photo site like Flickr, punch in your keywords, and do some filtering and scrolling. You find an image, make some adjustments if needed, add it to your site, credit the source, and publish it. The image you found is certainly good enough and makes sense for your needs. Job well done, right?
But what if you could actually find dozens of high-quality and highly relevant images for your website? What if that unlocked a few more ideas for future blog posts or put you in touch with another website that is right up your alley? Well, that’s the idea behind the future of Google Images search.
How Google Images Search May Someday Work
The goal of the projects at this competition? To push “beyond annotating an image with a bag of labels” and figure out a way to make image search results as relevant as possible. What does this mean for SEO and how you’re going to be uploading images? Well, it actually means a bit less work for you, because Google may end up doing the work for you. The mega-computers at the Googleplex will crawl, analyze, and automatically catalog the contents of your image rather than relying on whatever data you provide. That’s bad news, though, for people who have been gaming the ‘image search’ system by mislabeling images or adding a pantload of keywords to images in order to get them found more easily.
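To make that difference concrete, here’s a toy Python sketch (not Google’s actual system) contrasting an index built from author-supplied keywords with one built from what a vision model “sees” in the pixels. The `classify` function here is just a stand-in dictionary; a real system would run a trained neural network like the ones from the ILSVRC.

```python
# Toy sketch: metadata-based vs content-based image indexing.
# All filenames, labels, and the classify() stand-in are made up for
# illustration; a real pipeline would run a vision model over the pixels.

# Author-supplied metadata, including a keyword-stuffed entry.
metadata = {
    "sunset.jpg": ["sunset", "beach", "cheap flights", "best hotels"],
    "cat.jpg": ["cat"],
}

def classify(filename):
    """Simulated vision model: labels derived from image content alone."""
    simulated_predictions = {
        "sunset.jpg": ["sunset", "ocean", "sky"],
        "cat.jpg": ["cat", "animal"],
    }
    return simulated_predictions[filename]

def build_index(filenames, use_content=True):
    """Map each label to the images it should surface in search."""
    index = {}
    for name in filenames:
        labels = classify(name) if use_content else metadata[name]
        for label in labels:
            index.setdefault(label, []).append(name)
    return index

old_index = build_index(metadata, use_content=False)
new_index = build_index(metadata, use_content=True)

# Keyword stuffing fools the metadata index...
print("cheap flights" in old_index)   # True
# ...but not the content-derived one.
print("cheap flights" in new_index)   # False
```

The point of the sketch: once the labels come from the image itself, stuffed keywords simply never enter the index, so there’s nothing left to game.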
Here are a few examples of how Google may view your pictures, according to the work done at the ILSVRC:
About The New Technology
This technology isn’t integrated or live on Google just yet (as far as I can tell), but it’s easy to see why it could be useful for the Images search functionality. Here’s a bit more about the geekier side of things, courtesy of the official Google blog:
This work was a concerted effort by Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Drago Anguelov, Dumitru Erhan, Andrew Rabinovich and myself. Two of the team members — Wei Liu and Scott Reed — are PhD students who are a part of the intern program here at Google, and actively participated in the work leading to the submissions. Without their dedication the team could not have won the detection challenge.
These technological advances will enable even better image understanding on our side and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand what is in an image as well as where things are.
I look forward to seeing if and when this tech gets used on a global scale. It’ll be very interesting to see how search results change!