There is currently a great deal of talk about Google’s gradual move towards ‘semantic search’. This shift requires the collection of more personal data in order to increase its accuracy and functionality, a move that is receiving a somewhat mixed reaction.

The technology behind semantic search is based upon an algorithm that allows the search engine to interpret search queries much as a human would process them. This contrasts sharply with purely text-based search, in which a list of ‘hopefuls’ is provided for the user to filter for themselves.

Matt Cutts, the head of Google’s Webspam team, first posted a video response on the subject of semantic search back in April 2010. However, last month Amit Singhal, the senior vice president in charge of Google search, revealed in a Wall Street Journal article that Google has been building a database of “entities” (e.g. objects, people and places) that can be used to deliver search results that better reflect “how humans understand the world”. How? By building the previously-ignored associations between these entities into its search algorithms.

So what exactly does semantic search mean, and how will it affect the future of SEO? Here are a few examples that help to define it:

  • Synonyms – A semantic algorithm would place synonyms in the correct context within a sentence and differentiate between the various meanings of the same word based upon the words around it.
  • Morphologies – Morphological variations include plurals and different tenses. A semantic search engine would recognise that the same results are expected regardless of the number or tense of the words used within a search query.

In a similar manner, it would also ignore ‘irrelevant’ words in a phrase that has been entered as a question, and discount the superlatives, such as ‘best’, that a ‘natural language’ search query routinely includes (the first sketch after this list gives a toy illustration).

  • Results Pages – Results shown on the SERPs for a semantic search would be ranked by relevance using an algorithmic analysis that considers not just the page content, but also its credibility in terms of the author and the authority of the site that hosts it. In addition, it would highlight the section of the document that is relevant to the search query, rather than merely locating the document and leaving the user to find the relevant part of it (the second sketch after this list illustrates both ideas).
  • SEO Impact – When these changes begin to take effect, SEOs will need to think even more carefully about the content displayed on any given page and how best to relate it to potential search queries. This will mean trying to ‘second-guess’ the questions a searcher is likely to ‘ask’ and ‘answering’ them within the content.
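
To make the synonym and morphology points more concrete, here is a minimal sketch in Python of how a query might be normalised before matching: lower-casing, dropping ‘irrelevant’ words and superlatives, and folding plurals back to a common root. The word lists and the crude suffix-stripping rule are assumptions made purely for illustration, not a description of Google’s actual algorithm.

    # Toy query normaliser: an illustrative sketch, not Google's algorithm.
    # The stop-word and superlative lists are assumptions made for this example.
    STOP_WORDS = {"what", "is", "the", "a", "an", "in", "for", "of", "to", "which", "where"}
    SUPERLATIVES = {"best", "top", "greatest", "cheapest", "fastest"}

    def normalise_query(query):
        """Lower-case a query, drop 'irrelevant' words and superlatives,
        and crudely fold plural forms back to a singular root."""
        terms = []
        for word in query.lower().split():
            word = word.strip("?!.,")
            if word in STOP_WORDS or word in SUPERLATIVES:
                continue  # discount words that add little meaning to the query
            if word.endswith("es") and len(word) > 4:
                word = word[:-2]   # e.g. "searches" -> "search"
            elif word.endswith("s") and len(word) > 3:
                word = word[:-1]   # e.g. "hotels" -> "hotel"
            terms.append(word)
        return terms

    # Both of these queries reduce to the same core terms: ['hotel', 'london']
    print(normalise_query("What is the best hotel in London?"))
    print(normalise_query("best hotels London"))

In practice, real systems use proper stemmers and synonym dictionaries rather than hard-coded lists, but the principle is the same: different surface forms of a query collapse to the same underlying intent.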
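
Similarly, the Results Pages point can be sketched as a toy scoring function that blends how well the page text matches the query with credibility signals for the author and the host site, and then picks out the passage that best matches the query. The weights, field names and windowed passage search are invented for this illustration and should not be read as Google’s ranking formula.

    # Toy relevance scoring and snippet selection: purely illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Page:
        text: str
        author_credibility: float   # 0.0 to 1.0, assumed to be known from elsewhere
        site_authority: float       # 0.0 to 1.0, assumed to be known from elsewhere

    def relevance(page, query_terms):
        """Blend content match with credibility signals (weights are arbitrary)."""
        words = page.text.lower().split()
        content_match = sum(words.count(t) for t in query_terms) / max(len(words), 1)
        return 0.6 * content_match + 0.25 * page.author_credibility + 0.15 * page.site_authority

    def best_passage(page, query_terms, window=12):
        """Return the stretch of the page that best matches the query,
        mimicking the idea of highlighting the relevant section for the user."""
        words = page.text.split()
        best, best_hits = "", -1
        for i in range(max(len(words) - window, 0) + 1):
            chunk = words[i:i + window]
            hits = sum(1 for w in chunk if w.lower().strip(".,") in query_terms)
            if hits > best_hits:
                best, best_hits = " ".join(chunk), hits
        return best

    page = Page(text="Our guide ranks every hotel in London by price and location.",
                author_credibility=0.8, site_authority=0.6)
    print(relevance(page, ["hotel", "london"]))
    print(best_passage(page, ["hotel", "london"], window=6))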

It goes without saying that many sites are going to require serious revision of their content, keywords and page layout in order to maintain their current rankings.

On the positive side, sites that already have a reputation for credibility and are considered ‘trusted’ will continue to be seen as such, provided that they continue to offer unique, relevant content and ensure that it is correctly labelled.


About the author:

ClickThrough is a digital marketing agency, providing search engine optimisation, pay per click management, conversion optimisation, web development and content marketing services.