It has now been six months since Google launched its Panda update to search results.

The update was intended to ‘reduce rankings for low-quality sales-sites which are low-value add for users; copy content from other websites; or websites that are just not very useful.’

However, Frank Watson, writing for Search Engine Watch, has revealed that the update initially affected a number of genuinely good sites – sites which are only now starting to regain their traffic.

To stay in line with Google’s Panda update, Watson recommends Google’s Webmaster Guidelines as the starting point for research, arguing: ‘if you have not read Google’s Webmaster Guidelines there is a good chance you have not done all your homework.’

Subsequent updates, such as Panda 2.3, have made Google’s algorithm more forgiving towards genuine content sites, as it can now differentiate between topics and languages.

Watson has emphasized that Panda can be countered, provided you know your industry.

He concludes that the original algorithm worked much like academic essay referencing – the most frequently cited sources generally being the highest in quality – and ultimately: “Panda is an attempt to return to these early methods, but Google can now apply latent semantics and understands duplicated content better.”

With the introduction of Panda, Google’s algorithm changed and therefore so has the face of SEO. To be successful in the field, it is crucial that these changes are taken into account sooner rather than later.

News brought to you by ClickThrough – a provider of SEO Services & Pay Per Click strategies.


About the author:

ClickThrough is a digital marketing agency, providing search engine optimisation, pay per click management, conversion optimisation, web development and content marketing services.