Tom Williams is back with another round of SEO news and insights. This week, he digs up more info on Google’s mobile usability ranking factor, covers the company’s new car insurance comparison service in the US, and much more.

Google’s New Mobile Usability Algorithm: What We Know

A few new details have emerged about Google’s mobile usability ranking factor, which is set to launch on April 21.

Here’s what we now know:

The ranking factor runs on a page-by-page basis

Google’s Gary Illyes confirmed on Wednesday that the new ranking factor would be applied on a page-by-page basis. This means that only pages that are mobile-optimised will benefit from any potential ranking boosts – not a whole site.

On the other side of the coin: If your site has some mobile-unfriendly pages, but is generally well optimised for mobile users, you can relax in the knowledge that these un-optimised pages won’t drag your site down the rankings.

Illyes was speaking at a Mobile SEO panel during SMX West, as reported by Search Engine Land (SEL).
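Because the factor is assessed per URL, it’s worth auditing pages one at a time rather than assuming a site-wide pass or fail. Below is a minimal Python sketch of that idea – it simply checks each page for a viewport meta tag, one of the basic signals of a mobile-optimised page. The URLs are placeholders, and a crude string check like this is no substitute for Google’s own Mobile-Friendly Test.

import requests

# Placeholder URLs -- swap in the pages you want to audit.
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/contact/",
]

def has_viewport_tag(html):
    # A responsive, mobile-optimised page normally declares a viewport.
    return '<meta name="viewport"' in html.lower()

for url in PAGES:
    response = requests.get(url, timeout=10)
    verdict = "looks mobile-friendly" if has_viewport_tag(response.text) else "no viewport tag found"
    print(url, "-", verdict)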

It runs in real-time

Illyes also confirmed that the ranking factor runs in real time. There are no big ‘data pushes’ that have to complete before a page can see a ranking uplift.

This doesn’t mean changes to your site will be acted on immediately. Google will still have to crawl your site and recognise the change before you see any benefits.

Desktop ranking signals are still important

SEL also reported that Google is still relying on desktop ranking signals for parts of its mobile usability ranking algorithm.

Barry Schwartz wrote: “Factors like page speed signals for ranking are based on the desktop version, not the mobile version.”

However, he also reported that Google was experimenting with separating out these signals on a “mobile versus desktop basis”.
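Google doesn’t say exactly how it measures speed, so treat the following purely as an illustration of why separating the signals might matter: the same URL can return a different (and differently sized) page to mobile and desktop visitors. This hypothetical Python sketch times a fetch under each user agent.

import time
import requests

URL = "https://www.example.com/"  # placeholder

# Illustrative user-agent strings for a desktop and a mobile browser.
USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 6.1; WOW64)",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)",
}

for device, ua in USER_AGENTS.items():
    start = time.time()
    response = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    elapsed = time.time() - start
    print(device, response.status_code, len(response.content), "bytes,",
          round(elapsed, 2), "seconds")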

Google Launches Car Insurance Comparison Service in California; Backtracks On Optimised Anchor Text Promoting the Service

As predicted, Google launched its car insurance comparison service in the US on Thursday.

The Google Compare service allows customers to weigh up car insurance quotes from several selected providers. Because the service is given favourable visibility in search results, it is likely to ruffle a few feathers in the car insurance world, where SEOs have worked hard to gain good rankings for valuable ‘car insurance comparison’-related keywords.

It’s a touchy subject, considering that Google managed to annoy a fair few SEOs with its blog post announcing the service.

The issue, as SEL reports, was keyword-rich anchor text – using descriptive target phrases (for example, ‘compare car insurance’ rather than a neutral word like ‘here’) as a link’s clickable text. It’s an SEO technique Google has cracked down on when used for large-scale link building.

Google wasn’t using keyword-rich anchor text as part of a widespread “link scheme”, so it wasn’t in violation of its own guidelines. However, it did decide to change the links to use innocuous anchor text – probably, as Barry Schwartz suggests, “to avoid any confusion and controversy in the industry”.

Here are screenshots of the changes made:

[Screenshot: the announcement blog post, before the anchor text was changed. Source: SEL.]

[Screenshot: the same post, after the anchor text was changed. Source: SEL.]

Car insurance comparison through Google Compare has been available here in the UK for some time now. The US launch is, for the time being, limited to California – though more states are set to follow shortly.

Google Wants to Use Facts, Not Links, To Rank Webpages

Google has proposed a method by which webpages could be ranked by the trustworthiness of the facts they present, rather than by ‘exogenous’ signals, such as the number of links pointing to them.

As New Scientist reported, the link-based PageRank model has allowed “websites full of misinformation [to] rise up the rankings, if enough people link to them”.

But Google’s proposed new system, detailed in a research paper, uses a score dubbed ‘Knowledge-Based Trust’ to rank the factual accuracy of a webpage’s content, “using joint inference in a novel multi-layer probabilistic model”.

Using this model, the research paper claims, Google has estimated the trustworthiness of 119 million webpages using a database of 2.8 billion facts. “Manual evaluation of a subset of the results confirms the effectiveness of the method,” write the paper’s authors.
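The model itself is considerably more involved – it jointly infers whether each extraction is correct and whether each fact is true – but the core intuition can be sketched simply: score a page by the proportion of its facts that agree with a trusted knowledge base. The Python toy below is our own simplification, with invented data, not the paper’s method.

# Invented mini knowledge base of (subject, predicate) -> object triples.
KNOWLEDGE_BASE = {
    ("paris", "capital_of"): "france",
    ("canberra", "capital_of"): "australia",
}

def trust_score(page_facts):
    # Score = fraction of the page's facts that the knowledge base confirms.
    # Facts the knowledge base knows nothing about are ignored.
    known = [(s, p, o) for (s, p, o) in page_facts if (s, p) in KNOWLEDGE_BASE]
    if not known:
        return None
    correct = sum(1 for (s, p, o) in known if KNOWLEDGE_BASE[(s, p)] == o)
    return correct / len(known)

facts = [
    ("paris", "capital_of", "france"),     # correct
    ("canberra", "capital_of", "sydney"),  # incorrect
]
print(trust_score(facts))  # 0.5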

The impetus for using links as a ranking signal is that, in an ideal world, they act as a ‘personal endorsement’ from the website that creates the link. However, as Google’s continuing battle against manipulative link building proves, this simply isn’t always the case.

It may seem that a fact-based ranking system would turn the world of SEO on its head. However, the act of creating trustworthy, useful pages isn’t too far from Google’s current recommendations for SEO best-practice – create great content that people want to link to.

And although this model would make sense for purely informational websites, it isn’t clear how it would work for commercially focussed sites.

The Death of the Meta Description? Google Testing Results with Descriptions Removed

Gianluca Fiorelli spotted a weird Google test last week – a normal search results page with the description text removed.

Does this spell the end of meta descriptions? We’d say no – we can’t see how this would be much use for searchers, as it only makes it more difficult to gauge whether a website is worth clicking.

Google is always conducting tests, the majority of which are never rolled out for real. Here’s another one to add to the pile.

Are Google’s AJAX Recommendations Obsolete?

More news from SMX West: SEL reports that Google may soon decommission its AJAX crawlability recommendations, as outlined in its Webmasters developer guidelines.

Currently, Google recommends creating ‘HTML snapshots’ to make dynamically generated AJAX content crawlable by search engines.
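Under that scheme, a crawler spotting a ‘hashbang’ (#!) URL requests an alternative ‘_escaped_fragment_’ version of it, and the server is expected to answer with a pre-rendered HTML snapshot. The URL translation itself is simple – here’s a Python sketch of it (the example URL is made up):

from urllib.parse import quote

def escaped_fragment_url(url):
    # Translate a #! URL into the ?_escaped_fragment_= form requested
    # by crawlers under Google's AJAX crawling scheme.
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return base + separator + "_escaped_fragment_=" + quote(fragment)

print(escaped_fragment_url("https://example.com/products#!category=shoes"))
# -> https://example.com/products?_escaped_fragment_=category%3Dshoes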

However, it’s likely that this recommendation is becoming obsolete as Google gets better at ‘fetching and rendering’ content.

Before Google improved its crawling capabilities, its spiders could only see the raw HTML behind a webpage. But with ‘fetch and render’, they are capable of ‘seeing’ a website as a user would see it.

As Barry Schwartz writes at SEL:

It is unclear if Google will completely stop crawling AJAX in this fashion or slowly phase it out as it gets better and better at rendering and interpreting more complex forms of JavaScript.

More Search Engine Optimisation News and Insights

Read last week’s SEO news roundup: Google Announces Mobile Ranking ‘Algorithm’; Flags Slow, Mobile-Unfriendly Sites

Download your free technical SEO guide, featuring 32 pages of best-practice advice for solid on-site SEO.


About the author:

Tom joined ClickThrough in 2011. Since then, he has developed an expertise in the technical side of search engine optimisation. He’s Google Analytics-qualified, and in his current role as Digital and Technical Executive, carries out monthly SEO activities and provides technical consultancy for several of the company’s largest accounts.