Brighton SEO: Technical Takeaways


Jade Coleman, senior technical SEO specialist, attended this year’s Brighton SEO conference. Here, she feeds back on her favourite keynote speakers and industry learnings.

This April I took the 185-mile drive down to the Dome for my first Brighton SEO conference, and it was BRILLIANT!

Brighton SEO is a conference where SEO professionals, newbies and hobbyists gather to share knowledge, insights and opinions on all matters SEO and digital.

There were many sessions throughout the day – often as many as five running at the same time – covering links, content, e-commerce, UX, international SEO and much more. As a technical specialist, it made sense for me to pitch a seat in the Dome Studio Theatre for the day, for sessions from keynote speakers on crawl, onsite and technical topics.

Here are some of my key takeaways, with a handy star rating, based on how technical I thought the talks were:

Hunting for Googlebot – The Quest for ‘Crawl Rank’ - Dawn Anderson

*basic technical

It’s no news to the internet savvy that the ability to self-publish has grown considerably over the past 10 years. This makes the web extremely large – so large that it’s difficult for search engines to crawl, which is a huge issue for Google.

Programmatically generated content is boring for Google to crawl – it will lose interest, so don’t create it.

Key takeaway: Crawl budget is shared between groups of sites on the same IP, and is therefore not necessarily related to SEO – something that only became apparent to me at the conference.


Server Logs After Excel Fails - Oliver Mason

***very technical

In Oliver Mason’s words, “This talk is on the first significant difficulty spike in server log analysis – having too much information”. This talk wasn’t just for the SEO savvy; it required some coding knowledge.

Server logs show what is being accessed and what crawling tools don’t show us - a great way to identify crawl waste.

Quite often, Excel fails us with large files. Oliver showed us some command-line tools and how to apply them to common scenarios – but this came with a warning: be careful, as some of these tools can delete files on your computer!
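As a minimal sketch of the kind of command-line analysis Oliver described: the log lines below are made-up sample data in an Apache-style format (the field positions assume that format), and unlike some of the tools he warned about, these commands are read-only.

```shell
# Create a small, made-up sample log (Apache combined-style format)
cat > /tmp/access_sample.log <<'EOF'
66.249.66.1 - - [10/Apr/2017:10:00:01 +0000] "GET /product/1 HTTP/1.1" 200 512 "-" "Googlebot/2.1"
66.249.66.1 - - [10/Apr/2017:10:00:02 +0000] "GET /search?q=a HTTP/1.1" 200 480 "-" "Googlebot/2.1"
10.0.0.5 - - [10/Apr/2017:10:00:03 +0000] "GET /product/1 HTTP/1.1" 200 512 "-" "Mozilla/5.0"
66.249.66.1 - - [10/Apr/2017:10:00:04 +0000] "GET /old-page HTTP/1.1" 404 0 "-" "Googlebot/2.1"
EOF

# Keep only Googlebot requests, pull out status code and URL
# (fields 9 and 7 in this format), then count each combination.
# 404s and parameterised URLs that bubble up here are crawl waste.
grep 'Googlebot' /tmp/access_sample.log \
  | awk '{print $9, $7}' \
  | sort | uniq -c | sort -rn
```

On a real multi-gigabyte log you would point the same pipeline at the live file; grep, awk, sort and uniq stream line by line, so they cope where Excel’s row limit fails.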

Key takeaway: There is a correlation between improvements to crawl efficiency and organic performance.


How to Identify & Fix Crawl Optimisation Issues - Barry Adams


This session from Barry Adams was one of my favourites of the Brighton SEO conference. Barry explained what crawl optimisation is and why it matters.

He emphasised the importance of not wasting Google’s time crawling your website, and of not wasting crawl budget on pages that aren’t really worth it.

We were shown how to identify crawl waste using DeepCrawl, how to compare the number of URLs crawled with the number of unique pages, how to ensure you’re not wasting crawl budget, and much more.

Here are some of Barry’s top tips:

  • Optimise XML sitemaps. Ensure your sitemap contains final URLs only – no redirects or non-200 status codes
  • List more items on one page and implement rel="prev"/rel="next" meta tags. Block any sorting parameters, such as sort-by-price, via robots.txt
  • Optimise faceted navigation by deciding which facets have SEO value, and build static pages for those. Disallow all other facets in robots.txt and add rel="nofollow" to their facet links
  • Block internal site search pages in robots.txt (e.g. User-agent: *, then Disallow: /SearchResults.aspx, Disallow: /*query=*, Disallow: /*s=*)
  • Avoid internal redirects
  • Use canonicals wisely – the rel="canonical" tag is primarily for indexation issues; it is not a fix for crawl waste
  • DON’T use canonicals for faceted navigation, pagination and sorting, or site search pages
  • Optimise load speed
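Written out as a file, the site-search blocking rules above look like this. The paths are the examples from the talk; the final line is a hypothetical sorting-parameter rule, added only to illustrate the earlier tip about blocking sort parameters:

```
User-agent: *
# Block internal site search result pages
Disallow: /SearchResults.aspx
Disallow: /*query=*
Disallow: /*s=*
# Hypothetical sorting parameter, per the sorting tip above
Disallow: /*sort=*
```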

Key takeaway: If you waste crawl budget, the right pages are unlikely to be crawled and indexed.


Guaranteeing Success with your Onsite Strategy - James Perrott

**intermediate strategy/content

Keyword mapping is important and should form the basis of any strategy.

James emphasised the importance of onsite strategy: get this wrong and your SEO will fail.

Key Takeaway: Employ a six-step process for your onsite strategy:

  1. Exhaustive Keyword Research
  2. Category Architecture
  3. Review Keyword-to-URL Mapping
  4. URL-specific Optimisation
  5. Functional Content Delivery
  6. Measure and Refine


International Targeting with Hreflang Tags – Emily Mace


Emily emphasised that hreflang tags are quite often used as a geo-targeting tool – but they aren’t one, and that needs to stop.

So what is a hreflang tag? This tag lets Google know what language you are using on the pages of your website so that Google can serve that page to users searching in that specific language.

The hreflang tag resolves duplicate content issues and identifies regional differences in language (Spanish for Spain versus Spanish for Mexico, for example).

  • Bing doesn’t support hreflang
  • Always put the language code first in the tag
  • Follow ISO 639-1 for language codes
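As a sketch, hreflang annotations for the Spanish example above might look like this. The URLs are hypothetical; the language code (ISO 639-1) comes first, followed by the country code:

```html
<!-- Hypothetical URLs: Spanish for Spain vs Spanish for Mexico -->
<link rel="alternate" hreflang="es-es" href="https://www.example.com/es-es/" />
<link rel="alternate" hreflang="es-mx" href="https://www.example.com/es-mx/" />
<!-- x-default: point this at the language selector page -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```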

Key takeaway: Emily went on to talk about the x-default option. Many webmasters use it to identify the default language of the website – whichever version they feel is ‘most important’ – but that’s incorrect. x-default should really only be used on your language selector page.


And last but not least, the talk I found most interesting of the day came from Colin Woon, Head of SEO at Telefónica O2 UK.

How afraid should SEOs be of JavaScript – Colin Woon

***very technical

There are many sites that won’t work without JavaScript enabled, and just 0.2% of people have JavaScript disabled. Often the visible URLs we see may change while the server-side URLs stay the same.

Colin used a great example, taken from the O2 website, of how JavaScript content does get crawled. On product pages there is a JavaScript container for Bazaarvoice reviews; because the reviews load into the container, their text is not in the page itself – it sits on Bazaarvoice. Colin took some text from a review on the O2 website and searched for it in Google with quotation marks. The exact O2 page he took it from appeared in the search results – proving that JavaScript content is crawled.
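You can run a rough first step of Colin’s check yourself: fetch the raw HTML and look for a phrase you can see in the rendered page. This sketch uses a hard-coded stand-in for the fetched page and a hypothetical review phrase, so it runs without a network connection:

```shell
# In practice you would fetch the live page, e.g.:
# raw_html=$(curl -s 'https://www.example.com/product')
# Stand-in for raw HTML where reviews load into an empty JS container:
raw_html='<div id="bazaarvoice-container"></div>'
phrase='Great battery life'   # hypothetical review text seen on the rendered page

if printf '%s' "$raw_html" | grep -q "$phrase"; then
  echo "phrase present in raw HTML (server-rendered)"
else
  echo "phrase absent from raw HTML (likely JavaScript-injected)"
fi
```

If the phrase is absent from the raw HTML but a quoted Google search for it still returns the page, as Colin’s O2 example did, that’s evidence the JavaScript-injected content is being crawled.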

Key takeaway: If you use # in URLs, change it to ? – fragments after a # aren’t sent to the server, whereas query-string URLs can be crawled and will rank.


Brighton SEO was a great learning experience with a fabulous line-up of speakers and a great turn out of delegates. I look forward to returning to the event for a future edition.

Did you attend Brighton SEO? If so, what were your key learnings from the event? Share your comments in the box below.

Interested in learning more about our Technical SEO & Audits service? Contact us to find out more.

Subscribe to our blog and get the latest industry updates direct to your inbox