19 technical SEO facts for beginners

Technical SEO is an awesome field. There are so many little nuances to it that make it exciting, and its practitioners are required to have excellent problem-solving and critical thinking skills.
In this article, I cover some fun technical SEO facts. While they might not impress your date at a dinner party, they will beef up your technical SEO knowledge — and they could help your website rank better in search results.
Let’s dive into the list.
1. Page speed matters
Most think of slow load times as a nuisance for users, but the consequences go further than that. Page speed has long been a search ranking factor, and Google has even said that it may soon use mobile page speed as a factor in mobile search rankings. (Of course, your audience will appreciate faster page load times, too.)
Many have used Google’s PageSpeed Insights tool to get an analysis of their site speed and recommendations for improvement. For those looking to improve mobile site performance specifically, Google has a new page speed tool out that is mobile-focused. This tool will check the page load time, test your mobile site on a 3G connection, evaluate mobile usability and more.
2. Robots.txt files are case-sensitive and must be placed in a site’s main directory
The file must be named in all lower case (robots.txt) in order to be recognized. Additionally, crawlers only look in one place when they search for a robots.txt file: the site’s main directory. If they don’t find it there, oftentimes they’ll simply continue to crawl, assuming there is no such file.
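For example, a minimal robots.txt served from the site root might look like this (the paths and sitemap URL here are hypothetical):

```text
# Must live at https://www.example.com/robots.txt (lowercase filename, in the main directory)
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```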
3. Crawlers can’t always access infinite scroll
And if crawlers can’t access it, the page may not rank.
When using infinite scroll for your site, make sure there is a paginated series of pages in addition to the one long scroll, and implement replaceState/pushState on the infinite scroll page. This is an optimization many web developers overlook, so check your infinite scroll for rel="next" and rel="prev" in the code.
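As a rough sketch, a paginated component page in an infinite scroll series might include markup and history updates like this (the URLs and function name are illustrative):

```html
<!-- On /articles?page=2: point to the neighboring component pages -->
<link rel="prev" href="https://www.example.com/articles?page=1">
<link rel="next" href="https://www.example.com/articles?page=3">

<script>
  // When the user scrolls far enough that the next chunk of items loads,
  // update the address bar to the matching component URL so each scroll
  // position corresponds to a crawlable, paginated page.
  function onChunkLoaded(pageNumber) {
    history.pushState(null, '', '/articles?page=' + pageNumber);
  }
</script>
```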
4. Google doesn’t care how you structure your sitemap
As long as it’s XML, you can structure your sitemap however you’d like — category breakdown and overall structure is up to you and won’t affect how Google crawls your site.
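For instance, a bare-bones XML sitemap (with made-up URLs) is simply a flat list of pages; how you split or group URLs across files is up to you:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/blog/some-post/</loc>
  </url>
</urlset>
```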
5. The noarchive tag will not hurt your Google rankings
This tag will keep Google from showing the cached version of a page in its search results, but it won’t negatively affect that page’s overall ranking.
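The tag goes in the page's head section; a sketch:

```html
<!-- Keeps Google from showing a cached copy of this page in its results -->
<meta name="robots" content="noarchive">
```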
6. Google usually crawls your home page first
It’s not a rule, but Google usually finds the home page first. An exception would be if a large number of links point to a specific page deeper within your site.
7. Google scores internal and external links differently
A link to your content or website from a third-party site is weighted differently than a link from your own site.
8. You can check your crawl budget in Google Search Console
Your crawl budget is the number of pages that search engines can and want to crawl in a given amount of time. You can get an idea of yours in your Search Console. From there, you can try to increase it if necessary.
9. Disallowing pages with no SEO value will improve your crawl budget
Pages that aren’t essential to your SEO efforts often include privacy policies, expired promotions or terms and conditions.
My rule is that if the page is not meant to rank, and it does not have 100 percent unique quality content, block it.
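In robots.txt terms, that rule might look like this (the paths are hypothetical):

```text
User-agent: *
# Low-value pages that are not meant to rank
Disallow: /privacy-policy/
Disallow: /terms-and-conditions/
Disallow: /promotions/expired/
```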
10. There is a lot to know about sitemaps
- XML sitemaps must be UTF-8 encoded.
- URLs in a sitemap should not include session IDs.
- They must contain no more than 50,000 URLs and be no larger than 50 MB.
- A sitemap index file is recommended instead of multiple sitemap submissions.
- You may use different sitemaps for different media types: Video, Images and News.
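A sitemap index file, for example, is just a list of the individual sitemaps (the filenames here are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-news.xml</loc>
  </sitemap>
</sitemapindex>
```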
11. You can check how Google’s mobile crawler ‘sees’ pages of your website
With Google migrating to a mobile-first index, it’s more important than ever to make sure your pages perform well on mobile devices.
Use Google Search Console’s Mobile Usability report to find specific pages on your site that may have issues with usability on mobile devices. You can also try the mobile-friendly test.
12. Half of page one Google results are now HTTPS
Website security is becoming increasingly important. In addition to the ranking boost given to secure sites, Chrome is now issuing warnings to users when they encounter sites with forms that are not secure. And it looks like webmasters have responded to these updates: According to Moz, over half of websites on page one of search results are HTTPS.
13. Try to keep your page load time to 2 to 3 seconds
Google Webmaster Trends Analyst John Mueller recommends a load time of two to three seconds (though a longer one won’t necessarily affect your rankings).
14. Robots.txt directives do not stop your website from ranking in Google (completely)
There is a lot of confusion over the “Disallow” directive in your robots.txt file. Your robots.txt file simply tells Google not to crawl the disallowed pages/folders/parameters specified, but that doesn’t mean these pages won’t be indexed. From Google’s Search Console Help documentation:
You should not use robots.txt as a means to hide your web pages from Google Search results. This is because other pages might point to your page, and your page could get indexed that way, avoiding the robots.txt file. If you want to block your page from search results, use another method such as password protection or noindex tags or directives.
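A noindex directive, one of the methods Google suggests, is a meta tag in the page's head (or the equivalent X-Robots-Tag HTTP header):

```html
<!-- Allows crawling but tells Google to keep this page out of its index -->
<meta name="robots" content="noindex">
```

Note that Google has to be able to crawl the page to see this tag, so don't also disallow that page in robots.txt.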
15. You can point canonical tags from new domains to your main domain
This allows you to keep the value of the old domain while using a newer domain name in marketing materials and other places.
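A cross-domain canonical is a single link element on the new domain's page (the domain names here are placeholders):

```html
<!-- On https://www.new-campaign-domain.com/, in the <head> -->
<link rel="canonical" href="https://www.main-domain.com/">
```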
16. Google recommends keeping redirects in place for at least one year
Because it can take months for Google to recognize that a site has moved, Google representative John Mueller has recommended keeping 301 redirects live and in place for at least a year.
Personally, for important pages — say, a page with rankings, links and good authority redirecting to another important page — I recommend you never get rid of redirects.
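On an Apache server, for example, a 301 redirect can be kept in place with a single .htaccess line (the paths are illustrative):

```apache
# Permanent redirect from the old URL to the new one
Redirect 301 /old-page/ https://www.example.com/new-page/
```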
17. You can control your search box in Google
Google may sometimes include a search box with your listing. This search box is powered by Google Search and works to show users relevant content within your site.
If desired, you can choose to power this search box with your own search engine, or you can include results from your mobile app. You can also disable the search box in Google using the nositelinkssearchbox meta tag.
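Disabling the search box, for instance, is a one-line meta tag in the page's head:

```html
<!-- Opt out of the sitelinks search box in Google results -->
<meta name="google" content="nositelinkssearchbox">
```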
18. You can use the ‘notranslate’ tag to prevent translation in search
The “notranslate” meta tag tells Google not to offer a translated version of the page in other-language versions of Google search. This is a good option if you are skeptical about Google’s ability to properly translate your content.
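The tag itself is simple:

```html
<!-- Tells Google not to offer translations of this page in search results -->
<meta name="google" content="notranslate">
```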
19. You can get your app into Google Search with Firebase app indexing
If you have an app that you have not yet indexed, now is the time. By using Firebase app indexing, you can enable results from your app to appear when someone who’s installed your app searches for a related keyword.
Staying up to date with technical SEO
If you would like to stay up to date with technical SEO, there are a few great places to do that.
- First, I recommend you watch the videos Barry Schwartz does each week.
- Second, keep your eye on Search Engine Land.
- Third, jump on every blog post Google publishes on Google Webmaster Central.
- Finally, it is always a good idea to jump into a Google Webmaster hangout or simply watch the recording on YouTube.
I hope you enjoyed these 19 technical SEO facts. There are plenty more, but these are a few fun ones to chew on.
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land.