Just a regular summary of what's been happening around Google Search, specifically for website owners, publishers, and SEOs.
I hope your year both ended well and is starting off well. What a unique year it was, right? If you're watching this in the far future, then first off, congratulations for making it that far.
In this post, we'll be covering some neat new things around the foundation of search, namely crawling and indexing, as well as another relevant part of search, namely links. If you're curious to find out more, then stay tuned.
A bit of background: crawling is when Googlebot looks at pages on the web, following the links that it sees there to find other web pages. Indexing is the other part, when Google's systems try to process and understand the content on those pages. Both of these processes have to work together, and the barrier between them can sometimes be a bit fuzzy.
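To make the crawling half concrete, here's a toy sketch in Python of that follow-the-links loop. The start URL and the crawl cap are placeholders, and real Googlebot handles far more (robots.txt, politeness, scheduling), so treat this as an illustration only.

```python
# Toy crawler sketch: fetch a page, collect its links, queue new ones.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Gathers href values from <a> tags while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start = "https://example.com/"  # placeholder; use your own site
queue, seen = deque([start]), {start}
while queue and len(seen) < 25:  # cap the toy crawl
    url = queue.popleft()
    html = urlopen(url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        absolute = urljoin(url, href)
        # Only queue http(s) links we haven't seen yet.
        if absolute.startswith("http") and absolute not in seen:
            seen.add(absolute)
            queue.append(absolute)
    print("crawled:", url, "-> found", len(collector.links), "links")
```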
Let's start with news about crawling. While Google has been crawling the web for decades, there's always something to work on to make it easier, faster, or better understandable for site owners. In Search Console, Google recently launched an updated Crawl Stats report.
Google Search Console is a free tool that you can use to access information on how Google Search sees and interacts with your website. This report gives site owners information on how Googlebot crawls their site.
The report covers the number of requests by response code and crawl purpose, host-level information on accessibility, examples, and more. Some of this is also in a server's access logs, but getting and understanding those logs is often hard.
Google hopes this report makes it easier for sites of all sizes to get actionable insights into the habits of Googlebot.
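If you'd rather pull similar numbers straight from your own access logs, a rough sketch like the one below can tally Googlebot requests by response code. It assumes the common combined log format and a file named access.log, both of which are placeholders for your setup, and the user-agent check is deliberately crude.

```python
# Count Googlebot requests per HTTP response code in an access log.
import re
from collections import Counter

# In combined log format, the status code follows the quoted request
# line, e.g.: "GET /page HTTP/1.1" 200 ...
STATUS = re.compile(r'" (\d{3}) ')

counts = Counter()
with open("access.log") as log:  # placeholder path
    for line in log:
        if "Googlebot" not in line:  # crude user-agent filter
            continue
        match = STATUS.search(line)
        if match:
            counts[match.group(1)] += 1

for code, count in counts.most_common():
    print(code, count)
```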
Together with this tool, Google also launched a new guide specifically for large websites and crawling. As a site grows, crawling can become harder, so Google compiled the best practices to keep in mind. You don't have to run a large website to find this guide useful, though, if you're keen. And finally, still on the topic of crawling, Google has started crawling with HTTP/2.
HTTP/2 is an updated version of the protocol used to access web pages. It has some improvements that are particularly relevant for browsers, and Google has been using it to improve normal crawling too.
Google has sent out messages to websites that it's crawling with HTTP/2, and plans to add more over time if things go well. As you can see, there's still room for news in something as foundational as crawling.
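If you're curious whether your own server can negotiate HTTP/2 at all, one way to check is a small script like this. It uses the third-party httpx library (install with `pip install httpx[http2]`), not anything Google provides, and the URL is a placeholder for your own site.

```python
# Check which HTTP version a server negotiates.
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://example.com/")  # placeholder URL
    # http_version is "HTTP/2" when the server negotiated HTTP/2,
    # otherwise "HTTP/1.1".
    print(response.http_version)
```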
Let's move on to indexing. As mentioned before, indexing is the process of understanding and storing the content of web pages so that Google can show them in search results appropriately. For indexing, there are two items of news to share today.
First, requesting indexing in the URL Inspection tool is back in Search Console. You can once again manually submit individual pages to request indexing, if you run into a situation where that's useful. For the most part, sites should not need to use these systems and should instead focus on providing good internal linking and good sitemap files. If a site does those well, then Google's systems will be able to crawl and index content from the website quickly and automatically.
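Sitemap files themselves are simple: an XML list of the URLs you'd like Google to know about. As a minimal sketch using Python's standard library, with hypothetical placeholder URLs and dates, you could generate one like this:

```python
# Generate a minimal sitemap.xml from a list of (URL, lastmod) pairs.
from xml.etree import ElementTree as ET

# Placeholder data; replace with your site's real URLs.
pages = [
    ("https://example.com/", "2021-01-10"),
    ("https://example.com/blog/some-post", "2021-01-08"),
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

ET.ElementTree(urlset).write(
    "sitemap.xml", encoding="utf-8", xml_declaration=True
)
```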
Secondly, in Search Console, Google has updated the Index Coverage report significantly. With this change, Google has worked to help site owners be better informed about issues that affect the indexing of their site's content.
For example, Google has removed the somewhat generic "crawl anomaly" issue type and replaced it with more specific error types.
Moving on to links: Google uses links to find new pages and to better understand their context on the web. Besides links, Google uses a lot of different factors in search, but links are an integral part of the web, so it's reasonable for sites to think about them.
Google's guidelines mention various things to avoid with regard to links, such as buying them, and Google often gets questions about what sites can do to attract links.
Recently, Google ran across a fascinating article from Giselle Navarro on content and link-building campaigns that she saw last year.
While Google obviously can't endorse any particular company that worked on these campaigns, they're great examples of what sites can do. It's worth taking a look at them and thinking about some creative things that you might be able to do in your site's niche.
Content isn't always easy, but it can help you reach a broader audience and, who knows, maybe get a link or two. And just a short note on news about structured data: Google has decided to deprecate the old Structured Data Testing Tool and to focus on the Rich Results Test.
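For context, structured data here means schema.org markup, most commonly embedded in a page as JSON-LD, which tools like the Rich Results Test can validate. As a minimal, hypothetical sketch with placeholder values:

```python
# Build a schema.org Article object and serialize it as JSON-LD.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",        # placeholder
    "datePublished": "2021-01-12",          # placeholder
    "author": {"@type": "Person", "name": "Jane Doe"},  # placeholder
}

# Embed the output in your page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```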
The good news is that the Structured Data Testing Tool isn't going away, but rather finding a new home in the schema.org community. And that's all for now, folks!
Hi, my name is Lin. I'm the campaign manager at advalley, sharing my insights and experience from helping blockchain technologies grow online.