Google has started highlighting featured results in bold
A new kind of highlighting can be seen in extended snippets.
Google usually uses bold to highlight keywords in snippets, but now it has started highlighting products in extended snippets as well.
As reported by the Alaich Telegram channel, this was first noticed by foreign colleagues, but the innovation works in our region too.
There is still no definite explanation for this highlighting. If it were just advertising, the same results would hardly appear for other queries – yet they do, as in this case. Most likely, Google is highlighting the most relevant options here.
Google has not officially commented on the innovation, but what do you think about it? Write in the comments!
the time between the user's first interaction with the page and the browser's response – First Input Delay (FID).
These metrics can be optimized so that the site is of higher quality and gets a better score from the search engine.
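In the browser, this delay can be observed in the field with the Event Timing API. A minimal sketch (one of several ways to take the measurement, not the only one):

```typescript
// Observe First Input Delay (FID): the gap between the user's first interaction
// and the moment the browser was able to start processing the event handler.
const fidObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as PerformanceEventTiming[]) {
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms`);
  }
});
fidObserver.observe({ type: 'first-input', buffered: true });
```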
How to optimize your LCP score – speed up content loading
We should strive to have the largest element on the page rendered no more than 2.5 seconds after the page starts loading. This is considered the optimal indicator for a site that is comfortable to use.
LCP is influenced by four factors:
server response time;
render-blocking JavaScript and CSS;
resource loading time;
client-side rendering.
In this article, we have discussed how to optimize each item to arrive at a good LCP score.
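To see where a specific page stands, LCP can also be measured right in the browser with a PerformanceObserver – a minimal sketch (Lighthouse and the web-vitals library report the same metric with more care):

```typescript
// Track Largest Contentful Paint (LCP): the render time of the largest element.
// The last 'largest-contentful-paint' entry emitted before the user interacts
// with the page is the final LCP value.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  const lcpSeconds = lastEntry.startTime / 1000; // ms from navigation start
  console.log(`LCP candidate: ${lcpSeconds.toFixed(2)} s (target: <= 2.5 s)`);
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```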
How to Optimize CLS: Page Layout Shifts That Disrupt Users
Content on the page can shift when some elements load asynchronously: this happens, for example, if the webmaster has not reserved space for a banner at the top of the page. When the banner loads, it pushes all of the content down.
CLS stands for Cumulative Layout Shift and helps you gauge how often users encounter unexpected shifts. The optimal CLS score is no more than 0.1 for 75% of sessions.
The article covers how to measure the metric, which shifts are considered acceptable, and how to optimize it.
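As with LCP, the metric can be observed in the browser. A simplified sketch that sums layout shifts (the current CLS definition actually takes the largest burst of shifts, which the web-vitals library handles for you):

```typescript
// Minimal typing for layout-shift entries (not yet in the default DOM lib).
interface LayoutShift extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

// Accumulate layout shifts; shifts right after user input do not count.
let cls = 0;
const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShift[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log(`CLS so far: ${cls.toFixed(3)} (target: <= 0.1)`);
});
clsObserver.observe({ type: 'layout-shift', buffered: true });
```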
What affects website loading speed [Research 5.2 Million Pages]
The Backlinko blog team, led by Brian Dean, did some research on Google SERPs to see which acceleration methods are used by the fastest pages. The sample had 5.2 million pages from desktop and mobile, so the result is worth seeing.
Learn more about the findings with graphs and charts in the full blog article. A couple of interesting points:
The average time to first byte (TTFB) is 1.286 seconds on desktop and 2.594 seconds on mobile. The average time to full page load is 10.3 seconds on desktop and 27.3 seconds on mobile.
Oddly enough, the best results come from compressing files either as little as possible or as much as possible before sending them from the server: such pages perform better than pages with medium compression levels.
On desktop, loading speed is influenced more by the use of a CDN; on mobile – by the number of HTML requests.
More interesting information in the full article.
How to reduce website weight and speed up page loading using gzip, brotli, minification and more
Images, videos and various interactive elements are heavy and slow the site down. You can compress heavy elements and speed up loading.
There are compression algorithms for this; the most popular now are gzip and brotli. Brotli compresses more aggressively than gzip and offers more compression levels, but at the higher levels it works more slowly.
These compression methods put some load on the server because of the compression work, but overall they pay off: they reduce the amount of data transferred and speed up page loading.
There are other ways to lighten the site as well: minification – that is, reducing the size of CSS, HTML and JS – caching, and image optimization. All of this is covered in the article.
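To get a feel for the gzip/brotli trade-off, here is a minimal Node.js sketch that compresses the same file at different levels (the file name is just a placeholder; real servers usually enable compression in nginx/Apache or via middleware rather than by hand):

```typescript
import { readFileSync } from 'node:fs';
import { gzipSync, brotliCompressSync, constants } from 'node:zlib';

const html = readFileSync('index.html'); // hypothetical file

const gz = gzipSync(html, { level: 6 }); // a common gzip default
const brFast = brotliCompressSync(html, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 4 }, // faster, lighter compression
});
const brMax = brotliCompressSync(html, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 }, // slowest, smallest output
});

console.log(
  `original: ${html.length} B, gzip: ${gz.length} B, ` +
  `brotli q4: ${brFast.length} B, brotli q11: ${brMax.length} B`,
);
```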
How to speed up loading: optimizing the code at the top of the page
There is another way to make loading faster: optimize the code of the upper part of the page – the part the user sees first when they open the site. If the top of the page is optimized, the user sees content as early as possible, and the rest can load later.
There are several methods to optimize the code at the top of the page:
remove unnecessary symbols and scripts from the top of the code;
set up asynchronous loading with jQuery;
speed up the time to first byte (TTFB);
configure loading from the cache on the user side.
All this in the article.
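As an illustration of deferring non-critical code, here is a minimal sketch that loads a script only after the page (and the above-the-fold content) has finished loading; the script URL is a made-up placeholder:

```typescript
// Load a non-critical script without blocking rendering of the top of the page.
function loadDeferredScript(src: string): void {
  const script = document.createElement('script');
  script.src = src;
  script.async = true; // do not block HTML parsing
  document.head.appendChild(script);
}

// Wait until the page has loaded before pulling in the non-critical code.
window.addEventListener('load', () => {
  loadDeferredScript('/js/comments-widget.js'); // hypothetical non-critical script
});
```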
How to optimize images for fast loading
Great SEO Guide for Images
A great, detailed article on everything important in image optimization: not only compression and weight reduction, but also requirements for size, quality and uniqueness, plus practical tips for filling in meta tags.
Much of the advice is based on a webinar by Demi Murych, a technical SEO and reverse engineering specialist.
Requirements for pictures:
whether the number of pictures on the page matters;
how quality affects SEO and what the minimum image sizes on the site should be;
how important uniqueness is for search engines and how to use other people's images legally;
how the search engine analyzes the subject of images;
how image placement on the page affects SEO.
what image format to choose;
how to set up image selection by the browser: correctly, not the way everyone does it;
how to set up responsive images;
how to set up lazy loading;
the best compression methods.
Filling meta tags:
which meta tags must be filled in, and which ones are optional;
how to fill in title and alt;
whether the file name is important to the search engine.
How to set up lazy loading of images
A separate article with a detailed description of how to set up lazy loading of images. With this approach, the user does not have to wait for all the content to load: images are loaded as the user views the page.
There are several configuration options:
While the user scrolls: when they reach the place where the picture should be, it is loaded.
When the user clicks on an element: the picture is loaded when they follow a link or click on a preview.
In the background: content loads gradually, for example while the user has opened a document and left it open. Usually used for large drawings and diagrams.
The choice of option depends on the behavior of users on the site. In the article, we will analyze whether lazy loading is really necessary, and how to configure it correctly.
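For the first option – loading while the user scrolls – a minimal sketch with IntersectionObserver might look like this (keeping the real URL in a data-src attribute is a common convention, not a requirement; in modern browsers the native loading="lazy" attribute covers the basic case without any script):

```typescript
// Load images only when they approach the viewport.
const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? ''; // swap in the real URL
    observer.unobserve(img);         // each image is loaded only once
  }
}, { rootMargin: '200px' });         // start a bit before the image is visible

document.querySelectorAll<HTMLImageElement>('img[data-src]').forEach((img) => io.observe(img));
```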
WebP format: should I use it for optimization
WebP is a graphics format developed by Google in 2010. It is an alternative to PNG and JPEG with a smaller file size at the same image quality. At the same time, WebP can preserve background transparency and animation.
The format is more advantageous in terms of speeding up website loading, but not all browsers support it.
In this article, we have collected all the most important about the WebP format: studies of quality and weight, advantages and disadvantages of the format, browser support, conversion methods, and other topics.
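Because not every browser decodes WebP, the usual approach is to serve a fallback. In plain HTML the picture element with a WebP source and a JPEG/PNG img does this without any script; as an alternative, here is a hedged sketch of script-based detection (the data-webp attribute is a made-up convention for the example):

```typescript
// Check whether the browser can decode WebP by loading a tiny WebP data URI
// (the widely used detection snippet), then switch image sources accordingly.
function supportsWebP(): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = new Image();
    probe.onload = () => resolve(probe.width > 0);
    probe.onerror = () => resolve(false);
    probe.src =
      'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
  });
}

supportsWebP().then((ok) => {
  if (!ok) return; // keep the JPEG/PNG sources
  document.querySelectorAll<HTMLImageElement>('img[data-webp]').forEach((img) => {
    if (img.dataset.webp) img.src = img.dataset.webp; // hypothetical attribute
  });
});
```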
Google spoke about how it detects duplicate content and conducts canonicalization
The developers talked about this in the new episode of the Search Off The Record podcast.
Google employees John Mueller, Martin Splitt, Gary Illyes, and Lizzie Harvey elaborated on duplicate content and Google's canonicalization. We have picked out the most important points.
How Google detects duplicate pages
Everything turned out to be quite simple: a checksum is calculated for each page – a unique fingerprint based on the page's text. If the checksums of two pages match, Google treats them as duplicates. (In other applications, checksums are also used to verify the integrity of data during transmission.)
The checksum is calculated from the main indicator – the central element of the page, i.e. the main content without headers, footers and sidebars. Pages with matching checksums are grouped into a duplicate cluster, from which Google picks one page to show in the SERP. This way the search engine can detect not only full duplicates, but partial ones as well.
Martin Splitt on partial duplicate detection: "We have several algorithms that detect and ignore the template parts of pages. For example, this is how we exclude navigation from the checksum calculation and remove the footer. We are left with what we call the central element – the central content of the page, something like the very essence of the page. After calculating and comparing the checksums, we combine those that are fully or partially similar into a duplicate cluster."
Reducing a page to a checksum is needed to simplify the work: the developers simply see no point in comparing full pages – it would take more resources for the same result.
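As a toy illustration of the idea (not Google's actual algorithm), duplicate grouping by checksum could look something like this: hash only the central content of each page and cluster pages whose hashes match.

```typescript
import { createHash } from 'node:crypto';

interface Page { url: string; mainContent: string } // boilerplate already stripped

// A checksum of the central content; whitespace is normalized so trivial
// formatting differences do not change the hash.
function checksum(text: string): string {
  return createHash('sha256').update(text.replace(/\s+/g, ' ').trim()).digest('hex');
}

// Group pages by checksum: every group with more than one page is a duplicate cluster.
function clusterDuplicates(pages: Page[]): Map<string, Page[]> {
  const clusters = new Map<string, Page[]>();
  for (const page of pages) {
    const key = checksum(page.mainContent);
    clusters.set(key, [...(clusters.get(key) ?? []), page]);
  }
  return clusters;
}
```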
How Google selects a canonical page
The podcast also drew the main line between duplicates and canonicalization: first, duplicate pages are detected and grouped together, and then the main one is chosen – that is canonicalization.
Canonicalization is the process of selecting the main page within a cluster. To choose the canonical page objectively, Google uses more than 20 signals. A machine-learning model assigns weights to them: when one signal's weight decreases, the weight of another increases, and vice versa.
Martin Splitt on the signals: "Obviously, one of them is the content of the page. But there may be other signals: which page has higher PageRank, which protocol the page uses (http or https), whether the page is included in the sitemap, whether it redirects to another page, whether the rel=canonical attribute is set... Each of these signals has its own weight, which we calculate using machine learning. After comparing all the signals for all pairs of pages, we arrive at the actual canonical."
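Purely to illustrate the idea of weighted signals (the signals and weights below are made up for the example; Google's real model is machine-learned and uses 20+ signals), a toy sketch might score the pages in a cluster like this:

```typescript
// Made-up signals and weights for a duplicate cluster.
interface PageSignals {
  url: string;
  isHttps: boolean;
  inSitemap: boolean;
  hasRelCanonicalToSelf: boolean;
  pageRank: number; // normalized 0..1
}

const WEIGHTS = { isHttps: 1.0, inSitemap: 0.5, hasRelCanonicalToSelf: 2.0, pageRank: 3.0 };

function canonicalScore(p: PageSignals): number {
  return (
    (p.isHttps ? WEIGHTS.isHttps : 0) +
    (p.inSitemap ? WEIGHTS.inSitemap : 0) +
    (p.hasRelCanonicalToSelf ? WEIGHTS.hasRelCanonicalToSelf : 0) +
    p.pageRank * WEIGHTS.pageRank
  );
}

// The highest-scoring page in the cluster is picked as canonical.
function pickCanonical(cluster: PageSignals[]): PageSignals {
  return cluster.reduce((best, page) =>
    canonicalScore(page) > canonicalScore(best) ? page : best,
  );
}
```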
Finally, the developers noted that canonicalization has nothing to do with ranking.
Dogpile is a metasearch engine that fetches results from Google, Yahoo!, Yandex, Bing and other popular search engines, including those of audio and video content providers.
What kind of search engine is Dogpile
Dogpile, one of the most popular metasearch engines on the Web, was launched in 1996. It is now operated by InfoSpace, which recently streamlined its interface, giving it a fresh look and new features.
Using metasearch technology, Dogpile searches the Web via the Internet's leading search engines (see list below), promising to bring, with one click, the very best results from its combined pool of search engine sources.
(Note: although sponsored links are labeled as such, they are interspersed throughout the results listings and aren't always easy to spot.)
Dogpile also displays result links on the right-hand side of its results page for clustering and refining searches even further.
Thus, the searcher can drill down into narrower subtopics without having to use complex search syntax. For the intrepid researcher, Dogpile also provides an Advanced Search page.
What is a metasearch engine?
A metasearch engine (or search aggregator) is an online information retrieval tool that uses the data of other search engines to produce its own results. Metasearch engines take a user's query and immediately pass it on to several search engines for results.
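As a rough sketch of that idea (the endpoints and JSON shape below are hypothetical placeholders, not real APIs), a metasearch engine fans the same query out to several engines and merges the de-duplicated results:

```typescript
interface SearchResult { url: string; title: string }

// Hypothetical search API endpoints.
const ENGINES = [
  'https://engine-a.example/search?q=',
  'https://engine-b.example/search?q=',
  'https://engine-c.example/search?q=',
];

async function metaSearch(query: string): Promise<SearchResult[]> {
  // Query all engines in parallel.
  const responses = await Promise.all(
    ENGINES.map(async (base) => {
      const res = await fetch(base + encodeURIComponent(query));
      return (await res.json()) as SearchResult[]; // assumes each engine returns JSON
    }),
  );
  // Merge the result lists and drop duplicate URLs.
  const seen = new Set<string>();
  return responses.flat().filter((r) => !seen.has(r.url) && !!seen.add(r.url));
}
```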
History of metasearch engines and Dogpile
The first person to implement the idea of metasearch was Daniel Dreilinger of Colorado State University. He developed SearchSavvy, which let users search around 20 different search engines and directories simultaneously.
Although fast, the search engine was limited to simple searches and so was not very reliable.
University of Washington student Eric Selberg released a more "updated" version called MetaCrawler.
This search engine improved on SearchSavvy's accuracy by adding its own search syntax behind the scenes and matching that syntax to the syntax of the search engines it queried.
MetaCrawler reduced the number of search engines queried to six, and although it produced more accurate results, it still wasn't considered as accurate as running a query in a single engine.