What does it mean to have duplicate content on your website?
It means having identical or very similar text under the same domain. In practice, any content that shares a high percentage of its words with another page but is reached by the user through a different URL counts as a duplicate. Duplicate content is a very common mistake, and its presence compromises the ranking of your website on search engines.
You typically run this risk when you own a multilingual website or an e-commerce site. In the latter case, it happens because you have sheets for similar products that differ only in, for example, colour or size. To avoid creating duplicate content, and therefore being penalised by search engines, a few simple steps will keep you from making this mistake, even when the content you need to create is very similar to something you have already published.
In this article we will deal with the topic of duplicate content, describing the situations in which content can be considered duplicated and distinguishing between content duplicated internally on your site and content "copied" from others. We will also explain why this matters for SEO, describe the techniques webmasters use to correct this type of error, and show how it can be fixed with Kleecks.
When is content duplicate?
As we mentioned, content is duplicated when the same text is reachable through different URLs. Duplicate content can therefore involve:
1. The elements making up the snippet:
– Title tag
– Meta description
2. The contents of a web page:
– Product sheets
A distinction can be made between content copied from third-party websites and content duplicated internally on your own site.
In the first case, this can happen through:
– Syndication: contents made available through RSS feeds;
– Publication of contents identical to those of another website;
– Use of CDNs: CDNs replicate your content across network servers; if no suitable measures are taken, the search engine crawler may read that content as a copy of another page.
In the second case, however, the most common situations are:
– Presence of HTTP and HTTPS protocols;
– The insertion of parameters in URLs;
– Duplicate content in different pages of the same domain;
– Canonicalisation problems;
– Presence of content in printable versions;
– Coexistence of the www version of the website with the non-www version.
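To see how these variants multiply, here is a minimal Python sketch, purely illustrative and not part of any specific tool, that normalises URLs so that the protocol, the www prefix, trailing slashes, and tracking parameters no longer produce distinct addresses for the same page. The list of tracking parameters is an assumption for this example; adapt it to your own site.

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# Query parameters assumed to only track the visit, not change the content
# (a hypothetical list for this sketch; adjust it for your own site).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def normalize_url(url: str) -> str:
    """Map protocol, www, slash, and tracking-parameter variants to one form."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    path = parts.path.rstrip("/") or "/"
    return urlunparse(("https", host, path, "", query, ""))

# All four variants below collapse to the same canonical URL:
variants = [
    "http://www.example.com/shoes/",
    "https://example.com/shoes",
    "https://www.example.com/shoes?utm_source=newsletter",
    "http://example.com/shoes/?sessionid=42",
]
print({normalize_url(u) for u in variants})  # one entry, not four
```

A crawler that applies this kind of normalisation before comparing pages sees one resource where the raw URLs suggest four.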
Now let's see what it means to have duplicate content within the website and how search engines react to its presence.
SEO penalties for duplicate content
Duplicate content is also covered in the Google Search Console documentation, where the search engine explains how to avoid it on your website.
There are two main problems crawlers face when indexing this content:
1 – they cannot understand which content to exclude from the search engine results;
2 – they cannot determine which version to show for related search queries.
These two problems cause the site to lose ranking, cancelling out the results of your SEO strategy. Losing positions means less visibility and, consequently, less organic traffic. It is also disadvantageous because, to avoid including duplicate content in the SERP, Google independently chooses which version to show. Duplicate content also hurts your site's link equity: instead of all links pointing to your "original" content, some may end up pointing to the duplicate, diluting the authority of your page.
As for Google's penalties, the documentation makes clear that duplicates are not tolerated by the search engine. When duplicate content is detected, Google decides whether to place it among the supplemental results or remove the resource from the SERP altogether.
How to detect and correct them
Given the importance of identifying and correcting duplicate content, an SEO specialist can use various online tools to detect its presence on the website. Some tools let you recognise whether content is duplicated internally or has been copied, for example, by a competitor.
To find internal duplications, you can use tools such as:
• Screaming Frog (SEO spider tool).
• Google Analytics (using the Behavior – Site Content – Landing Pages report).
• Google Search Console webmaster tools.
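Tools like these typically compare the text of pages pairwise and flag those above a similarity threshold. As a rough sketch of the idea, and nothing more, here is a Python example that compares page bodies with the standard library's sequence matcher. The pages, URLs, and the 0.9 threshold are all hypothetical choices for this illustration:

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Rough textual similarity in [0, 1] between two page bodies."""
    return SequenceMatcher(None, a, b).ratio()

def find_duplicates(pages: dict[str, str], threshold: float = 0.9):
    """Return URL pairs whose text overlaps above the threshold.
    The 0.9 threshold is an arbitrary choice for this sketch."""
    return [(u1, u2) for (u1, t1), (u2, t2) in combinations(pages.items(), 2)
            if similarity(t1, t2) >= threshold]

# Two product sheets that differ only in the colour, plus a distinct page:
pages = {
    "/shoes-red":  "Leather running shoe, size 36-45, colour red. Free shipping.",
    "/shoes-blue": "Leather running shoe, size 36-45, colour blue. Free shipping.",
    "/contact":    "Contact our customer service by email or phone.",
}
print(find_duplicates(pages))  # → [('/shoes-red', '/shoes-blue')]
```

Real crawlers add URL normalisation, HTML stripping, and faster fingerprinting, but the core decision — "do these two pages say essentially the same thing?" — is the one shown here.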
Once the duplicate contents have been identified, it is possible to manage them in the following ways:
• Rel canonical: inserting this attribute in the head of an HTML page tells the crawler which URL matters to you, without having to delete the duplicate;
• Redirect 301: used to redirect duplicate content to the original content, so the search engine understands which page to place among its results;
• Meta tag robots: used with the values noindex, follow, it instructs the crawler to crawl your page and follow its links, but not to index it;
• Search Console reporting: allows you to tell Google how you want your website indexed.
How to manage these contents with Kleecks
We have just seen how to identify and correct duplicate content the traditional way, which, like any such activity, requires an SEO specialist dedicated to solving the problems encountered.
An alternative that saves you time to devote to your SEO strategy is Kleecks.
The Kleecks crawler, with automatic or manual settings, scans all the contents of your website and signals the presence of any duplicates. Once the errors are found, you can choose between two options:
1 – have Kleecks create the correct rel canonical and insert it directly into the head of the web page, automatically;
2 – directly edit the text of the content following the suggestions Kleecks gives you. In fact, you can change every part of the page right in Kleecks and publish it instantly, without having to reload it in the CMS you use. Whatever the CMS.
Duplicate content, we are not afraid of you!
We have seen that duplicate content is an error frequently made by those who manage a website. Its presence affects your site's positioning on search engines and your SEO activities, since search engines apply penalties for it.
Always remember that you can detect duplicates with various online tools designed for exactly this, and that correcting them means using techniques that let the search engine understand how to handle them: a 301 redirect of the page, the rel canonical attribute, the robots meta tag, or a report in Google Search Console.
Alternatively, you can rely on Kleecks, which simplifies this activity and, if you want, after reporting all the duplicate content present on your website, can correct it automatically. Magic, eh?