Google Referencing Website

August 25, 2022


The term “referencing” (SEO) covers all the techniques used to improve the visibility of a website, i.e. to position a site’s pages prominently in the results pages for certain keywords.

The difficulty of the exercise lies not so much in promoting the site to search engines as in structuring the content and the internal and external networking to provide a good experience for the user and make the page progress in the results on previously chosen keywords.

Indeed, a majority of Internet users use search engines to find information and therefore query a search engine using keywords. It is therefore essential above all to be concerned about the content you want to offer in order to best meet the expectations of Internet users and identify the keywords they are likely to enter!


SERP

The term SERP (Search Engine Result Pages) refers to the search results as displayed after a query. It is essential to understand that, for the same search engine, results can vary from one user to another: according to the settings chosen by the user (language, number of results per page), the location (country, region, city) from which the query is made, the device used (mobile, tablet, desktop computer), sometimes the user’s previous queries (search history), and finally because search engines regularly run A/B tests of different displays. As such, it is not uncommon for a site to disappear from the SERPs on a given query for 24 to 48 hours and then reappear, which is why you should wait at least 72 hours before worrying.

Seeing your site in first position does not mean every user sees it there. To get a result as close as possible to what the majority of users see, it is advisable to disable the search history, or even to browse using your browser’s private mode.

The pages ranked in first position obviously get more visits than those in second position, and so on; the same holds for pages on the first results page compared to the second. Thus, if a page is in 11th position (i.e. on the second page), it is well worth optimizing it to move it onto the first page and gain significantly more unique visitors.


SEO only makes sense in relation to keywords, i.e. the queries visitors type into search engines. The first task is to determine the keywords on which you wish to position your site’s pages. The keywords you have in mind do not always match those actually used by visitors, who tend to type the shortest possible terms or make spelling mistakes. Tools such as Google Trends or the Google Ads Keyword Planner let you compare the search volume of one keyword against another and suggest related queries.

Finally, there are online services that reveal the keywords on which competing sites are positioned.

SEO Black hat / White hat

When it comes to natural referencing, two schools of thought are generally opposed:

  • White hat SEO, designating SEOs who scrupulously respect the search engines’ guidelines for webmasters, in the hope of obtaining rankings that are sustainable over time by playing by the rules;
  • Black hat SEO, designating SEOs who adopt techniques contrary to those guidelines, aiming for a quick win on pages with high monetization potential, but with a high risk of downgrading. Black hat SEOs play cat and mouse with the search engines, which regularly adapt their algorithms to identify and downgrade non-compliant sites;
  • Grey hat SEO, referring by extension to borderline techniques, i.e. ones whose use is controversial in SEO communities without being explicitly prohibited by the search engines’ guidelines.

Is my site referenced?

To find out if your site is indexed by a search engine, simply type a site: query into the search field:
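For example, assuming your domain is example.com, the query looks like this:

```text
site:example.com
```

The site: operator is supported by Google and Bing and restricts results to pages of the given domain.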

If your site is in the index, the engine will display a selection of pages it knows on the site, together with an approximate count of the pages it has indexed for it.

Referencing a site

Before talking about SEO, the first step is to make sure that the main search engines, and in particular Google (the most widely used), identify the site and crawl it regularly. In this respect, “referencing your site” simply means that the pages you want to rank are present in a search engine’s index. To achieve this, you can:

  • either obtain links from sites that are themselves regularly indexed by search engines, so that the engines can discover the existence of yours;
  • or declare your site (or the pages of your site) via the interface of the main search engines;
  • or declare a file called a sitemap, containing the list of URLs of the pages to be indexed, directly via the interface of the main search engines, to help them find new pages to index.
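Such a sitemap file is plain XML following the sitemaps.org protocol; a minimal sketch, using the hypothetical domain example.com:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2022-08-25</lastmod>
  </url>
  <url>
    <loc>https://example.com/page.html</loc>
  </url>
</urlset>
```

The lastmod element is optional but helps crawlers prioritize recently updated pages.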

Adding your site in search engines

For this purpose, search engines provide online forms to submit your website. Don’t hesitate to set up a web analytics solution such as Google Analytics or AT Internet, which will give you information on the origin of your visitors and the pages visited, along with much other useful data.

Google referencing

Google is the main search engine in the world, with about a 90% market share. The tool for managing the referencing of your site in Google is called Google Search Console (formerly Google Webmaster Tools). Simply sign in, add your site, then declare your sitemap:

Bing Referencing

Referencing in Bing also goes through its webmaster tools. Simply create an account and follow the procedure on the following page:

Yahoo Referencing

Yahoo now relies on Bing for its search results. The following page explains how to submit new URLs:

Qwant referencing

To submit your site on Qwant, just use the following page: reference -my-website-on-qwant/

Free referencing

Referencing is not necessarily paid: search engines index site content for free, and it is not possible to pay them to rank better in the natural results.

It is enough that other sites point to yours for the engines to visit it, and the better the quality of those links (i.e. the better the reputation of the linking sites), the better your site will rank on the terms corresponding to its content. However, the methods involved are numerous and sometimes complex, and a simple mistake can have significant repercussions, which is why many companies call on SEO professionals to advise or assist them.

Paid referencing

It is, however, possible to buy keywords on search engines: these are advertisements (called sponsored links) displayed around the so-called natural results. This is known as SEM (Search Engine Marketing), as opposed to SEO (Search Engine Optimization).

Since SEO is a vast field requiring a lot of experience, with many hidden pitfalls, it may be advisable to call on agencies specialized in SEO, which can advise and support you.

Specialized agencies can help you improve the positioning of your site in search results. They can sometimes offer to create or update the content of the site. However, be wary of offers such as “Referencing in more than 200 search engines” or “Referencing in more than 1000 directories” or “Guaranteed referencing” / “First place in a few days”. A natural referencing must remain natural, i.e. it must be progressive.

Beware of automatic referencing software. Some search engines simply reject submissions made this way (in most cases you must fill in a form and leave an email address). In extreme cases, such software, by massively submitting pages of your site to a large number of directories, can be counter-productive and lead some engines to ban your site.

Optimize referencing

The reference element for search engines is the web page, so when designing the website, think about structuring each page with the advice below in mind.

Indeed, most webmasters take care to have the home page of their site properly indexed but neglect the other pages, even though these usually contain the most interesting content. It is therefore imperative to choose a title, a URL and meta tags for each page of the site.

There are a few site-design techniques that can make the referencing of a site’s pages more effective:

  • an original and attractive content,
  • a well-chosen title,
  • a suitable URL,
  • a body of text readable by the engines,
  • META tags that precisely describe the content of the page,
  • well-thought-out connections,
  • ALT attributes to describe the content of images.

Web page content

Search engines seek above all to provide a quality service to their users by giving them the most relevant results for their search. So before even thinking about improving SEO, it is essential to focus on creating consistent and original content.

Original content does not mean content offered by no other site, which would be an impossible mission. It does mean treating a subject in a way that adds value: deepening certain points, organizing it in an original way, or linking together different pieces of information. Social networks are an excellent vehicle for promoting content and for gauging readers’ interest in it.

Moreover, still with a view to serving visitors the best content, search engines attach importance to up-to-date information. Updating the site’s pages can thus improve the site’s standing with the engine, or at least increase the frequency of the indexing robot’s visits.

Title of the page

The title is the element of choice for describing the page’s content in a few words; it is notably the first element the visitor reads on the search engine’s results page, so it is essential to give it particular attention. The title of a web page is declared in the page header, between the <title> and </title> tags.

The title should describe as precisely as possible, in 6 or 7 words maximum, the content of the web page and its recommended total length should ideally not exceed sixty characters. Finally, it should ideally be as unique as possible in the site so that the page is not considered duplicate content.

The title is all the more important as it is the information that will be displayed in search results, in the user’s favourites, in the title bar and browser tabs, and in the history.

Since European users read from left to right, it is advisable to place the most meaningful words of the page towards the left. In particular, make sure that each page of your site has a unique title, including paginated pages. For the latter, you can, for example, ensure that pages beyond page 1 include the page number in the title.
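As an illustration (hypothetical site name), a concise, unique title placed in the page header, with the page number added for paginated lists:

```html
<!-- Descriptive, under ~60 characters, most meaningful words first -->
<title>Sourdough bread recipes - Martin's Bakery</title>

<!-- Paginated lists: include the page number to keep titles unique -->
<title>Sourdough bread recipes - page 2 - Martin's Bakery</title>
```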

Page URL

Some search engines attach great importance to the keywords present in the URL, especially the keywords present in the domain name. It is therefore advisable to put a suitable file name, containing one or two keywords, for each of the files on the site rather than names such as page1.html, page2.html, etc.
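For example (hypothetical file names):

```text
Avoid:  https://example.com/page1.html
Prefer: https://example.com/sourdough-bread-recipe.html
```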

Body of the page

To make the most of the content of each page, it must be transparent (as opposed to opaque content such as Flash), i.e. it should contain as much text indexable by the engines as possible. The content must above all be quality content addressed to visitors, but it can be improved by ensuring that the relevant keywords are present.

Frames are strongly discouraged because they sometimes prevent the indexing of the site in good conditions.

META tags

META tags are non-displayed tags inserted at the beginning of an HTML document to describe it more finely. Given how widely metas have been misused on a large number of websites, engines rely less and less on this information when indexing pages. The meta keywords tag, in particular, has been abandoned by search engines.

META description

The meta description tag lets you attach a description to the page without displaying it to visitors (it can, for example, include terms in the plural or even deliberate misspellings). It is usually this description (or part of it) that is displayed in the SERP. It is advisable to use HTML encoding for accented characters and not to exceed about twenty words. Note that its presence is not mandatory: if it is omitted, the search engine will itself pick the description to display from the texts present on the page. Also note that it is crucial that each page of the site has a different description, at the risk of being treated as duplicate content.
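A sketch of such a tag, with an invented description, placed in the page header:

```html
<head>
  <meta name="description"
        content="Step-by-step sourdough bread recipe: ingredients, kneading, proofing and baking times.">
</head>
```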

META robots

The meta robots tag is particularly important because it describes the behavior of the robot vis-à-vis the page, including whether the page should be indexed or not and whether the robot is allowed to follow links.

By default the absence of robots tag indicates that the robot can index the page and follow the links it contains.

The tag can take the following values:

  • index, follow: this instruction is equivalent to having no robots tag, since it is the default behavior.
  • noindex, follow: the robot must not index the page (it may however come back regularly to check for new links).
  • index, nofollow: the robot must not follow the links on the page (it may however index the page).
  • noindex, nofollow: the robot must neither index the page nor follow its links. This results in a drastic decrease in the frequency of the robots’ visits to the page.

Here is an example of a robots tag:

<meta name="robots" content="noindex,nofollow"/>

Also note the existence of the following values, which can be cumulated with the previous values:

  • noarchive: the robot must not offer the cached version to users (especially for the Google cache).
  • noodp: the robot must not propose the description of DMOZ (Open Directory Project) by default.

It is possible to specifically target Google’s crawlers (Googlebot) by replacing the name robots by Googlebot (it is however advisable to use the standard tag to remain generic):

<meta name="googlebot" content="noindex,nofollow"/>

If a large number of pages should not be indexed by search engines, it is preferable to block them via robots.txt because in this case the crawlers do not waste time crawling these pages and can thus concentrate all their energy on the useful pages.
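A minimal robots.txt sketch, assuming hypothetical paths /search/ (internal search results) and /print/ (printable duplicates):

```text
User-agent: *
Disallow: /search/
Disallow: /print/
```

Unlike the meta robots tag, robots.txt prevents crawling altogether, so the robots’ crawl budget is spent on the useful pages.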

On forums, for example, unanswered questions can be excluded from the engines’ index while still letting them browse the pages and follow the links:

<meta name="robots" content="noindex,follow"/>

After a month, if the question still has no answer, the meta tag becomes the following one, so that the engine forgets it :

<meta name="robots" content="noindex,nofollow"/>

Internal links

In order to give maximum visibility to each of your pages, it is advisable to establish internal links between your pages to allow crawlers to browse through your entire tree structure. Thus it can be interesting to create a page presenting the architecture of your site and containing pointers to each of your pages.

This means by extension that the site navigation (main menu) must be designed to effectively give access to pages with high potential in terms of SEO.
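A sketch of such a site-map page, with hypothetical URLs:

```html
<!-- A plain HTML page linking to every section, so crawlers can reach the whole tree -->
<nav>
  <ul>
    <li><a href="/recipes/">Recipes</a></li>
    <li><a href="/recipes/sourdough.html">Sourdough bread</a></li>
    <li><a href="/contact.html">Contact</a></li>
  </ul>
</nav>
```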


Backlinks

The term backlinks refers to external links pointing to your website. Obtaining them matters because, on the one hand, they increase the traffic and reputation of your site and, on the other hand, search engines take into account the number and quality of inbound links to gauge a site’s relevance (this is the case of Google with its PageRank score).

Nofollow links

Links are followed by default by search engines (in the absence of a META robots nofollow or a robots.txt file preventing the indexing of the page). However, it is possible to tell search engines not to follow certain links by using the nofollow attribute.

This is especially recommended if :

  • the link is the subject of a commercial agreement (paid links);
  • the link is added by untrusted users in contributory areas of the site (comments, reviews, forums, etc.).
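For example, a paid or user-submitted link can be marked as follows (hypothetical URL):

```html
<a href="https://example.com/" rel="nofollow">partner site</a>
```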

ALT attributes of images

The images on the site are opaque to search engines, i.e. engines cannot index their content, so it is advisable to give each image an ALT attribute describing it. The ALT attribute is also essential for blind or visually impaired visitors who browse using Braille displays or screen readers.

Here is an example of an ALT attribute:

<img src="images/yourdomainename.gif" width="140" height="40" border="0" alt="logo of yourdomainename">

It is also advisable to fill in a title attribute to display a tooltip to the user describing the image.
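Combining both attributes might look like this (hypothetical file and texts):

```html
<img src="images/logo.gif" alt="Company logo" title="Company logo - back to the home page">
```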

Improving the crawl

SEO starts with the crawl of your site by the search engine robots: agents that browse the web, looking for new pages to index or known pages to update. An indexing robot acts as a kind of virtual visitor: it follows the links on your site in order to explore as many pages as possible. These robots can be identified in the server logs by the HTTP User-Agent header they send. Here are the user-agents of the main search engines:

Google’s crawler, for example, identifies itself as Googlebot, and Bing’s as Bingbot.
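For illustration, here are typical User-Agent strings for these crawlers (the exact version tokens may evolve over time):

```text
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
```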

For example, it is important to make sure that your pages are intelligently linked so that the robots can access as many pages as possible, as quickly as possible.

To improve the indexing of your site, there are several methods:


Robots.txt file

It is possible and desirable to block pages that are useless for referencing via a robots.txt file, so that the indexing robots can devote all their energy to the useful pages. Duplicate pages (for example, pages whose URL parameters are useless to robots) and pages of little interest to visitors coming from a search (internal site search results, etc.) should typically be blocked.

Page loading speed

It is important to improve the loading time of pages, for example by using caching mechanisms, because this improves the user experience and therefore visitor satisfaction, and because search engines increasingly take this type of signal into account when positioning pages.
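For instance, a static asset can be served with an HTTP caching header (illustrative value):

```text
Cache-Control: public, max-age=86400
```

This tells browsers and intermediate caches they may reuse the response for 24 hours instead of re-fetching it.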


Sitemap file

By creating a sitemap file, you give the robots access to all your pages, or to the most recently updated ones. It is an XML-format file containing the list of pages to be indexed. Note that the address of your sitemap file can be declared in the robots.txt file, on a line beginning with Sitemap: followed by the URL of your sitemap file.
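Such a robots.txt line could read (hypothetical URL):

```text
Sitemap: https://example.com/sitemap.xml
```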

Social networks

More and more search engines are taking into account social sharing signals in their algorithms. Google Panda takes this criterion into account in particular to determine whether a site is of quality or not. In other words, promoting social sharing limits the risk of impact by algorithms such as Panda.

Referencing a mobile site

The ideal is a mobile site built with responsive design: in this case, the page indexed for desktops and for mobile devices is the same, and only its display changes depending on the device.
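A responsive page typically declares a viewport in its header; the standard snippet is:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Without it, mobile browsers render the page at a desktop width and scale it down, defeating responsive stylesheets.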

If your mobile website lives on a separate domain or sub-domain, redirect mobile users to it automatically, making sure that each redirected page points to its equivalent on the mobile site. Also make sure that the Googlebot-Mobile crawler is treated like a mobile device.

Google has indicated that mobile-friendly pages get a ranking boost over non-mobile-friendly pages in mobile search results. This boost is applied page by page and is re-evaluated over time for each page, depending on whether or not it passes the test.

Duplicated content

As far as possible, aim to create unique page titles across the site, since search engines such as Google tend to ignore duplicate content, i.e. either many pages of the site sharing the same title, or pages whose main content already exists elsewhere on the site or on third-party sites.

Duplicated content is natural to some extent, if only because we quote, report statements by public figures or reference official texts. However, too high a proportion of duplicated content on a site can lead to an algorithmic penalty, so it is advisable to block such content using a robots.txt file or a robots META tag with the value noindex.

Canonical tag

When search engines detect duplicate content, they keep only one page, according to their own algorithms, which can sometimes lead to errors. Thus, it is advisable to include in pages with duplicate content a Canonical tag pointing to the page to keep. Here is the syntax:

<link rel="canonical" href="http://yoursite/finalpage"/>

In general, it is advisable to include in your pages a canonical tag carrying the URL of the current page. This limits the losses caused by useless parameters in the URL.


Penalties

There are generally two types of penalties:

  • Manual penalties, i.e. resulting from human action following a breach of the webmaster guidelines. Causes include unnatural (purchased) links, artificial content, misleading redirections, etc. Penalties for link buying are common and hit both the site that sold links and those that bought them. They can only be lifted after correcting the problem (which first requires identifying it) and filing a reconsideration request via the dedicated form. Re-examination of a website can take several weeks and does not necessarily lead to a full, or even partial, recovery of position;
  • Algorithmic penalties, i.e. resulting from no human action, generally linked to a set of factors that only the search engine knows. This is the case, for example, of Google Panda, the algorithm downgrading so-called poor-quality sites, or Google Penguin, which targets bad SEO practices. These penalties are only lifted once the “signals” causing the downgrade have been eliminated, at the next iteration of the algorithm.

Google Algorithm

Google’s algorithm is the set of instructions allowing Google to give a results page following a query.


PageRank

Originally the algorithm was based solely on the study of links between web pages, via a score assigned to each page and named PageRank (PR). The principle is simple: the more incoming links a page has, the higher its PageRank; and the more PageRank a page has, the more it passes on to its outgoing links. Generally speaking, links carry all the more weight as they are clicked by a large number of users.

Optimizations of the algorithm

Beyond PageRank, the algorithm takes into account a large number of additional signals, including (but not limited to):

  • the freshness of the information;
  • the mention of the author;
  • the time spent on the page and the degree of involvement;
  • traffic sources other than SEO;
  • and so on.

Google states that it makes about 500 changes to the algorithm per year, i.e. more than one modification per day. As a result, the SERPs can vary significantly as Google’s teams roll out changes.

Google Panda

Panda is the name of the filter deployed by Google to fight poor-quality sites. Its principle is to downgrade the positioning of sites whose content is deemed too thin. To avoid having your site penalized algorithmically by Panda, consult the published lists of good SEO practices for Google Panda.

Google Penguin

Google Penguin is a Google update that penalizes sites whose SEO optimization is deemed excessive. This is the case, for example, of sites with too many links coming from sites considered spammy. An abuse of links between pages covering disparate topics also appears to be a factor that can trigger a Penguin penalty. Google has therefore set up a form to disavow links that could potentially harm the referencing of a site.
