What are Search Engines (How Search Engines Work: Process)

Chapter 3:

This is the third chapter of our SEO tutorial. Here, we will introduce you to today's search engines and why they matter so much. Starting with a brief history, this chapter explains what search engines actually are and how they work. We will walk you through the whole process that runs behind the scenes when you type a query into Google, Bing, or Yahoo.

This is what you will learn in this chapter:

1. Start of search engines

2. Brief intro of search engines

3. Major search engines of today

4. How does a search engine work


What are search engines

What we call search engines today are web search engines: programs or software systems that give users the ability to search the web, or the Internet.

They can be described as platforms made up of many different, interlinked programs that work together, in a process, to facilitate searching through the billions of pieces of information available on the Internet.

This information is available in the form of web pages containing various types of content, and in the form of files, such as PDFs and Word documents, made available through the Internet.

Start of search engines

Search engines started way back in the early 1990s. The earliest ones were created to work offline, as retrieval systems over collections of files rather than over the open web.

The birth of the Internet changed the way these search engines function and the way we see them. Before the Internet connected everything, there were a few networks created to collect and build databases of files and pages. These were also called search programs.

Several such search engines were launched during the 1990s, starting with Archie, the first search engine. Archie was a program that downloaded directory listings of files and created a searchable database of the names of files located on various FTP (File Transfer Protocol) sites.

What we know and call search engines today are web search engines. These began with the creation of Aliweb, JumpStation, and Infoseek.

Initially, many search engines were more like directories of web pages, often built without crawlers. Yahoo, which was launched in 1994, was also a directory back then. Only in 2002 did it become a search engine.

Top 10 search engines of today

1. Google.com

2. Bing.com

3. Yahoo.com

4. Baidu.com

5. Yandex.com

6. Ask.com

7. DuckDuckGo.com

8. Wolframalpha.com

9. Lycos.com

10. Excite.com

How does a search engine work

The basic function of a search engine is to find the most relevant information on the web for you. They do this by building programs and systems that perform the following steps to display the most relevant search results for your queries:

Crawling — read website content

Indexing — categorize discovered pages

Ranking — rank those pages

Results — display ranked pages by their positions

This is how search engines work.
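The four steps above can be sketched end to end as a toy, in-memory example. Everything here, the page contents, the scoring, the function names, is an invented illustration, not any real engine's implementation:

```python
# 1. Crawling: pretend we have already fetched these pages from the web.
PAGES = {
    "a.com": "seo tutorial for beginners",
    "b.com": "cooking recipes and tips",
    "c.com": "advanced seo ranking tutorial",
}

# 2. Indexing: map each word to the set of pages that contain it.
index = {}
for url, text in PAGES.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# 3. Ranking: score pages by how many query words each one contains.
def search(query):
    scores = {}
    for word in query.split():
        for url in index.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    # 4. Results: return pages ordered by score (ties broken by name).
    return sorted(scores, key=lambda u: (-scores[u], u))

print(search("seo tutorial"))  # prints ['a.com', 'c.com']
```

A real engine does each step at enormous scale and with far richer signals, but the data flow, crawl, index, rank, display, is the same.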

Briefly about their process

Each of these search engines uses spiders or bots to crawl every accessible website. This builds its database of websites. These sites are then indexed according to their content and the keywords they use for their relevant topics. This establishes their relevance for keywords and queries.

Then the engines consider each site's online influence, checking things like its backlinks, social presence, and content about the site elsewhere on the Internet, to gauge its popularity, acceptability, and trust. This forms the basis of the site's authority.

Then, for each query or keyword, these websites are ranked and displayed in the search results, taking into account both their relevance to the query and their authority.

Detailed process of Search Engines — enabling search, creating database and the system

Search engines basically work on three aspects: creating and updating the database of information; categorizing that information, which also determines what is displayed on the search pages; and serving the users or searchers (people looking for information) who search using queries.

These have been the traditional three pillars driving search engines. Some time later in their journey, search engines added the all-important fourth dimension: paid search, or advertisements.

It is this fourth dimension, paid search advertising, that many in the industry widely believe to be the driver behind most of the updates and changes to what results are displayed on the search pages and how.

Almost every search engine has its own search advertising platform; for Google it is AdWords.

As we have seen from the history, search engines have evolved from manual inclusion of web pages and files to a more automated and sophisticated process called crawling.

The main, indeed only, objective of search engines is to show relevant results for the queries users place on their platforms. The results depend on the engine's database of information and on the correct categorization of that information.

Now the whole process of search engines, which is automated, is covered through:

1. Crawling

2. Indexing

This leads to their well categorized database.

This database in turn enables the display of appropriate and relevant search results for queries.


Crawling

Google and other search engines use web crawlers to find and retrieve information from various web pages. In the search engine and SEO world, these crawlers are also called robots or spiders. What these crawlers do is constantly find new web pages, and they do this in two ways.

Although information discovery is automated, search engines still advise us to submit new pages as well. This simply helps them discover our pages faster.

So crawlers, first, crawl or visit the pages that we submit, and second, discover new pages when those pages are linked from pages they already visit. They follow all the links from a page to other pages.

This way, crawlers discover new websites, new pages, and all the internal pages of our own website.

This is why submitting your website to Google and other search engines, and internal linking, are important. We will discuss this in our chapter on how to make your website searchable.

What crawling does is create the database of websites and web pages for search.
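The submit-then-follow-links behavior described above can be sketched as a breadth-first traversal. In a real crawler, the link lookup would be an HTTP fetch plus HTML link extraction; here the link graph is a hard-coded assumption so the example runs anywhere:

```python
from collections import deque

# Invented link graph standing in for real web pages and their links.
LINKS = {
    "example.com/": ["example.com/about", "example.com/blog"],
    "example.com/blog": ["example.com/blog/post-1", "example.com/"],
    "example.com/about": [],
    "example.com/blog/post-1": [],
}

def crawl(submitted_pages):
    """Visit submitted pages, then follow links to discover new ones."""
    seen, queue = set(), deque(submitted_pages)
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)                    # page enters the database
        for link in LINKS.get(url, []):  # track every link on the page
            if link not in seen:
                queue.append(link)
    return seen

print(sorted(crawl(["example.com/"])))   # all four pages are discovered
```

Note that submitting only the homepage is enough here because every other page is reachable through links, which is exactly why internal linking matters.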

The next step is Indexing.


Indexing

Indexing gives real meaning and worth to the database, which would otherwise be useless for search.

As the purpose is search, which could be about anything, the aim is to find pages that are appropriate and relevant for the millions of types of searches performed every day.

Hence, the crawler delivers the content of every page. This content, with all its details, is categorized in the large search index as relevant to its respective subjects and topics, based on the theme, meaning, headings, subheadings, and so on.

It is from here that results are extracted, as a list of pages, whenever a query is submitted to the search engine.
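The core data structure behind a search index is an inverted index: instead of storing "page contains these words", it stores "word appears on these pages". A minimal sketch, with invented page contents, assuming a simple whitespace tokenizer:

```python
# Invented crawled pages; real content would come from the crawler.
pages = {
    "site-a/page1": "what are search engines",
    "site-b/guide": "how search engines work",
}

inverted_index = {}
for url, content in pages.items():
    for position, term in enumerate(content.split()):
        # Map each term to the pages (and word positions) where it occurs.
        inverted_index.setdefault(term, []).append((url, position))

# Looking up a term returns its matching pages without scanning content.
print(inverted_index["search"])  # [('site-a/page1', 2), ('site-b/guide', 1)]
```

Storing positions alongside pages is what lets an engine answer phrase queries and judge how prominently a term appears, rather than just whether it appears.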

Provide results - Evaluation and matching

We mentioned above that the goal of search engines is to provide the most relevant and best results for search queries. If they do not, people may move to other search engines.

In order to do this, every search engine needs to do two things: match each query against its database of pages, content, and information; and then display the selected pages in some order, based on query-level relevance, the authenticity of the website, the depth of content coverage, and so on.

This is where the positioning of web pages and their ranking in search results comes in. This is where every page aims to provide the most relevant and best content by various parameters, and to signal that to the search engines through the use of keywords.

All this is done through SEO.

This gives rise to the ranking of web pages and websites.
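The two-part ordering described above, query relevance combined with site authority, can be sketched as a weighted score. The candidate pages, their scores, and the 70/30 weighting are all invented for illustration; real engines combine hundreds of signals:

```python
candidates = [
    # (url, relevance to the query, site authority) -- made-up values
    ("blog.example/seo-basics", 0.9, 0.4),
    ("bigsite.example/seo",     0.7, 0.9),
    ("spam.example/keywords",   0.8, 0.1),
]

def final_score(relevance, authority, w_rel=0.7, w_auth=0.3):
    """Blend query relevance with site authority (hypothetical weights)."""
    return w_rel * relevance + w_auth * authority

ranked = sorted(candidates,
                key=lambda c: final_score(c[1], c[2]),
                reverse=True)
for pos, (url, rel, auth) in enumerate(ranked, start=1):
    print(pos, url)
```

Notice that the most relevant page does not automatically rank first: the authoritative site edges it out, while the low-authority page falls to the bottom despite decent relevance. That interplay is what SEO works on from both sides.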