Early search
engines held an index of a few hundred thousand documents and pages, and
received perhaps one or two thousand queries each day. Today, a top search
engine indexes hundreds of millions of pages and answers tens of millions
of queries per day. In this article, we'll explain how these main
tasks are performed, and how search engines fit the pieces together
to let you find the information you need on the Web.
Search
engines perform several activities in order to deliver search results:
- Crawling - the process of fetching all the web pages linked to a website. This task is performed by software called a crawler or a spider (or Googlebot, as is the case with Google).
- Indexing - the process of creating an index for all the fetched web pages and keeping them in a giant database from which they can later be retrieved. Essentially, indexing means identifying the words and expressions that best describe the page and assigning the page to particular keywords.
- Processing - when a search request comes in, the search engine processes it, i.e. it compares the search string in the request with the indexed pages in the database.
- Calculating relevancy - since more than one page usually contains the search string, the search engine starts calculating the relevancy of each of the pages in its index to the search string.
- Retrieving results - the last step in a search engine's activity is retrieving the best-matched results. Essentially, this is nothing more than displaying them in the browser.
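The steps above can be sketched in a few lines of Python. This is a toy model, not how any real engine works: the "crawled" pages are hard-coded stand-ins (a real crawler would fetch them over HTTP and follow links), and relevancy here is simply the number of query words a page contains.

```python
from collections import defaultdict

# Step 1 (crawling, simulated): hard-coded stand-ins for fetched pages.
pages = {
    "page1": "search engines crawl the web and index pages",
    "page2": "an index maps keywords to the pages that contain them",
    "page3": "relevancy decides which pages match a query best",
}

# Step 2 (indexing): build an inverted index from word -> set of page ids.
index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

# Steps 3-5 (processing, relevancy, retrieval): score each page by how
# many query words it contains, then return page ids best-first.
def search(query):
    scores = defaultdict(int)
    for word in query.split():
        for page_id in index.get(word, set()):
            scores[page_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("index pages"))
```

Even this toy version shows why the index is built ahead of time: at query time the engine only touches the handful of pages that contain the query words, instead of scanning every document it has ever crawled.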
Search engines such as Google and Yahoo!
often update their relevancy algorithms dozens of times per month. When you
see changes in your rankings, it is because of an algorithmic change or
something else outside of your control.
Although the basic principle of operation of all search
engines is the same, the minor differences between their relevancy algorithms
lead to major differences in the relevancy of results.
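A small example can make this concrete. The two documents and two scoring rules below are invented for illustration (real engines combine many more signals): one engine ranks by the raw number of times a term occurs, the other divides that count by document length. The same two pages come back in opposite orders.

```python
# Two toy "pages"; B mentions the term more often but is much longer.
docs = {
    "A": "cheap flights",
    "B": "cheap flights and cheap hotels in many cities worldwide today",
}

term = "cheap"

# Engine 1: relevancy = raw number of occurrences of the term.
raw = {k: d.split().count(term) for k, d in docs.items()}

# Engine 2: relevancy = occurrences divided by document length,
# which penalizes long pages that merely repeat the term.
norm = {k: d.split().count(term) / len(d.split()) for k, d in docs.items()}

rank_raw = sorted(docs, key=raw.get, reverse=True)
rank_norm = sorted(docs, key=norm.get, reverse=True)

print(rank_raw)   # B ranks first: it contains the term twice
print(rank_norm)  # A ranks first: a higher share of its words match
```

A one-line change to the scoring formula flips the ranking, which is why small differences between engines' relevancy algorithms can produce very different result pages for the same query.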