One of the first steps to understanding Search Engine Optimization is to understand the search engines themselves. If you’re familiar with what you’re up against, then you can have a better understanding of how to be successful.
Think of it in terms of football, baseball, or any other sport. If you go into a game without studying your opponent, then you won’t have a game plan. If you don’t have a game plan, you probably won’t win. However, if you study your opponent and build a game plan, you’re far more likely to win.
Now, I’m no programmer, so I won’t get into the “Y = X + Q divided by Z times X is the estimated total monthly search” blah blah blah; you get the picture. What I will try to do is give you a brief overview of the process by which search engines retrieve information, store it, and serve up the results.
Crawling the Web
Each search engine has its own automated web program called a “web crawler” or “web spider”. The main purpose of the web spider is to crawl web pages, read and collect the content, and follow the links (both internal and external).
Basically, they crawl the web from link to link to link (hence the name “web crawling” or “spidering”). Spiders serve many functions, but they are mainly used to create a copy of every visited page for later processing by the search engine, which indexes the downloaded pages to provide fast searches.
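That crawl-and-follow-links loop can be sketched in a few lines of Python. The mini “web” below (the URLs and page text are invented for illustration) stands in for real HTTP fetching; an actual spider would download each page and parse the links out of its HTML:

```python
from collections import deque

# A toy "web": page URL -> (page text, list of outgoing links).
# These pages and links are made up purely for illustration.
TOY_WEB = {
    "site.com/home":  ("welcome to our home page", ["site.com/about", "site.com/blog"]),
    "site.com/about": ("about our company",        ["site.com/home"]),
    "site.com/blog":  ("our latest blog posts",    ["site.com/home", "site.com/about"]),
}

def crawl(start_url):
    """Follow links breadth-first, collecting a copy of each page's content."""
    seen = {start_url}
    queue = deque([start_url])
    collected = {}
    while queue:
        url = queue.popleft()
        text, links = TOY_WEB[url]
        collected[url] = text          # store a copy for later indexing
        for link in links:             # follow links we haven't visited yet
            if link in TOY_WEB and link not in seen:
                seen.add(link)
                queue.append(link)
    return collected

pages = crawl("site.com/home")
```

Starting from the home page, the spider discovers and copies all three pages even though it was only given one URL, which is exactly how one submitted page can lead to a whole site getting crawled.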
Indexing the Web
Once a site has been crawled, the spider brings the collected webpages back to the search engine’s index. The index is a giant database that stores all of the documents collected by the spiders.
This index needs to be tightly managed so that requests which must search and sort BILLIONS of documents can be completed in fractions of a second.
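In miniature, that kind of index is an “inverted” mapping from each word to the set of documents containing it, so a lookup never has to scan every document. The pages below are invented examples, not real crawled content:

```python
# Build an inverted index: each word maps to the set of documents containing it.
# The documents here are made-up examples for illustration.
docs = {
    "page1": "cleveland browns football team",
    "page2": "jaguar cars for sale",
    "page3": "cleveland weather report",
}

index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

# Looking up a word is now a single dictionary read, no matter how many
# documents exist -- this is what makes sub-second search possible.
index["cleveland"]  # -> {"page1", "page3"}
```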
Processing Search Queries
Search engines literally receive millions of web search queries every single day. When a search engine receives a query, it retrieves from its index all documents that match the query, and then sorts them by relevance.
In addition to normal search queries, there are several other methods of search called “Operators” that can help a searcher narrow their search. These include:
- Phrase Search (“”)…[“Alexander Bell”]
- Site Search (site:)…[site:www.yoursite.com]
- Excluding Terms (-)…[ jaguar -cars -football -os ]
- Fill in the Blanks (*)…[ Obama voted * on the * bill ]
- Search exactly as is (+)…[+child care]
- The OR operator…[Cleveland Browns 1987 OR 1988]
- …and more operators
Once the search engine has determined which results are a match for the query, it must sort the pages. So, just how do they sort their results?
- Webpages are sorted according to a number of factors weighed by the search engine’s algorithm (the set of rules and calculations it uses for ranking).
- The search engine runs calculations on each of the results to determine which is most relevant to the given query.
- Search engines then sort the pages in the Search Engine Results Pages (SERPs) in order from most to least relevant, so that users can choose which result to select.
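The match-then-sort steps above can be sketched with a deliberately crude relevance signal: just count how often the query words appear in each page. Real engines combine hundreds of factors, and these documents are invented for illustration:

```python
# Score each matching page by a crude relevance signal -- raw counts of the
# query words. Real algorithms weigh many more factors than this.
docs = {
    "pageA": "search engines rank pages for search queries",
    "pageB": "engines and motors",
    "pageC": "how search engines work and why search results need search ranking",
}

def rank(query):
    terms = query.split()
    scores = {}
    for doc_id, text in docs.items():
        words = text.split()
        score = sum(words.count(t) for t in terms)
        if score:                       # drop pages that don't match at all
            scores[doc_id] = score
    # SERP order: most relevant first
    return sorted(scores, key=scores.get, reverse=True)

rank("search engines")  # -> ["pageC", "pageA", "pageB"]
```

Here pageC mentions “search” the most, so it tops the toy SERP, while pageB barely matches and lands last.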
And that, my friends, is how search engines work in layman’s terms. Hope this helps and have a good one!