A search engine works much like a library card catalog, indexing the contents of that labyrinthine storeroom of information. The main difference is that card catalogs are indexed manually by human minds, whereas search engines are administered mostly by web robots, or crawlers, that sift through the word content of every webpage published online. They then rank web pages according to:
- URL age
- Relevancy (read: frequency of keyword appearance)
- Popularity, or the number of human visitors who click on the link
- The number of backlinks, i.e., other pages that link back or refer to the webpage
In layman’s terms, web pages are ranked by their seniority in the web space, how strongly they assert the topics being queried, the human “trust” earned by convincing visitors of the page’s credibility, and referrals. Unfortunately, there’s a loophole in the system, and it has to do with the “relevancy” of the web pages. Once SEO “specialists” got hold of the search engine ranking principles, a whole lot of them decided it was time to stuff “hot” keywords as aggressively as possible into the text content of web pages.
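The factors above can be sketched as a toy scoring function. Everything here, the weights, the formula, and the sample numbers, is invented purely for illustration; real search engines combine hundreds of proprietary signals, not four hand-picked ones.

```python
# Toy illustration of the four ranking factors described above.
# Weights and formula are made up for demonstration only; real search
# engines use far more elaborate, proprietary ranking algorithms.

def keyword_relevancy(text, keyword):
    """Fraction of words in the page text matching the keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

def rank_score(age_years, text, keyword, visitors, backlinks):
    """Combine the four factors into a single (made-up) score."""
    return (
        1.0 * age_years                             # URL age / seniority
        + 100.0 * keyword_relevancy(text, keyword)  # relevancy
        + 0.01 * visitors                           # popularity (clicks)
        + 0.5 * backlinks                           # referring links
    )

page = "guide to organic gardening gardening tips for beginners"
print(rank_score(5, page, "gardening", 1200, 40))
```

Note how the relevancy term rewards raw keyword frequency: repeating the keyword a few more times raises the score, which is exactly the loophole keyword stuffers exploited.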
For a time, this worked, but it also eroded people’s trust in search engines. The trust rating of human visitors in search engines dropped by at least 20% from 2006 to 2008, indicating that they no longer felt everything published online was credible. Today, web publishers are trying to win this trust back by employing “natural” or “organic” SEO without compromising the quality of the articles. The focus has shifted from machine to human: the articles aren’t just readable and keyword-rich; most of the time, the keyword blends in so naturally that the reader hardly notices it.
You’ll see a lot of websites using textual content that approaches the quality of newspaper articles. For media professionals, this sounds promising. If it keeps up, maybe search engines will fulfill their original purpose: to organize the jungle that is the World Wide Web.