WHAT YOU WILL LEARN
In this lesson, we will discuss the history of updates to Google’s search ranking algorithm. We will see that the only lasting strategy for good search engine placement is to create quality sites with interesting content. The many tricks and devices touted by so many over the years have often backfired. The only secret to a good Web presence is to concentrate on quality content and a comfortable user experience.
SOME GENERAL HISTORY
Leland Stanford, Jr., the son of a California governor, died of typhoid fever two months before his 16th birthday. In 1891, his father and mother founded a private research university in his memory. Stanford University, built in what is now known as Silicon Valley near Palo Alto, California, was home (through the affiliated Stanford Research Institute) to one of the original four computers on the ARPANET (the network that became the Internet) and has become one of the most prestigious universities in the world.
In 1938, nine-year-old Milton Sirotta, asked by his uncle, the renowned mathematician Edward Kasner, for a word to describe the number one followed by a hundred zeros, coined the word “googol.”
At Stanford University in 1995, 21-year-old Sergey Brin received his assignment as orientation advisor for 22-year-old Larry Page. The two young men found little in common upon first meeting. Within a year, however, they were working together on a search engine project called “BackRub,” an effective way to search and access the voluminous information that had found its way onto the Internet. In 1997, Brin and Page renamed the search engine “Google.”
An investor was found; an employee was hired; and Google, Inc. began with three young men working from a garage to improve and market the Google search engine. Soon after, the company moved into an office. A very large Leonberger dog, named Yoshka, became the official company dog.
The first named update to the Google search ranking algorithm, called Boston, came in February 2003.
In August 2004, Google went public on the stock market. Yoshka remained the company’s top dog; but, being a dog, owned no stock. His humans owned quite a bit, though. The original price of a share of stock was $85. At the time of this writing in August 2013, a share is worth $878. Yoshka had a good life.
Leland, Milton, Sergey, Larry, and Yoshka are all part of the history of what is arguably the most important company in the world today. It controls, to a great extent, what is seen and what is not seen on the Internet. It exercises this control through its ranking algorithms: computer code that determines what shows up where when someone searches on Google for a particular word or phrase.
HISTORY OF GOOGLE RANKING ALGORITHMS
The next algorithm update after Boston came in April 2003 and was called Cassandra. Cassandra came down hard on massive linking from co-owned domains and on hidden text. In May 2003, the Dominic update made Google pickier about which backlinks it counted. In June 2003, Esmeralda made further changes that no one outside Google could really ascertain. In November 2003, an update called Florida began to penalize keyword stuffing. Austin, in January 2004, extended Florida to meta tags and made other changes to refine the identification of deceptive on-page tactics. Brandy, in February 2004, began to use semantics and synonyms to refine keyword analysis and incorporated the concept of neighborhoods to better analyze linking patterns. In January 2005, Google gave some control to Webmasters by recognizing the noindex and nofollow directives (illustrated just below).
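For reference, here is a minimal sketch of what those two signals look like in standard markup (the page and URL shown are hypothetical):

    <!-- In the <head> of a page: asks search engines not to index it -->
    <meta name="robots" content="noindex">

    <!-- On an individual link: asks search engines not to treat it as an endorsement -->
    <a href="http://example.com/untrusted" rel="nofollow">a link I do not vouch for</a>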
With the Allegra update in February 2005, the first suspicions arose that Google was penalizing certain linking strategies. In May 2005, Bourbon made some technical changes regarding duplicate content and the weight given to the presence or absence of “www” in a URL. In June 2005, Google gave Webmasters more control by accepting XML sitemap submissions through what became Webmaster Tools, supplementing traditional HTML sitemaps (a minimal sitemap appears after this paragraph). In the same month, Google began to personalize search results; no longer did every user see the same rankings. In October 2005, Google integrated local maps data into the search index, and the Jagger update created penalties for reciprocal links, link farms, and paid links. In December 2005, with the Big Daddy update, Google changed how certain technical matters (such as canonicalization and redirects) were treated.
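As an illustration, a minimal XML sitemap simply lists the URLs a site wants crawled (the URLs and dates here are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- One <url> entry per page the site wants crawled -->
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2013-08-01</lastmod>
      </url>
      <url>
        <loc>http://www.example.com/about.html</loc>
        <lastmod>2013-07-15</lastmod>
      </url>
    </urlset>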
A few other update names floated around, but the next major change came in August 2008, when Google began providing suggestions as you typed your search. In February 2009, the Vince update seemed to help major brands. Also in February 2009, Google provided a useful tool that allowed Web developers to clean up their sites’ organization without losing ranking: the rel-canonical tag, which lets a site tell Google which URL is the preferred version of a page, so that ranking credit earned by duplicate or outdated URLs is consolidated onto the preferred one (an example appears after this paragraph). In December 2009, Google provided faster and better integration of social media postings into its results. In 2010, Google Places provided more local advertising options and greater integration of Places pages with local results.
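To make the rel-canonical tag concrete, here is a minimal sketch (the URLs are hypothetical). Placed in the head of a duplicate page, it points search engines to the preferred address:

    <!-- In the <head> of http://example.com/product?sessionid=123 -->
    <!-- Declares http://example.com/product as the preferred (canonical) URL -->
    <link rel="canonical" href="http://example.com/product">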
In May 2010, the May Day update had a significant negative impact on long-tail traffic. (Long-tail keywords are search phrases containing several words. A long search phrase lets one be more specific about what is being sought or offered. While appropriate use of long keyword phrases is a good thing in the eyes of the search engines, manipulating them merely to capture traffic and redirect it to other offerings is considered bad, and was penalized by May Day.) In June 2010, Google implemented a new indexing system, called Caffeine, able to index an extremely large number of pages much faster. In August 2010, the Brand Update allowed pages from the same domain to appear more often in the results. In September 2010, with Google Instant, Google started displaying results even before a search was submitted, which affected the ultimate results of a search effort and favored established sites. In November 2010, Google placed emphasis on landing-page quality by providing a magnified preview of a result listing on mouse-over.
In December 2010, an e-commerce site in New York City got a great deal of attention by publicly bragging that bad reviews from customers (whom it admitted cheating and abusing) raised its ranks in the search engines. Google reacted immediately with an algorithm update to distinguish positive reviews from negative ones.
In January 2011, Google began, with a vengeance, to promote a good user experience for those using its search engine. A major strategy was to penalize black hat SEO tactics. It studied particular sites that were successfully manipulating search results and incorporated better ways to prevent their tactics from working.
Starting in February 2011, the updates became too frequent and numerous to name them all. For the most part, they started falling under the umbrella name of Panda. The main goal of all the Panda updates is to improve the user’s experience while searching, primarily by preventing low-quality, low-information sites from ranking high in the search results. If you have ever searched for local information and been led to a national site that has no local information for your area (which you discovered only after drilling down for several minutes, being exposed to numerous irrelevant ads in the process), you can understand why it was necessary for Google to devote so much effort to this task. That negative search experience, which I have unfortunately encountered several times, creates user dissatisfaction with the search engine. To keep its place at the top of the Internet, Google had to provide better results for its users.
Another major focus of the original release of Panda was to eliminate scrapers from the search results. Scrapers are sites that republish the content of other sites, and there are degrees of scraping. Some sites flat out steal copyrighted content and republish it as their own; that is simply illegal. Other sites republish content with a link to the original buried at the bottom. Still others do not republish content in its entirety, but give snippets and then link to the original. These snippet sites are not necessarily illegal or even bad, if they help to organize information by collecting snippets of, and links to, widespread content on the same subject. They are frustrating to searchers, though, if they merely create extra pages and extra ads that must be negotiated before arriving at the original content sought. Sorting the useful sites that aggregate information from the manipulative and frustrating sites (which ultimately serve only to hide information behind multiple advertisements) is a daunting task for Google.
Penguin, another update name, first appeared in April 2012 and carries on the Panda objectives, with a little less focus on user experience and more focus on discovering black hat practices at work and eliminating their effectiveness. Through specific feedback forms, Penguin also lets users report offending sites, or sites they believe have been wrongly penalized.
Google posted a blog article with the first Panda release listing the questions you should ask to determine whether your site is a quality site. Its conclusion, and its warning to Webmasters, was that you should concentrate entirely on making your site a quality site rather than on SEO techniques.
Over two years of comments on that article express the frustration of Webmasters in attempting to deal with that advice. In short, the Webmasters complain that low quality sites that use manipulative techniques still show up high in the rankings, while high quality sites go down in the rankings. They suggest to Google that the way to eliminate black hat SEO is to stop allowing it to be successful. Some Webmasters, no doubt, are frustrated because they have low quality sites. Others, though, with high quality sites, still do not get the ranking they deserve. Frustrated Webmasters from both camps accuse Google of rewarding its partners and those that use Google ads, while paying little real attention to recognizing high quality sites and improving their rankings. They accuse Google of just being a big business whose only interest is making money—an unfair monopoly.
I suspect the truth lies somewhere in between what Google is saying and what its detractors are saying, hopefully more toward what Google is saying. In any event, the takeaway from this long history of ranking algorithms is that most entrepreneurs and small businesses are really not positioned to have the knowledge and the skill to game the Google results in the long run. Some of the shortcuts and manipulations that have been posted throughout the years may work for a while, but none of them work in the long run. The advantage of any highly touted manipulative SEO strategy can be wiped out at any time, without notice, by any new Google update. Plus, you can get penalized at any time for having used it.
In the many lessons written over the many years of this course, I have, for the most part, steered away from detailed discussion of manipulative SEO tricks and tactics. At times, I worried when I saw so many other Internet marketing writers dealing with it in such detail; but from the present vantage point, I am glad that I did not follow suit. Instead, I have concentrated on the general principle that you should strive to create a site that is original, informative, and easy to use. That is what Google now insists is the only thing with which you should be concerned.
There are, of course, technical things that you need to know even when taking that approach, and those technical things change over time. As stated often in this course, the Internet changes constantly. Technology changes. Technical issues come and go. Business and society change. Historical events trigger a direction, but in the evolution of the journey, change is constant. In our next and final lesson for this course, we will identify some of the few lasting principles that have not changed: emergent principles that arise from nature and direct the course of inevitable change, while remaining constant and true to themselves. From Leland Stanford, Jr.’s untimely death to modern-day e-commerce, a picture is painted of unchanging principles directing a changing technology. Understanding these principles builds a foundation that can withstand the constant change in methods and specifics.
This lesson has been about one of those principles: Content is King.
WHAT’S COMING NEXT
In our next and final lesson for this course, we will identify some of the lasting principles that have not changed in Internet Marketing over the years this course has been written.
by George Little
Copyright 2013, Panhandle On-Line, Inc.
License granted to Carson Services, Inc. for distribution to SFI affiliates. No part of this work may be republished, redistributed, or sold without written permission of the author.