Search engine optimization

Search engine optimization (SEO), a subset of search engine marketing, is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results. SEO can target contextual search, local search, and industry-specific vertical search engines.

[Image: A typical Search Engine Results Page (SERP)]

SEO is marketing by understanding how search algorithms work and what human visitors might search for, to help match those visitors with sites offering what they are interested in finding. Some SEO efforts involve optimizing a site's coding, presentation, and structure without making very noticeable changes for human visitors, such as giving the site a clear hierarchical structure and avoiding or fixing problems that might keep search engine indexing programs from fully spidering the site. Other, more noticeable efforts involve placing unique content on pages in a form that search engines can easily index and extract, and that also appeals to human visitors.

The term SEO can also refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees of site owners who may perform SEO services in-house. Search engine optimizers often offer SEO as a stand-alone service or as a part of a larger marketing campaign. Because effective SEO can require making changes to the source code of a site, it is often very helpful when incorporated into the initial development and design of a site, leading to the use of the term "Search Engine Friendly" to describe designs, menus, content management systems and shopping carts that can be optimized easily and effectively.

History

Origin: Early search engines

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web.

Initially, all a webmaster needed to do was submit a page, or URI, to the various engines, which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[1] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page: the words it contains, where those words are located, any weight given to specific words, and all the links the page contains. The links are then placed into a scheduler for crawling at a later date.
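The division of labor described above can be sketched in a few lines of code. The following Python is purely illustrative of the spider/indexer/scheduler pipeline, not any engine's actual implementation; all names and data structures are invented for the example.

    # Illustrative toy crawler/indexer, assuming well-formed HTML and
    # reachable HTTP links; real engines are vastly more elaborate.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkAndTextExtractor(HTMLParser):
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.links = []            # outgoing links found on the page
            self.words = []            # words found on the page, in order

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(urljoin(self.base_url, value))

        def handle_data(self, data):
            self.words.extend(data.split())

    def crawl(seed_url, max_pages=10):
        frontier = [seed_url]          # the scheduler: pages queued for later
        index = {}                     # word -> list of (url, position) postings
        seen = set()
        while frontier and len(seen) < max_pages:
            url = frontier.pop(0)
            if url in seen or not url.startswith("http"):
                continue
            seen.add(url)
            html = urlopen(url).read().decode("utf-8", errors="replace")
            parser = LinkAndTextExtractor(url)
            parser.feed(html)
            # the indexer: record each word and where it appears on the page
            for position, word in enumerate(parser.words):
                index.setdefault(word.lower(), []).append((url, position))
            frontier.extend(parser.links)  # extracted links go back to the scheduler
        return index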

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both "white hat" and "black hat" SEO practitioners. Indeed, by 1996, email spam could be found on Usenet touting SEO services.[2] The earliest known use of the phrase "search engine optimization" was a spam message posted on Usenet on July 26, 1997.[3]

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content, but indexing pages based upon metadata proved less than reliable, because some webmasters abused meta tags by including irrelevant keywords to artificially increase page impressions for their websites and thereby increase their ad revenue. Cost per thousand impressions was at the time the common means of monetizing content websites. Inaccurate, incomplete, and inconsistent metadata in meta tags caused pages to rank for irrelevant searches and fail to rank for relevant ones.[4] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[5]
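As an illustration, the keyword meta tag sits in the HTML head of a page. The first tag below shows honest use and the second the kind of stuffing described above; all terms are invented:

    <!-- honest: terms that actually describe a widget store's page -->
    <meta name="keywords" content="widgets, blue widgets, widget store">

    <!-- abusive: irrelevant high-traffic terms added to inflate impressions -->
    <meta name="keywords" content="free, mp3, games, celebrities, cheap flights">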

By relying so much upon factors exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

More sophisticated ranking algorithms

An example of a more sophisticated search ranking algorithm, Google's PageRank rates page quality based upon the quantity and importance of incoming links.[6] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
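A minimal sketch of this random-surfer computation appears below, using power iteration and the commonly cited damping factor of 0.85. It illustrates the published concept only; Google's production system is undisclosed and far more involved.

    # Toy PageRank by power iteration. links maps each page to the pages
    # it links to; every link target must also appear as a key.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {page: 1.0 / n for page in pages}          # start uniform
        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / n for page in pages}
            for page, outlinks in links.items():
                if not outlinks:                          # dangling page: spread evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share         # each link passes on a share
            rank = new_rank
        return rank

    # Toy web: "a" is linked from both "b" and "c", so it ranks highest.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))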

The PageRank algorithm proved very effective, and Google began to be perceived as serving the most relevant search results. On the back of strong word of mouth from programmers, Google became a popular search engine. Off-page factors such as PageRank and hyperlink analysis were considered as well as on-page factors to enable Google to avoid the kind of manipulation seen in search engines focusing primarily upon on-page factors for their rankings.

Although PageRank was more difficult to game, webmasters had already developed link-building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaining PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Thus an online industry sprang up focused on selling links designed to improve PageRank and link popularity.

To reduce the impact of link schemes, search engines have developed a wider range of undisclosed off-site factors to use in their algorithms. A search engine may use hundreds of factors in ranking the listings on its SERPs; the factors themselves and the weight each carries can change continually, and algorithms can differ widely. The four leading contextual search engines, Google, Yahoo!, Microsoft, and Ask.com, do not disclose the algorithms they use to rank pages. Some SEOs have carried out controlled experiments to gauge the effects of different approaches to search optimization, and often share their findings through online forums and blogs. SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.

Optimizing for traffic quality

In addition to seeking better rankings, search engine optimization is also concerned with traffic quality. Traffic quality is measured by how often a visitor using a specific keyword phrase leads to a desired conversion action, such as making a purchase, viewing or downloading a certain page, requesting further information, signing up for a newsletter, or taking some other specific action.
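In other words, traffic quality for a given phrase is simply conversions divided by visits. A small worked example, with invented figures:

    # Hypothetical numbers illustrating per-phrase traffic quality.
    visits = {"buy blue widgets": 200, "widgets": 5000}
    conversions = {"buy blue widgets": 18, "widgets": 40}

    for phrase in visits:
        rate = conversions[phrase] / visits[phrase]
        print(f"{phrase!r}: {rate:.1%} conversion rate")
    # The narrower phrase converts at 9.0% versus 0.8%:
    # fewer visitors, but far higher traffic quality.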

By improving the quality of a page's search listings, more searchers may select that page, and those searchers may be more likely to convert. Examples of SEO tactics to improve traffic quality include writing attention-grabbing titles, adding accurate meta descriptions, and choosing a domain and URL that improve the site's branding.
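The listing elements mentioned above all come from the page itself. A sketch of what that looks like in practice, with all values invented:

    <head>
      <!-- the title usually becomes the clickable headline in the SERP -->
      <title>Blue Widgets - Acme Widget Co.</title>
      <!-- an accurate description that engines may show as the snippet -->
      <meta name="description"
            content="Hand-made blue widgets, shipped worldwide from Acme's workshop.">
    </head>
    <!-- served from a descriptive, brandable URL such as
         http://www.example.com/widgets/blue -->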

Relationship between SEO and search engines

By 1997 search engines recognized that some webmasters were making efforts to rank well in search results, and were even manipulating page rankings. In some early search engines, such as Infoseek, ranking first was as easy as grabbing the source code of the top-ranked page, placing it on your own website, and submitting the URL to instantly index and rank that page.[citation needed]

Due to the high value and targeting of search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference named AirWeb[7] was created to discuss bridging the gap and minimizing the sometimes damaging effects of aggressive web content providers.

Some more aggressive site owners and SEOs generate automated sites or employ techniques that eventually get domains banned from the search engines. Many search engine optimization companies that sell services employ long-term, low-risk strategies, and most SEO firms that do employ high-risk strategies use them on their own affiliate, lead-generation, or content sites rather than risking client websites.

Some SEO companies employ aggressive techniques that get their client websites banned from the search results. The Wall Street Journal profiled one such company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[8] Wired magazine reported that the same company sued a blogger for mentioning that it had been banned.[9] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[10]

Some search engines have also reached out to the SEO industry and are frequent sponsors and guests at SEO conferences and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. All of the main search engines provide information and guidelines to help with site optimization: Google's, Yahoo!'s, MSN's, and Ask.com's. Google has a Sitemaps program[11] to help webmasters learn whether Google is having any problems indexing their website; it also provides data on Google traffic to the website. Yahoo! has Site Explorer, which provides a way to submit URLs for free (as do MSN and Google), determine how many pages are in the Yahoo! index, and drill down on inlinks to deep pages. Yahoo! also has an Ambassador Program,[12] and Google has a program for qualifying Google Advertising Professionals.[13]

Getting into search engines' databases

As of 2007, the leading contextual search engines do not require submission; they discover new sites and pages automatically. Google and Yahoo! offer submission programs, such as Google Sitemaps, for which an XML feed can be created and submitted. These programs are designed to assist sites with pages that are not discoverable by automatically following links.[14]
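Such a feed uses the Sitemaps XML format. A minimal file listing a single URL looks like the following (the URL and values are invented):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2007-03-19</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>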

Search engine crawlers may look at a number of different factors when crawling a site, and many pages from a site may not be indexed by the search engines until they gain more PageRank, links, or traffic. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled, as may other importance metrics. Cho et al.[15] described standards for deciding which pages a crawler should visit and send to be included in a search engine's index.
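The core idea behind such crawl ordering can be sketched as a priority queue over the crawl frontier, here using a count of known in-links as the importance metric. This illustrates the general approach only, not the paper's exact method; the URLs and counts are invented:

    # Visit the most-linked-to URLs first instead of first-come-first-served.
    import heapq

    def prioritized_frontier(inlink_counts):
        """inlink_counts: url -> number of pages known to link to it."""
        heap = [(-count, url) for url, count in inlink_counts.items()]
        heapq.heapify(heap)                 # max-heap via negated counts
        while heap:
            _, url = heapq.heappop(heap)
            yield url

    for url in prioritized_frontier({"/a": 12, "/b": 3, "/deep/page": 1}):
        print(url)                          # prints /a, /b, /deep/page in that order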

Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or a cost per click. Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.

Preventing search indexing

To avoid undesirable search listings, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.
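For example, a robots.txt such as the following (the paths are invented) asks all compliant robots to stay out of two directories:

    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /cart/

An individual page can additionally be kept out of the index, and its links left unfollowed, with the robots meta tag in its HTML head:

    <meta name="robots" content="noindex, nofollow">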

Pages typically prevented from being crawled include login-specific pages such as shopping carts, and user-specific content such as search results from internal searches.

Types of SEO

SEO techniques are classified by some into two broad categories: techniques that search engines recommend as part of good design, and those techniques that search engines do not approve of and attempt to minimize the effect of, referred to as spamdexing. Most professional SEO consultants do not offer spamming and spamdexing techniques amongst the services that they provide to clients. Some industry commentators classify these methods, and the practitioners who utilize them, as either "white hat SEO", or "black hat SEO".[16] Many SEO consultants reject the black and white hat dichotomy as a convenient but unfortunate and misleading over-simplification that makes the industry look bad as a whole.

"White hat"

An SEO tactic, technique, or method is considered "white hat" if it conforms to the search engines' guidelines and/or involves no deception. Because the search engine guidelines[17][18][19][20][21] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.

White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to spiders, rather than trying to game the system. White hat SEO is in many ways similar to web development that promotes accessibility,[22] although the two are not identical.

Spamdexing / "Black hat"

"Black hat" SEO are methods to try to improve rankings that are disapproved of by the search engines and/or involve deception. This can range from text that is "hidden", either as text colored similar to the background or in an invisible or left of visible div, or by redirecting users from a page that is built for search engines to one that is more human friendly. A method that sends a user to a page that was different from the page the search engined ranked is Black hat as a rule. One well known example is Cloaking, the practice of serving one version of a page to search engine spiders/bots and another version to human visitors.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms or by a manual review of a site.

One infamous example was Google's February 2006 removal of both BMW Germany and Ricoh Germany for the use of deceptive practices.[23] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's index.

SEO and marketing

There is a sizable body of SEO practitioners who see search engines as just another visitor to a site, and try to make the site as accessible to those visitors as to any others who might come to its pages. They often see the white hat/black hat dichotomy mentioned above as a false dilemma. The focus of their work is not primarily to rank the highest for certain terms in search engines, but rather to help site owners fulfill the business objectives of their sites. Indeed, ranking well for a few terms among the many possibilities does not guarantee more sales. A successful Internet marketing campaign may drive organic search traffic to pages, but it may also involve the use of paid advertising on search engines and other pages, building high-quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing sites, setting up analytics programs to enable site owners to measure their successes, and making sites accessible and usable.

SEO, as a marketing strategy, can often generate a good return. However, because the search engines are not paid for the traffic they send from organic search, and because the algorithms used can and do change, there are no guarantees of success, either in the short or the long term. Due to this lack of certainty, SEO is often compared to traditional public relations (PR), with PPC advertising closer to traditional advertising. Increased visitors are analogous to increased foot traffic in retail advertising. Increased traffic may even be detrimental to success if the site is not prepared to handle it, or if visitors are generally dissatisfied with what they find. In either case, increased traffic does not guarantee increased sales or success.

Legal precedents

In 2002, SearchKing filed suit in an Oklahoma court against the search engine Google, claiming that Google's tactics to prevent spamdexing constituted an unfair business practice. This may be compared to lawsuits that email spammers have filed against spam-fighters, as in various cases against MAPS and other DNSBLs. In January 2003, the court granted summary judgment in Google's favor.[24]

In March 2006, KinderStart.com, LLC filed a first amended complaint against Google and also attempted to include potential members of the class of plaintiffs in a class action.[25] The plaintiff's web site was removed from Google's index prior to the lawsuit and the amount of traffic to the site plummeted. On March 16, 2007 the United States District Court dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, Gregory John Yu, requiring him to pay part of Google's legal expenses.[26][27]

References

  1. ^ Finding What People Want: Experiences with the WebCrawler The Second International WWW Conference Chicago, USA, October 17-20, 1994, written by Brian Pinkerton
  2. ^ Example Email from Google Groups
  3. ^ Usenet post mentioning SEO, July 26, 1997
  4. ^ Metacrap: Putting the torch to seven straw-men of the meta-utopia, written by Cory Doctorow, Version 1.3: 26 August 2001
  5. ^ What is a tall poppy among web pages?, Proceedings of the seventh conference on World Wide Web, Brisbane, Australia, 1998, written by Pringle, G., Allison, L., and Dowe, D.
  6. ^ Brin, Sergey and Page, Larry, The Anatomy of a Large-Scale Hypertextual Web Search Engine, Proceedings of the seventh international conference on World Wide Web 7, 1998, Pages: 107-117
  7. ^ AirWeb Adversarial Information Retrieval on the Web, annual conference and workshop for researchers and professionals
  8. ^ Startup Journal (Wall Street Journal), 'Optimize' Rankings At Your Own Risk, by David Kesmodel, The Wall Street Journal Online, September 9, 2005
  9. ^ Wired magazine, Legal Showdown in Search Fracas, September 8, 2005, written by Adam L. Penenberg
  10. ^ Cutts, Matt, Confirming a penalty, published 2006-02-02 at the Matt Cutts Blog
  11. ^ Google Web Master Central, formerly known as Google Sitemaps
  12. ^ Ambassador Program by Yahoo! Search Marketing
  13. ^ Google Advertising Professionals, a Program by Google AdWords, Google's Pay-Per-Click Advertising Service
  14. ^ What is a Sitemap file and why should I have one?, Google Webmaster Tools, Accessed March 19, 2007
  15. ^ Efficient crawling through URL ordering by Cho, J., Garcia-Molina, H. , 1998, published at "Proceedings of the seventh conference on World Wide Web", Brisbane, Australia
  16. ^ Goodman, Andrew, SearchEngineWatch, Search Engine Showdown: Black Hats vs. White Hats at SES
  17. ^ Ask.com Editorial Guidelines
  18. ^ Google's Guidelines on SEOs
  19. ^ Google's Guidelines on Site Design
  20. ^ MSN Search Guidelines for successful indexing
  21. ^ Yahoo! Search Content Quality Guidelines
  22. ^ Andy Hagans, A List Apart, High Accessibility Is Effective Search Engine Optimization
  23. ^ Ramping up on international webspam by Matt Cutts, published February 4, 2006, at Matt Cutts Blog
  24. ^ Google replies to SearchKing lawsuit, James Grimmelmann at LawMeme (research.yale.edu), January 9, 2006
  25. ^ KinderStart.com, LLC, et al. v. Google, Inc. (PDF), C 06-2057 RS, filed March 17, 2006 in the Northern District of California, San Jose Division.
  26. ^ Order Granting Motion to Dismiss..., KinderStart.com LLC v. Google, Inc., C 06-2057 JF (N.D. Cal. March 16, 2007)
  27. ^ Order Granting in Part and Denying in Part Motion for Sanctions..., KinderStart.com LLC v. Google, Inc., C 06-2057 JF (N.D. Cal. March 16, 2007)


See also

SEO Organizations
Notable SEOs
Search Engine Representatives
