This is Part 2 in a series of 3 articles about the Past, Recent Past and Present of content writing. Part 1 can be found at Writing For Readers: The Beginning
As the number of websites on the World Wide Web grew, finding a particular page or site became increasingly difficult. The first website, hosted at CERN, and other early web pages maintained directory listings of pages, usually under a heading such as “What’s New” or as an archive, but these listings were not searchable. The online tools that followed gradually incorporated crawling, indexing and searching the contents of specific web pages, ushering in the rise of search engines. Among the most popular early search engines of the 1990s were Lycos, Magellan, Excite, Infoseek and Yahoo.
Crawling and indexing web pages
By the mid-90s, webmasters and site owners had started optimizing their web pages for search engines such as Yahoo (and, later, Google) by submitting the URL, or website address, to the search engine for indexing. Upon submission, search engines used crawling programs called “spiders” or “bots” to visit the web page and extract links to other pages, along with other data for indexing. The crawler then downloaded the web page and saved it on the search engine’s server, where an indexing program extracted additional information about the page, particularly the words and links appearing on it.
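The crawl-and-index loop described above is compact enough to sketch in code. The snippet below is illustrative only: the seed URL, the page limit and the inverted-index structure are assumptions made for this example, not how any particular engine of the era actually worked.

```python
# Minimal sketch of a crawler ("spider") plus indexer.
# All names and limits here are illustrative assumptions.
import re
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects outgoing links and visible words from one page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                        # the "spidering" part: gather hrefs
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.words.extend(re.findall(r"[a-z0-9]+", data.lower()))

def crawl(seed, max_pages=10):
    """Breadth-first crawl that builds an inverted index:
    word -> set of URLs on which the word appears."""
    index = defaultdict(set)
    frontier, seen, visited = [seed], {seed}, 0
    while frontier and visited < max_pages:
        url = frontier.pop(0)
        visited += 1
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except (OSError, ValueError):
            continue  # skip unreachable pages and non-http links
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in parser.words:             # the "indexing" part
            index[word].add(url)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return index

# Hypothetical usage: index = crawl("https://example.com")
# index["bread"] would then hold every crawled URL containing "bread".
```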
The rise of Google
Google entered the search engine scene by offering online surfers a quick and easy way to search for any topic. All users had to do was type a few terms or phrases, and within seconds Google would present a list of websites matching those phrases, also known as “keywords”. Google gained popularity among online users thanks to its PageRank algorithm, which ranked pages according to the number and importance of the pages that linked to them. The algorithm was based on the idea that high-quality pages attract more links than others. Google’s share of the search market soared, and it became a household name and even a verb.
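The intuition behind PageRank can be shown in a few lines. Below is a rough power-iteration sketch over a hypothetical four-page link graph; the damping factor of 0.85 comes from the original PageRank paper, but everything else here is invented for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each round, a page shares its current rank
    equally among the pages it links to, so pages linked from many
    important pages accumulate higher scores."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

links = {            # hypothetical four-page web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
print(pagerank(links))
# "C" scores highest: three of the four pages link to it,
# including "A", which itself receives rank from "C".
```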
Value of search rankings
Website owners began to understand that it was not enough simply to have a website. Their web pages also had to be visible to search engines and obtain top search rankings. This led to the practice of search engine optimization (SEO), which involved white hat, black hat and grey hat techniques designed primarily to attract the search engine spiders.
Black hat SEO
The early search algorithms relied heavily on on-page factors such as the metadata and keyword density found on a web page. Because webmasters had full control over the on-page elements of a website, it was easy to abuse the search system by manipulating the page’s HTML code. Here are some examples of black hat SEO practices:
- Doorway pages
Google defines doorway pages as “large sets of poor-quality pages where each page is optimized for a specific keyword or phrase. In many cases, doorway pages are written to rank for a particular phrase and then funnel users to a single destination.” (https://support.google.com/webmasters/answer/2721311?hl=en) Examples of doorway pages include:
- Multiple domain names targeting specific geographical areas or cities that redirect users to a single web page
- Cookie-cutter web pages created for the sole purpose of affiliate linking
- Duplicate or similar content on multiple pages of a website, aimed at improving the site’s search rankings for specific keywords or geographical names
- Keyword stuffing
In this technique, a keyword is repeated throughout the content so often that the text frequently makes no sense to the audience. Just like doorway pages, web pages that are overstuffed with keywords tend to frustrate their readers while managing to outsmart the search spiders, albeit temporarily; a rough sketch of the density signal being exploited follows this list.
- Duplicate content
Instead of creating their own content, some websites published copies of quality content found on other websites and passed it off as their own. This caused a great deal of duplicate content to appear in search results.
- Hidden text
Some webmasters succeeded in generating site traffic by incorporating keywords that were visible to the search engine crawlers but not to the audience. They hid text by matching its colour to the page background or by layering other elements over it.
- Title stacking
Similar to keyword stuffing, this practice involves giving a single web page several title tags in order to gain more traffic.
- Automating content creation
As a means of quickly filling large quantities of web pages, some site owners and webmasters used programs that could spin or rewrite text content, typically by automatically substituting synonyms. This resulted in a lot of poorly written content appearing in search rankings, devaluing search results. (Google Support)
- Link farming
Some websites also resorted to link schemes aimed at acquiring numerous links from other websites, often ones with low-quality, irrelevant or spam-infested content. Such link farming schemes included buying and selling links and excessive link exchanges. (Google Support)
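As promised above, here is a rough sketch of the keyword-density signal that early on-page ranking leaned on, and why stuffing gamed it so easily. The sample sentences, the keyword and the percentages are invented for illustration; real engines never published their thresholds.

```python
# Rough sketch of a keyword-density check. The example texts and the
# implied cut-off are invented; no real engine's logic is shown here.
import re

def keyword_density(text, keyword):
    """Fraction of the words in `text` that are `keyword`."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

honest = "Our bakery sells fresh bread, cakes and pastries every morning."
stuffed = ("Cheap bread cheap bread! Buy cheap bread here, "
           "the best cheap bread, cheap bread online.")

print(f"{keyword_density(honest, 'bread'):.0%}")   # 10%
print(f"{keyword_density(stuffed, 'bread'):.0%}")  # 33%
```

A ranker that simply rewarded the higher percentage would rank the stuffed page first, which is exactly the abuse later algorithm updates were designed to catch.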
Search engines such as Google dislike black hat SEO techniques because they can degrade the quality of search results and disappoint online users. While Google keeps its search algorithm confidential, the company has published a set of guidelines that can help webmasters and site owners improve their search rankings organically, or naturally, without the deceptive and manipulative tricks of black hat SEO. (Google Support)
So the bad old black hat days are behind us (for now). For the ‘Now’ of content marketing best practice, see Part 3.