Ecommerce SEO: Black Hat Tactics That Are Not Black Hat

On the heels of our last post on SEO spam, which mentioned classic black hat SEO tricks like cloaking, doorway pages and link farms, today’s post looks at common ecommerce practices that resemble black hat techniques but are not against Google’s quality guidelines, provided you play by Google’s rules.


Cloaking

Cloaking refers to showing search engine bots different content than you show human visitors. Bots can be identified through IP detection and served keyword-stuffed text. Cloaking may also involve sneaky redirects, as in BMW’s case, a tactic that got the site banned from Google search.

But the concept of cloaking is not always taboo; consider the cases of IP delivery/geolocation and paywalled content.

IP delivery and geolocation tools either serve location-based content (by language or region) or redirect users to a localized site targeted to their country, based on IP address. Google’s position is that as long as you serve Googlebot the same content you would serve any other visitor from that IP address or region, IP delivery and geolocation are not considered cloaking.

In the case of digital content publishers who sell content piecemeal or by subscription, Google allows publishers to feed full-text content to Googlebot and restrict access for visitors, provided they allow the “first click free.” This means a visitor can view one page or one multi-page article on a paywalled site before being asked to log in/register or purchase the content/subscription. Allowing Google to index full text is a powerful way to attract new customers for paid content.

Paid Links

After the last post, there may be some confusion about paid links. Paid links are not spam, so long as they carry the “nofollow” attribute. This indicates that the link is advertising rather than an organic, editorial link from the content publisher. Google can exclude such links from its PageRank calculation, and the publisher does not risk a loss of its own PageRank for the crime of shilling content.
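In markup terms, the difference is a single attribute. A minimal example (the URL and anchor text are placeholders, not a real advertiser):

```html
<!-- Paid/sponsored link: rel="nofollow" tells Google not to count it toward PageRank -->
<a href="http://www.example.com/widgets" rel="nofollow">Acme Widgets</a>

<!-- Organic, editorial link: no attribute needed -->
<a href="http://www.example.com/widgets">Acme Widgets</a>
```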

What really prompted the search engines to crack down on paid links 3 years ago was the popularity of services like PayPerPost, which served as a marketplace for content publishers and advertisers. For about $10-$100 (depending on the blog’s authority), you could get your business or product reviewed with the anchor text of your choice. The blogger was required to disclose the sponsorship somewhere in the post, but the nofollow attribute was not required on links. Once search engines agreed to crack down on this activity, thousands of blogs saw a drop in precious PageRank by association with the service.

While it’s true that nofollowed links will not benefit SEO directly, the increased branding and traffic you receive from sponsored posts and other paid links can lead to natural links that do count (including social links) and, more importantly, more business – provided you target the right media.

Autogenerated Keyword Pages

Some black hatters use a script or software program to create thousands of keyword-targeted (and keyword stuffed) pages with very little “real” content, designed to rank highly rather than to provide value for visitors. Naturally, Google frowns upon this spammy technique.

But there are legitimate reasons to create a keyword-targeted page that can rank well in search engines. For example, the Bizrate shopping engine automatically creates custom search pages based on on-site searches from real customers. These pages are linked through “recent searches” or “related searches” tags throughout the site so they can be crawled and indexed, and can rank for long-tail product searches.

Some may consider auto-generated category pages to be “gray hat” – pushing the boundaries of what’s considered spam. Google recommends using robots.txt to “prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines” (emphasis mine). Personally, I believe these pages do add value. They direct searchers to the most relevant category page for what they’re looking for, and deserve to be in the search index. It only becomes spam when pages are generated for searches that return zero results.
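If you decide your own auto-generated pages don’t add value, Google’s recommendation is easy to implement. A sketch, assuming your internal search results live under a /search/ path (adjust to your site’s actual URL structure):

```txt
# robots.txt – keep low-value internal search results pages out of the crawl
User-agent: *
Disallow: /search/
```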

Duplicate Content Across Domains

“Mirror sites” are identical or near-identical copies of a web site created to flood the search engine results pages and/or interlink with each other. This practice is undoubtedly black hat spam. But what about when you operate legitimate online stores that are localized for different English-speaking countries and share most or all of the same products and descriptions? Are these considered spam? Do you need to write separate product descriptions for each country-targeted domain/subdomain/subfolder?

While it’s wise to take culture into account and adjust your category names, product names and descriptions accordingly (e.g. “crockpot” in the US and “slow cooker” in the rest of the world), you’re in the clear with localized websites, provided you geotarget them in Webmaster Tools. This also helps Google return the right TLD (top level domain) in localized search – i.e. your .com pages won’t rank in country-specific results ahead of your localized pages.
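Beyond geotargeting in Webmaster Tools, you can also annotate the localized pages themselves so Google knows which version targets which audience. A sketch using rel="alternate" hreflang markup (the domains and paths here are hypothetical):

```html
<!-- In the <head> of each version, list every language/region variant -->
<link rel="alternate" hreflang="en-us" href="http://www.example.com/crockpots" />
<link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/slow-cookers" />
```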

While you always want to steer clear of anything that will violate Google’s Quality Guidelines, don’t fear IP delivery, geolocation, paid links (with disclosure and nofollow attributes), search-based dynamic category creation or localized web sites – provided you do so within Google’s recommendations, and consider what is best for your customers.

15 Responses to “Ecommerce SEO: Black Hat Tactics That Are Not Black Hat”

  1. Much of the debate about cloaking is about semantics. For example, like Google I reserve the term “cloaking” to mean the use of IP delivery and similar technology to attempt to deceive search engine bots – therefore, using this definition, cloaking is always spam.

    As for paid links: I’m against Google’s philosophy on this one, but with them practically. Philosophically, there are plenty of good reasons to buy/sell links (even ignoring search engines), and if the link vendor does not know about Google’s desired use of the rel=nofollow attribute then how are they supposed to use it? Practically, it’s Google’s engine, and if Google want to specify what they do and don’t like, then that’s unlucky on innocent link vendors – Google seems to feel that such a rule is required to help them protect their algorithm and that the majority of link vendors are not innocent but trying to subvert that algorithm.

    Fully with you on auto-generated content.

    As for duplicate content, Google themselves have said that the duplicate content penalty is a myth. See:

  2. While I am against link farms and paying for poor quality links…I wonder how Google could detect “paid links” if it were done correctly.

    For example, what would happen if I operated several high-quality sites/blogs and sold advertising? An ad, for example, could be an article written about your industry, linking to your site with good anchor text. Isn’t that essentially what magazines are doing…selling promotional ad space? First, how could Google ever detect this as a “paid link”? (IMO they can’t – as long as the integrity of the site is kept – not always linking and farming out the PR.) And is there really anything wrong with doing it this way?

    Perhaps the definition of “paid” links is different than I’m thinking in this case.

    • Google definitely can’t police it all. What likely helps them detect is looking for “patterns” across the web. They have a history of every link – its URL, anchor text, first appearance on a page, etc. If a site gets a large number of anchor-text-rich links in a short period of time, there may be a flag to investigate, as this doesn’t usually happen organically. They may also look for spammy anchor text like “best widget XYZ” appearing in several places.

      With PayPerPost, it was fairly easy. PayPerPost’s marketplace listed involved blogs. Some sites advertise that you can buy links. Another possibility is, upon manual review, a Google employee could make a fake paid link request and see if the webmaster bites.

      • Linda – agreed. Google has its methods to try and police it and flag interesting activity. That’s a lot of time to really police it all…and time is money for Google.

        Wasn’t it interesting that it took a New York Times reporter to kick Google into action over JCPenney? Does anyone else find that interesting? Seems they would have caught this on their own if they were really proactive about it…. I think it shows how much they really police…granted they will bust you when they catch you. Lucky for JCPenney they can blame it on an SEO company and say they’re sorry, and probably get back to where they should have been fairly quickly.

        Just to clarify – I am in no way encouraging this sort of spammy link building. I even avoid working with companies that want to improve their SEO in this way. I find companies that have things worth talking about, and promote them with sound SEO principles. I just find it interesting is all.

  3. Vinay says:

    How do you think the canonical URL issue should be looked at – is it a case of duplicate content or black hat SEO?

    • I think the canonical URL certainly has its place when you have content duplicated on one domain (say, a product in 3 different categories, which may have the categories baked into each URL string – canonical could be a product page stripped of the category). Or, if you’ve syndicated articles around the web, have them link back to you so Google can determine yours is the “canonical” version.

      With black hat spam pages – I don’t think they’re concerned about canonicalization or user experience. They want all the spam pages to rank, they don’t want Google to not bother indexing them!
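      For the product-in-three-categories case described above, the tag itself is one line in the <head> of each duplicate URL. A hypothetical example (the site and URLs are made up):

      ```html
      <!-- On /kitchen/acme-cooker and /sale/acme-cooker alike -->
      <link rel="canonical" href="http://www.example.com/products/acme-cooker" />
      ```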

      • The canonical tag used incorrectly can definitely be black hat. I have seen companies try to canonicalize all their pages to their home page….basically giving all link juice to the home page. This is one thing I have seen TANK your rank on Google. Bad Bad Bad.

        The canonical tag has its place…just as Linda has described above…but use it improperly and you’ll get the wrath. I’ve also seen people put it in place of a 301…while this CAN work in theory, it’s best to just do a 301 and not chance it being implemented incorrectly.

  4. Google protects its algorithm against paid links. It looks at anchor text and when a link first appeared on a page to investigate whether it happened organically, and watches for anchor text like “best widget” appearing in several places. And if articles are syndicated, Google can determine the “canonical” version.

  5. frankj says:

    What matters most is the use of software to do the same as SEO done manually.

  6. Steph says:

    The “Duplicate Content Across Domains” issue is a big one for us, as we serve content for Canada and Quebec (the French-speaking province). I think I’ll do more reading, but the general consensus is that you are right – I just need to figure out how to use Google Webmaster Tools to tell them how we are geotargeting.

  7. Very interesting post, thanks for sharing. I didn’t know too much about black-hat and knew I didn’t want to get into it either.

© 2014 Get Elastic Ecommerce Blog. All rights reserved.