The major search engines know there are many valid reasons for creating duplicate content. Affiliate sales channels need to be fed content from a centralized source. Syndication services must earn their keep by distributing similar content to dissimilar sites. Content management systems render page after page of mildly realigned content as a matter of cataloging efficiency. RSS feeds distribute another rendition of the same-old content to new venues. The list goes on.

Search engines also understand there are many not-so-valid reasons for duplicate content, like hallway pages, doorway pages, and multiple-domain microsites. Then there are the unscrupulous scrapers that snag someone else's content, duplicate it repeatedly, and interlink it across a wasteland of Web sites. These made-for-AdSense sites are just the sort of thing Google likes to keep out of its indices via penalties, filters, and dampening.

Google in particular understands the inherent differences between valid and invalid content duplication. Because there are so many valid reasons for creating duplicate content, that duplication isn't readily penalized, unless one considers Google's supplemental results a penalty.

Get the full story at ClickZ