One of the most talked about and disputed topics in search engine optimization is how search engines handle duplicate content. It's always exciting when new research from Google or Yahoo discusses the topic.

If you have a Web page, and someone scrapes your site and republishes your page with ads on it, how does a search engine find out? Does the search engine even care? If you write an article and syndicate it, with links back to your web page, will there be problems?

News wire services send out copies of the same story to hundreds of newspapers, many of which are online. The titles of those stories may change from paper to paper, and the stories may be edited: most commonly shortened rather than expanded, though some papers will add information to a story after doing their own investigative reporting. Is there a good reason to keep some of those near duplicates and index them?
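To get a feel for how near-duplicate detection can work, here is a minimal sketch of one classic technique, shingling: compare the sets of overlapping word n-grams from two documents using Jaccard similarity. This is a toy illustration for intuition only, not a claim about how Google or Yahoo actually detect duplicates.

```python
def shingles(text, n=3):
    """Return the set of n-word shingles (overlapping n-grams) from text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical example texts: a wire story, a scraped copy with extra
# words tacked on, and an unrelated article.
original = "The city council voted on Tuesday to approve the new budget plan"
scraped = "The city council voted on Tuesday to approve the new budget plan with ads"
unrelated = "A completely different story about local sports and weather results"

print(jaccard(shingles(original), shingles(scraped)))    # high: near-duplicate
print(jaccard(shingles(original), shingles(unrelated)))  # near zero: distinct
```

A scraped copy shares almost all of the original's shingles, so its similarity score stays high even after small edits, which is exactly why simple copying is easy for an engine to spot while a genuinely rewritten or expanded story scores much lower.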

Get the full story at Cre8tive Flow

Read also "The duplicate content penalty myth"