You learn something new every day, and in the cyber world of constant change and newly coined terms at every corner, today's term was "blog scraping."
Blog scraping is the process of scanning through large numbers of blogs daily, looking for copyrighted content and reposting it on other blogs. It's generally done through automated software and is a clear form of copyright infringement. The really bad thing is that not only is it illegal, but the scraped posts often show up in RSS feeds on other websites and in subscriber emails.
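Since scraped posts are usually near-verbatim copies of the original, one rough way an automated detector might flag them is a simple text-similarity check. The sketch below is purely illustrative (the function name, threshold, and approach are my own assumptions, not how Google or any real system works):

```python
from difflib import SequenceMatcher

def looks_scraped(original: str, candidate: str, threshold: float = 0.9) -> bool:
    """Flag a candidate post whose text is nearly identical to the original.

    A real detector would be far more sophisticated (shingling, publish
    timestamps, link analysis); this just measures raw text similarity.
    """
    ratio = SequenceMatcher(None, original.lower(), candidate.lower()).ratio()
    return ratio >= threshold

post = "Blog scraping is the reposting of copyrighted content without permission."
copy = "Blog scraping is the reposting of copyrighted content without permission!"

print(looks_scraped(post, copy))  # near-verbatim copy is flagged
print(looks_scraped(post, "An unrelated post about gardening tips."))
```

Note the catch this toy example shares with the real thing: a legitimate post that quotes heavily from its source would score high too, which is exactly the false-positive worry discussed below.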
Then there's the issue of scraped content actually showing up in search results ahead of the legitimate, original posting of the information. This is where we get into Google territory. Today Matt Cutts announced that Google is going to attempt to create an algorithm aimed at keeping scraped content out of the top rankings (if not out of the rankings altogether).
Naturally, this should cause some degree of concern, because if your blog is wrongly pegged as scraped, you'll lose whatever high rankings you currently enjoy. Another possibility is that legitimately quoted text from other blogs (even when the source is clearly credited) might trigger the algorithm inadvertently.
As with every good intention, some bad outcomes can follow. (Think Panda.) Will this be another instance of "oops, we dinged the wrong website"? We shall have to wait and see. In the meantime, if you know of an instance of scraping, visit this link and leave your info.