As was suggested elsewhere, there's blacklisting if you're logged in. But that's ripe for abuse.
It would take more horsepower, but Google has plenty of that. I'm sure someone at Google has already thought of this, is thinking about implementing it, or has dismissed it as impossible or ineffective, but I wanted to throw it out there and see what HN thinks.
Determine the canonical source: in this case, a Stack Overflow post. Each site that scrapes content from that post would then increase the rank of the original Stack Overflow post, not its own.
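As a rough sketch of what I mean (using earliest-crawl-date as the canonicality heuristic is just an assumption on my part, and the page structure and `canonicalize` function here are made up for illustration):

```python
from datetime import datetime

def canonicalize(pages):
    """Among pages detected as duplicates of one another, treat the
    earliest-crawled URL as the canonical source, and credit every
    scraped copy's rank contribution to that canonical page instead.

    pages: list of dicts with 'url', 'first_crawled' (datetime), 'rank'.
    """
    canonical = min(pages, key=lambda p: p["first_crawled"])
    # Each scraper copy boosts the canonical source rather than itself.
    boosted_rank = sum(p["rank"] for p in pages)
    return canonical["url"], boosted_rank

pages = [
    {"url": "stackoverflow.com/q/123", "first_crawled": datetime(2011, 1, 5), "rank": 4.0},
    {"url": "scraper-a.example/123",   "first_crawled": datetime(2011, 2, 1), "rank": 1.5},
    {"url": "scraper-b.example/123",   "first_crawled": datetime(2011, 3, 9), "rank": 0.5},
]
url, rank = canonicalize(pages)
# The Stack Overflow post is picked as canonical and absorbs the scrapers' rank.
```

Obviously the hard part is the duplicate detection and the canonicality heuristic itself, not this bookkeeping.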
On one hand, it would boost the rank of original content that gets recycled and spread across the net: reblogged Tumblr posts, retweeted tweets.
On the other hand, it puts another tool in the hands of black hats.
Thoughts?
Well, that's the hard bit, isn't it? What are the consequences of getting it wrong? If Google bans my site from their index because it thinks I stole my content from a scraper, that's going to be hard to take.