Most people involved in organic search in one way or another will know that Google's crawling infrastructure is located in the US. Today I came across a somewhat odd counter-example. One tool on this site shows a visitor's IP address, and I sometimes use it as a quick check of a Googlebot IP, since the address the page was crawled from appears in the cached text snippet. Checking this today, I noticed an unusual IP in the snippet, one which geo-locates to China:
Or: What is it with all of the weird Google Books/YouTube results?
If you're an advanced SEO, or simply have an interest in search engines, then you're more than likely well acquainted with the various advanced search operators: getting site-specific results with site:, finding inbound links with link:, finding relationships between words with the tilde (~), or even querying particular parts of the index with operators like intitle:.
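As a quick illustration (example.com is a placeholder domain, and the annotations describe how these operators have commonly behaved, not a guarantee of current behaviour), operators can be used alone or combined in a single query:

```
site:example.com intitle:seo    restrict results to one site, require "seo" in the title
link:example.com/page.html      find pages linking to a specific URL
~cheap hotels                   also match synonyms of "cheap"
```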
All of the major search engines recently agreed on an element to address the problems webmasters experience with duplicate content. I feel their pain, and while I dislike proprietary features (since they don't address the underlying problem), it seems like this might be a useful tool in certain circumstances.
Certainly, there are times when duplicates are present, and either cannot or will not be fixed (for various reasons). If we can move some of the burden from developers and onto others tasked with SEO, then that can't be a bad thing.
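Assuming the element in question is the rel="canonical" link the engines announced, usage is a one-line addition to the head of each duplicate page, pointing at the preferred URL (example.com and the URLs below are placeholders for illustration):

```html
<!-- On a duplicate URL such as https://example.com/product?sessionid=123,
     tell the engines which version of the page is preferred -->
<link rel="canonical" href="https://example.com/product" />
```

The duplicates remain crawlable, but the engines are given a strong hint to consolidate indexing and ranking signals onto the canonical URL, which is exactly the sort of fix an SEO can apply without waiting on development work.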