Google's view of duplicate content
Google's view on duplicate content aims to discourage 'cookie-cutter' approaches that create multiple pages with very similar content, and 'screen scraping', where websites copy information directly from other sites (a practice common among affiliate marketers). The bottom line is that Google wants to index original content, although it recognises that this is not always possible for some websites, particularly those generating content dynamically.
Google's webmaster guidelines state that 'duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results'. Google's documentation provides further information about this issue and ways to avoid it.
Like most search engines, Google aims to present a degree of variety within the search results, and it therefore filters out duplicate documents so that users experience less redundancy. It does this in a number of ways: grouping duplicate URLs into one cluster, selecting what it judges to be the 'best' URL to represent that cluster in search results, and then consolidating the properties of the URLs in the cluster, such as link popularity, onto the representative URL.
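The clustering and consolidation described above can be illustrated with a minimal sketch. This is not Google's actual algorithm; the URLs, link counts and the 'most inbound links wins' rule are all illustrative assumptions, and real systems use far more sophisticated signals.

```python
from collections import defaultdict
import hashlib

def cluster_and_canonicalise(pages):
    """Group URLs with identical content, pick one representative per
    cluster, and consolidate link counts onto it.

    `pages` maps URL -> (content, inbound_link_count).
    Illustrative sketch only, not Google's actual process.
    """
    clusters = defaultdict(list)
    for url, (content, links) in pages.items():
        # Fingerprint the content; identical pages share a hash.
        digest = hashlib.sha256(content.encode()).hexdigest()
        clusters[digest].append((url, links))

    results = {}
    for members in clusters.values():
        # Treat the most-linked URL as the 'best' representative...
        canonical = max(members, key=lambda m: m[1])[0]
        # ...and consolidate link popularity from the whole cluster.
        results[canonical] = sum(links for _, links in members)
    return results

# Hypothetical pages: two URLs serve the same content.
pages = {
    "https://example.com/widgets": ("Widget guide", 40),
    "https://example.com/widgets?ref=aff": ("Widget guide", 5),
    "https://example.com/about": ("About us", 12),
}
print(cluster_and_canonicalise(pages))
# {'https://example.com/widgets': 45, 'https://example.com/about': 12}
```

The duplicated widget pages collapse into one cluster: only the most-linked URL appears in the results, carrying the combined link popularity of both.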
In summary, this post argues that Google is unlikely to apply any form of penalty unless it concludes that a website is duplicating content deliberately; instead, one version of the duplicated content is simply selected as the 'best' option to display within the ranking results.