Facebook Facial Recognition Can Now Identify Faces With Almost Perfect Accuracy
March 28, 2014 | Tom Olago
Facebook says that after training its facial recognition software on “the largest facial dataset to-date,” the system is now nearly as good as a human being at detecting and recognizing other people’s faces.
The company’s dataset includes some “four million facial images belonging to more than four thousand identities,” and the facial recognition software “involves more than 120 million parameters.” The recognition program can now detect and identify human faces with 97.25 percent accuracy, which is said to be just a fraction shy of human ability.
Estimated to be about 25 percent more accurate than the current generation of facial recognition software, the system brings facial recognition capabilities to new ‘near perfect’ levels, as the Washington Free Beacon recently reported following Facebook’s announcement.
According to Culturemob.com, the system, developed in a project called DeepFace, was put through a set of standardized tests comparing its performance against that of humans. According to the team of developers, the system was taught to correctly identify facial elements using a collection of 4.4 million pre-tagged faces representing a group of 4,030 Facebook users. Facebook will first wait for feedback from the research community and then evaluate options for large-scale deployment.
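For readers curious what an accuracy figure such as 97.25 percent typically measures, the sketch below shows one common way a face-verification benchmark is scored: each test pair of photos is labeled “same person” or “different person,” a model converts each face into a numeric embedding, and accuracy is the fraction of pairs a similarity threshold classifies correctly. This is a minimal illustration only; the random embeddings, threshold, and dimensions here are made up and do not reflect Facebook’s actual DeepFace implementation.

```python
import numpy as np

def verification_accuracy(emb_a, emb_b, same_person, threshold=0.5):
    """Fraction of face pairs correctly judged 'same person' vs. 'different person'.

    emb_a, emb_b : (n, d) arrays of face embeddings for the two photos in each pair
    same_person  : (n,) boolean array of ground-truth labels
    threshold    : cosine-similarity cutoff above which a pair counts as a match
    """
    # Normalize embeddings so the row-wise dot product equals cosine similarity.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    similarity = np.sum(a * b, axis=1)

    predicted_same = similarity > threshold
    return np.mean(predicted_same == same_person)

# Toy run with random 128-dimensional "embeddings" standing in for a real model's output.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 128))
emb_b = rng.normal(size=(1000, 128))
labels = rng.random(1000) < 0.5
print(f"verification accuracy: {verification_accuracy(emb_a, emb_b, labels):.4f}")
```

With random embeddings the score hovers near chance; a trained model pushes it toward figures like the 97.25 percent reported here.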
The current facial recognition system, which was rolled out globally in 2011, was hotly contested by authorities in Germany and Ireland over the way it was applied without seeking users’ consent or even warning them. In its defense, Facebook officials said the system is designed to speed up photo tagging, saving users extra effort.
So, quite unsurprisingly, concerns over user privacy and potential unauthorized or unethical use have been brought to the fore. Facebook users seem essentially to be asking why photographic and other information that should be safe within the boundaries of their predefined Facebook privacy settings ends up being used elsewhere, or used differently, without their knowledge or consent.
And as much as criminals, terrorists and law breakers in general should not be allowed to enjoy immunity from arrest or prosecution by claiming privacy rights in social media, the majority of users who are not in any of these excluded categories should definitely be kept safe from privacy violations.
Interestingly, Google had already trodden the path Facebook is now going down, and then backed off. This was noted in a recent popularmechanics.com report titled 8 Weird Ways People Are Using Facial Recognition Software, which highlights some of the concerns and risks brought on by facial recognition software.
According to the report, Google said it killed a facial recognition search engine. Citing privacy concerns, it also backed off a plan to incorporate facial recognition software into Google Goggles, its image auto-recognition app. However, Google might be considering other adventures in facial recognition. It bought PittPatt, a facial recognition company developed by another group of researchers, in late July 2011, and hasn't said what it plans to do with it.
The report also highlights other currently available uses of facial recognition technology. They include:
- Using online dating websites such as findyourfacemate.com to match people based primarily on "facial compatibility."
- Other sites use facial recognition to match people with pets they could potentially adopt. A similar use involves identifying individual animals, such as chimpanzees, to assist researchers for study and statistical purposes.
- iPhone Dragnet: Police departments nationwide are using MORIS (MObile Recognition and Information System), a device that slides over an iPhone. It can take fingerprints and retinal scans and use facial recognition analysis of pictures to ID people.
Observers seem to agree that the main concern Facebook users should have is whether Facebook’s privacy controls are adequate to safeguard their photo-related information and sharing, and that the overall issue is how far Facebook can be trusted not to abuse their privacy.
The dangers were being foreseen several years back. In an article published by cio.com, titled ‘Facebook Facial Recognition: Why It's a Threat to Your Privacy’, Bill Snyder highlights three main issues: “Privacy Abuse Pays: The social networking giant has fooled us over and over again, blithely exposing users' private information to any advertiser who happens to get interested.
It's a tired drama. Facebook messes up, they get caught, the media freaks out, Facebook apologizes. Then the cycle starts all over, as it did last year when the Wall Street Journal learned that it's not just Facebook that's harvesting personal data but Facebook's platform developers as well.
That data, some of which made it possible to identify specific users, was being shared with advertisers and Internet tracking companies, whether those users had opted for privacy or not. Why would Facebook do such a thing? In a word, money.
There's a wonderfully symbiotic relationship between Facebook and the major app developers...Everybody has an incentive to just get along and keep on raking in the bucks.” The second issue concerns Facebook's privacy settings, where “...data is shared by default, meaning you've opted in unless you've explicitly opted out.
That's exactly what's happening with facial recognition. Facebook has automatically opted you in, which means your friends will see suggestions of photos in which to tag you, unless you change the setting.” The third issue relates to potential law enforcement abuses or excesses: “It is mighty easy for the feds and even local cops to get their hands on all sorts of records you might have thought were private or impossible to find.
Federal officials, for example, have been grabbing location data harvested from cell phone towers for some time without getting an OK from a judge.” All indications are that Facebook’s enhancement of facial recognition to near-perfection will serve its bottom line, and its potential for privacy violations, very well, but will push its users one step closer to an excessive loss of digital privacy.
It will be well worth finding out ways you can fight back to minimize the extent of such privacy violations – or otherwise leave yourself totally vulnerable.
Read more at http://www.prophecynewswatch.com/2014/March28/283.html#jcK0BKPOF3YQrtdp.99