What Happens When Our Faces Are Tracked Everywhere We Go? (New York Times)

A long read on Clearview AI and similar companies that push (and cross) the boundaries of privacy and ethics:

‘The more society-changing aspect of facial recognition may be how private companies deploy it …. imagine what you would do with a face-identifying app on your phone: a ‘Shazam for people’. You would never forget someone’s name at a party again. If that pseudonymous troll on Twitter who said something nasty to you had ever tweeted a selfie, you could find out who he or she was. You could take a photo of the strangers at your poker table and know if they’re pros or not. It might just be your new favorite app.’

Sounds useful, or just scary?

Kashmir Hill via nytimes.com, with great artwork by Zach Lieberman
.


A.I.’s Most Important Dataset Gets a Privacy Overhaul, a Decade Too Late (Medium)

ImageNet is arguably the most important dataset in recent A.I. history. It’s a collection of millions of images that were compiled in 2009 to test a simple idea: If a computer vision algorithm had more examples to learn from, would it be more accurate?
It is no exaggeration to say that the results from the ImageNet competition gave rise to the A.I. boom we’re in today.
Now, more than a decade after its debut, ImageNet’s authors have made a tweak to the dataset that changed the world: They’ve blurred all the faces.
“Today, computer vision is in real-world systems impacting people’s Internet experience and daily lives. An emerging problem now is how to make sure computer vision is fair and preserves people’s privacy.”
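For a rough sense of what blurring every face in a dataset involves, here is a minimal Python sketch using OpenCV's bundled Haar-cascade face detector and a Gaussian blur. It is purely illustrative: the detector, parameters, and blur settings are my assumptions, not the pipeline the ImageNet authors actually used.

```python
# Minimal sketch of dataset-wide face blurring (illustrative only; not the
# ImageNet team's actual pipeline). Assumes opencv-python is installed.
import cv2

def blur_faces(image_path: str, output_path: str) -> int:
    """Detect faces with OpenCV's stock Haar cascade and Gaussian-blur them.

    Returns the number of faces blurred.
    """
    # Load the frontal-face Haar cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Blur each detected face region in place; the kernel size is a
        # guess chosen to make faces unrecognizable.
        roi = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

    cv2.imwrite(output_path, image)
    return len(faces)
```

At ImageNet scale this would simply be run over every image; the open question the quote raises is how much detector misses and false positives matter when the dataset trains real-world systems.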

Dave Gershgorn via onezero.medium.com

.


Empiric Logic wins grant to secure data privacy in genomics research (Silicon Republic)

It is crucial that this gets solved:
“Ensuring the privacy and security of sensitive genetic data should be a priority for companies working with this data type,” chief executive Gareth O’Sullivan said. “However, encryption, pseudonymisation and other state-of-the-art privacy and security controls might not necessarily prevent individuals from being re-identified from their genetic data.”
‘One of the methods it will look at is “simulated representations”, where a simulation of the original data is created that cannot be sourced back to the identity of an individual from the original data.’
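To make the idea of a simulated representation concrete, here is a toy Python sketch: synthetic records sampled from fitted per-column distributions, so the output mimics the original data's summary statistics without reproducing any real individual's row. This is an assumption-laden illustration of the general concept, not Empiric Logic's method, and real genomic data would need far more careful modelling than independent Gaussians.

```python
# Toy sketch of a "simulated representation": synthetic records that mimic
# the original data's marginal statistics without copying any real row.
# Illustrative only; not Empiric Logic's actual technique.
import numpy as np

def simulate_representation(data: np.ndarray, n_synthetic: int,
                            seed: int = 0) -> np.ndarray:
    """Sample synthetic rows column by column from fitted Gaussians.

    Each column is drawn independently from its marginal distribution,
    so no synthetic row corresponds to a real individual.
    """
    rng = np.random.default_rng(seed)
    means = data.mean(axis=0)
    stds = data.std(axis=0)
    return rng.normal(means, stds, size=(n_synthetic, data.shape[1]))

# Example: 1,000 real records with 5 numeric features -> 1,000 synthetic ones.
real = np.random.default_rng(1).normal(size=(1000, 5))
synthetic = simulate_representation(real, n_synthetic=1000)
```

The hard part, as the article implies, is preserving enough joint structure for the data to stay scientifically useful while still blocking re-identification; naive marginal sampling like this trades away most of that structure.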

Jonathan Keane via siliconrepublic.com

.