We’re excited to bring Transform 2022 back in person on July 19 and virtually July 20-28. Join AI and data leaders for insightful conversations and exciting networking opportunities. Register today!
It was a week of AI news from Google’s annual I/O developer conference and IBM’s annual THINK conference. But there were also big announcements from the Biden administration about the use of AI tools in hiring and employment, as well as Clearview AI’s settlement of a lawsuit filed by the ACLU in 2020.
Let’s dive in.
Last week I published an editorial, “5 Ways to Tackle Regulations on AI-Enabled Hiring and Employment,” which jumped off the news that last November, the New York City Council passed the first bill in the US to broadly address the use of AI in hiring and employment.
In addition, last month California introduced the Workplace Technology Accountability Act, or Assembly Bill 1651. The bill proposes that employees be informed prior to the collection of data, the use of monitoring tools and the deployment of algorithms, with the right to access and correct the collected data.
This week that story got a big sequel: on Thursday, the Biden administration announced that “employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if they disadvantage job applicants with disabilities.”
As reported by NBC News, Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, who made the announcement along with the Equal Employment Opportunity Commission, said there is “no doubt” that the increased use of these technologies is contributing to “some of the persistent discrimination.”
What does Clearview AI’s settlement with the ACLU mean for enterprises?
On Monday, facial recognition company Clearview AI, which made headlines for selling access to billions of facial photos, settled a lawsuit filed in Illinois two years ago by the American Civil Liberties Union (ACLU) and several other nonprofit organizations. The company was accused of violating an Illinois state law called the Biometric Information Privacy Act (BIPA). Under the terms of the settlement, Clearview AI has agreed to permanently ban most private companies from using its service.
But many experts pointed out that Clearview has little to worry about with this settlement, as Illinois is one of the few states to have such biometric privacy laws.
“It’s largely symbolic,” said Slater Victoroff, founder and CTO of Indico Data. “Clearview is very well connected politically, and so, unfortunately, their business will do better than ever, since this decision is so limited.”
Still, he added that his response to the Clearview AI news was “relief.” The US was, and still is, in a “weak and unsustainable place” on consumer privacy, he said. “Our laws are a messy patchwork that cannot withstand modern AI applications, and I am pleased that some progress has been made towards certainty, even if it is only a small step. I would very much like to see the US enshrine effective privacy in law after the recent lessons of the GDPR in the EU, rather than continue to pass blame.”
AI regulation in the US is the ‘Wild West’
When it comes to AI regulation, the US is really the “Wild West,” Seth Siegel, global head of AI and cybersecurity at Infosys Consulting, told VentureBeat. The bigger question now, he said, should be how the US will deal with companies that collect information in violation of the terms of service of the sites where that data is publicly visible. “Then you have the question of the definition of publicly available — what does that mean?” he added.
But for large enterprises, the current problem is reputational risk, he explained: “If their customers learned about the data they use, would they still be a trusted brand?”
AI suppliers should be careful
Paresh Chiney, partner at global consulting firm StoneTurn, said the settlement is also a warning sign for enterprise AI vendors, who should “act with caution” — especially if their products and solutions risk violating data privacy laws and regulations.
And Anat Kahana Hurwitz, head of legal data at justice intelligence platform Darrow.ai, pointed out that all AI vendors that use biometrics could be affected by the Clearview AI settlement, so they must comply with the Biometric Information Privacy Act (BIPA), which passed in 2008, “when the AI landscape was completely different.” The act, she explained, defined biometric identifiers as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”
“This is legislative language, not scientific language — the scientific community does not use the term ‘facial geometry’ and it is therefore subject to the interpretation of the court,” she said.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.