The chief executive of Google parent company Alphabet backed an EU proposal to temporarily ban facial-recognition technology because of the possibility that it could be used for nefarious purposes.
“I think it is important that governments and regulations tackle it sooner rather than later and gives a framework for it,” Sundar Pichai told a conference in Brussels organized by think tank Bruegel.
The European Commission, which acts as the EU executive, is taking a tougher line on artificial intelligence (AI) than the United States, with plans that would strengthen existing regulations on privacy and data rights, according to an 18-page proposal paper seen by Reuters.
Part of this includes a moratorium of up to five years on using facial-recognition technology in public areas, to give the EU time to work out how to prevent abuses, the paper said.
“It can be immediate but maybe there’s a waiting period before we really think about how it’s being used,” Pichai said. “It’s up to governments to charter the course” for the use of such technology.
Pichai urged regulators to take a “proportionate approach” when drafting rules, days before the Commission is due to publish proposals on the issue.
Regulators are grappling with ways to govern AI, encouraging innovation while trying to curb potential misuse, as companies and law enforcement agencies increasingly adopt the technology.
There was no question that AI needed to be regulated, Pichai said, but rulemakers should tread carefully.
“Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities. This is especially true in areas that are high risk and high value,” he said.
Regulators should tailor rules according to different sectors, Pichai said, citing medical devices and self-driving cars as examples that require different rules.
He urged governments to align their rules and agree on core values.
Earlier this month, the U.S. government published regulatory guidelines on AI aimed at limiting authorities’ overreach, and urged Europe to avoid an aggressive approach.
Pichai said it was important to be clear-eyed about what could go wrong with AI, and that while it promised huge benefits there were real concerns about potential negative consequences.
One area of concern is so-called “deepfakes” – video or audio clips that have been manipulated using AI and which potentially could be created of any individual, saying anything in any setting.
Pichai said Google had released open datasets to help the research community build better tools to detect such fakes.
The world’s most popular internet search engine said last month that Google Cloud was not offering general-purpose facial-recognition application programming interfaces (APIs) while it establishes policy and technical safeguards.