Google and Alphabet CEO Sundar Pichai believes that international alignment will be critical to creating a global regulatory standard for the development of AI.
Artificial intelligence is “too important” not to be regulated because of the damage it could cause if left unchecked, the boss of Google wrote in an op-ed published in the Financial Times.
Pichai called AI “one of the most promising new technologies,” but also highlighted the risks of careless use, citing historical examples of breakthrough technologies that brought new problems with them.
“History is full of examples of how technology’s virtues aren’t guaranteed,” he wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”
And while Pichai isn’t alone in his opinion — the EU, the U.S., and Australia, among others, are currently drafting proposals for AI regulation — the Alphabet CEO argues that how we regulate AI matters as much as whether we regulate it at all.
“The EU and the U.S. are already starting to develop regulatory proposals,” he wrote. “International alignment will be critical to making global standards work. To get there, we need agreement on core values.”
In 2018, Google published its own AI principles to guide the ethical development of AI, alongside open-source tools and code aimed at avoiding bias and protecting privacy. The principles also set out Google’s opposition to mass surveillance and the infringement of human rights.
“We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes,” writes Pichai. “Government regulation will also play an important role.”
He pointed to Europe’s General Data Protection Regulation (GDPR) as a “strong foundation,” and said Google wants to partner with regulators, offering its own expertise and tools to “navigate these issues together.”
You can read Pichai’s entire op-ed over at Financial Times (paywall).