Google CEO Sundar Pichai Says There Is a Need for Government Regulation of AI: 'There Have To Be Consequences'

In an interview with "60 Minutes," Google CEO Sundar Pichai said AI is the most "profound technology humanity is working on — more profound than fire or electricity."
The capabilities of artificial intelligence — and the speed at which the technology is being released to the public — are garnering a mix of reactions from tech enthusiasts, CEOs, and experts.
For Google CEO Sundar Pichai, AI is an increasingly important aspect of Google's business — the company released its AI chatbot, Bard, in February and has other projects on the horizon, like a prototype called "Project Starline," which aims to enhance video conferencing by simulating a more lifelike experience.
In an interview with "60 Minutes" on Sunday, Pichai said AI is one of the most significant discoveries of our time.
"I have always thought of AI as the most profound technology humanity is working on — more profound than fire or electricity," Pichai said in the interview. "We are developing technology that will be far more capable than anything we have ever seen before."
Pichai told the program that there should be government regulation of AI, especially with the emergence of deepfakes, saying the approach to the technology would be "no different" from the way the company tackled spam in Gmail.
Related: We Asked Google's AI Bard How To Start A Business. Here's What It Said.
"We are constantly developing better algorithms to detect spam," Pichai said. "We would need to do the same thing with deep fakes, audio, and video. Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society."
In March, an open letter signed by tech leaders and CEOs (notably Elon Musk and Apple co-founder Steve Wozniak) called for a six-month pause on AI development to manage and assess potential risks. To date, the letter has over 26,000 signatures.
Related: Bill Gates Doesn't Agree With The Movement to Pause AI Development — Here's Why