Stuart Russell, a professor at UC Berkeley and a leading expert on artificial intelligence, is expected to take the stand this week or next on behalf of plaintiff Elon Musk. Musk is suing his former partners at OpenAI, the AI lab he co-founded in 2015, alleging they “looted” the nonprofit for their own gain by converting it into a for-profit venture now valued at $852 billion.
Steve Molo, an attorney for Musk, argued that testimony about AI risks, including the potential catastrophic scenarios referenced in the witness’s report, should be allowed at trial.
“It’s more than a little ironic to keep the jury from understanding what are scientifically accepted [risks],” Molo protested.
“He can testify to the risks I’ve found credible,” responded U.S. District Judge Yvonne Gonzalez Rogers, siding with defense attorneys, who had lobbied to bar such testimony.
Artificial general intelligence, or AGI, is generally understood as the hypothetical point at which AI reaches or surpasses human cognitive abilities and can operate autonomously, a development many experts warn would pose an existential threat to humanity.
Musk said he conceived of OpenAI as a counterweight to Google’s DeepMind AI project after concluding that his close friend and Google co-founder Larry Page was insufficiently concerned about the risks.
“Those are real risks,” Molo said Thursday, referring to potential extinction scenarios experts have projected could result from the runaway advancement of superhuman digital intelligence.
“I believe you may believe that,” the judge said.
“I more than believe that. It is the opinion of many experts all over the world,” Molo said, pointing to Russell as the world’s “foremost expert” in artificial intelligence.
“This is a real risk, we all could die. We all could die because of AI,” Molo said, suggesting OpenAI positions itself as a company that is trying to prevent such risks.
“It is also ironic that your client, despite these risks, is eviscerating a company that’s in the exact space. I suspect there are plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands,” Gonzalez Rogers said. “But it doesn’t matter, we aren’t going to get into those issues.”
Gonzalez Rogers said she didn’t want the issue exploited in her courtroom “for the world to see,” at least not in this trial, in which legal claims relate to violation of a charitable trust.
“There are risks, sure. This is not a trial on the safety risks of artificial intelligence. This is not a trial on whether or not AI has damaged humanity. It could be that one day in a federal court in this country we have that trial. That is not this trial. We are not going to get sidetracked on that issue in this trial.”
Gonzalez Rogers acknowledged safety is an issue, “but there are boundaries.”
