In a filing with the IRS, OpenAI Inc., OpenAI’s nonprofit org, disclosed that it awarded a grant to Duke University researchers for a project titled “Research AI Morality.”
An OpenAI press release indicated the award is part of a larger, three-year, $1 million grant to Duke professors studying “making moral AI.”
Little is public about this “morality” research OpenAI is funding, other than the fact that the grant ends in 2025. The study’s principal investigator, Walter Sinnott-Armstrong, a practical ethics professor at Duke, told TechCrunch via email that he “will not be able to talk” about the work.
According to the press release, the goal of the OpenAI-funded work is to train algorithms to “predict human moral judgements” in scenarios involving conflicts “among morally relevant features in medicine, law, and business.”
Read more on ‘AI morality’ at the link in the bio
Article by Kyle Wiggers
Image Credits: Chip Somodevilla / Staff / Getty Images
#TechCrunch #technews #artificialintelligence #OpenAI #ChatGPT #generativeai