Digital Commons @ St. Norbert College - The Killeen Chair of Theology & Philosophy Lecture Series: Existential Risk and the Artificial Will
 

Existential Risk and the Artificial Will


About the Speaker

J. Dmitri Gallow, a senior research fellow at the Dianoia Institute of Philosophy at Australian Catholic University, focuses on the metaphysics of causation and chance, the rational norms governing credence and choice, and the connections between those topics. He also has interests in the philosophy of science, metaphysics, epistemology, ethics, the philosophy of language, the philosophy of economics, and logic. He previously taught philosophy at the University of Pittsburgh and at New York University.

Start Date

11-14-2023 7:00 PM

Description

In keeping with this theme, Gallow will discuss the process of creating artificial minds as humanity builds intelligence into machines. Some have argued that uncertainty about the wills such agents will develop poses an existential risk to humanity, alleging that most of the desires an artificial agent could acquire would make it rational for the agent to take steps to disempower humanity. Gallow will investigate this thesis using the tools of rational choice theory: if an artificial agent's desires are sampled at random, the agent will be somewhat more likely to make choices that leave less up to chance, that afford it more options later on, and that prevent its desires from being changed.
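The point about option-preserving choices can be illustrated with a minimal Monte Carlo sketch. This is not Gallow's own analysis; it simply assumes a toy setup in which an agent's utilities over three hypothetical outcomes (labelled o1 through o3 here) are drawn independently and uniformly at random, and compares an option that keeps all three outcomes available against one that commits the agent to a single outcome.

```python
import random

def sample_utilities(outcomes):
    """Assign an independent uniform(0, 1) utility to each outcome."""
    return {o: random.random() for o in outcomes}

def simulate(trials=100_000, seed=0):
    """Estimate how often randomly sampled desires strictly favour the
    option that keeps more future choices open."""
    random.seed(seed)
    outcomes = ["o1", "o2", "o3"]  # hypothetical final outcomes
    prefers_open = 0
    for _ in range(trials):
        u = sample_utilities(outcomes)
        value_open = max(u.values())   # agent can later pick its favourite outcome
        value_committed = u["o1"]      # agent is locked into one outcome now
        if value_open > value_committed:
            prefers_open += 1
    return prefers_open / trials

if __name__ == "__main__":
    print(f"Fraction of sampled desire profiles favouring the open option: {simulate():.3f}")
```

Under these assumptions the open option is never worse and is strictly better whenever the committed outcome is not the agent's favourite, which happens for about two thirds of sampled desire profiles; the simulation's output should hover near 0.667.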

J. Dmitri Gallow
