Wednesday, November 30 | Via Zoom

Link: https://us02web.zoom.us/j/85848792602?pwd=VHdRYWFsNitlanp1am1Ya1pIcDFUdz09


Advanced automation and intelligent systems have become a major part of our lives. As these systems grow more sophisticated, a “responsibility gap” emerges in our ability to divide causal responsibility for outcomes between humans and systems. To bridge this gap, we developed a theoretical Responsibility Quantification (ResQu) model of human responsibility in intelligent systems. The model provides a method for computing the human’s causal responsibility for outcomes, considering characteristics of the human, the intelligent system, and the situation. It can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.
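The abstract does not spell out how the model computes responsibility. As a loose illustration only (not the published ResQu definition), one might express a human's responsibility share as the fraction of the total reduction in outcome uncertainty that is attributable to the human's decision after the system has already acted; all numbers and function names below are hypothetical:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def human_responsibility(h_prior, h_after_system, h_final):
    """Illustrative responsibility share: the fraction of the total
    uncertainty reduction contributed by the human's decision,
    taken after the system's output. A simplified stand-in for
    intuition, not the actual ResQu formula."""
    total_reduction = h_prior - h_final
    if total_reduction <= 0:
        return 0.0
    return (h_after_system - h_final) / total_reduction

# Hypothetical example: outcome uncertainty before anyone acts,
# after the system's recommendation, and after the human's decision.
h_prior = entropy([0.5, 0.5])         # 1.0 bit
h_after_system = entropy([0.9, 0.1])  # ~0.469 bits
h_final = entropy([0.99, 0.01])       # ~0.081 bits
print(round(human_responsibility(h_prior, h_after_system, h_final), 3))  # → 0.422
```

On these illustrative numbers the system removes most of the uncertainty before the human acts, so the human's share is well under half, which is the qualitative point the talk makes about highly capable systems.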

In a series of laboratory studies, we assessed the descriptive abilities of the ResQu model, given that people do not necessarily act as prescribed by economic theory. The model's predictions were strongly correlated both with the responsibility participants actually took on and with their subjective assessments of responsibility.

The research implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their relative causal responsibility will often be small, even if the human is formally assigned a major role in the system. Simply putting a human in the loop does not ensure that the human will meaningfully contribute to the outcomes.