Workshop Abstracts
Day 1 Abstracts
Caroline Arruda, Tulane University
TBA
Sven Nyholm, LMU
Alan Turing famously suggested that we should replace the question of whether machines can think with the question of whether machines can convincingly imitate intelligent human behavior. This so-called Turing test, which Turing himself called the imitation game, has in effect recently been updated in a fascinating way with the advent of large language models that can be fine-tuned to imitate particular people. Such digital duplicates of real people could be used to write papers in our own or other people’s style, to predict our preferences while we are in a coma, to attend boring meetings or job interviews in our place, to provide entertainment, and so on. This could be done with or without the consent of the people being digitally duplicated or imitated, with or without the knowledge of the people who might be interacting with such digital duplicates, and so on. In my presentation, I will draw on recent joint work with John Danaher to reflect on the permissibility and desirability of digital duplicates of particular people and to identify some basic deontic and axiological considerations that bear on these questions.
Eric Schwitzgebel, UC Riverside
If we create genuinely conscious, social AI systems, it will be tempting to make them safe and deferential. Safe systems are “aligned” with human interests: they are unlikely to harm us, and they act to advance rather than thwart human interests. Deferential systems adhere to the commands and preferences of their owners or users (within the bounds of safety). However, some genuinely conscious, social AI systems would be, and ought to be, our moral equals. If so, building them to be safe and deferential would be to create a race of second-class citizens who inappropriately devalue themselves. One might argue on consequentialist grounds that as long as such entities are (a) happy and (b) wouldn’t otherwise exist, there is no intrinsic wrong in bringing them into existence. I present several thought experiments to the contrary, arguing that such cases are the moral equivalents of human slavery, organ farming, and wrongful servitude.
Jessica Morley, Yale
This talk will give an overview of what algorithmic clinical decision support software is and outline how and why it poses a threat to the therapeutic relationship by challenging patient and clinician autonomy as well as epistemic certainty. It will then explain why this matters and what can be done about it.
Day 2 Abstracts
David Tamez, KU (I2S)
In Law and Social Cognition (2013), Frederick Schauer and Barbara Spellman begin their exposition by noting the clear influence legal institutions have on the social cognitive functions of human agents. Certainly, much ink has already been spilt discussing the dangerous assumption within legal institutions that human rationality is unbounded. Research in behavioral psychology and economics, for example, has shown that humans not only make mistakes, but that the mistakes they make are due to embedded cognitive quirks. Many of these quirks not only impede our ability to make sound, practical decisions but, in some cases, also lead us to break the law. With the advent of tools utilizing a host of emerging technologies collectively viewed as some form of artificial intelligence, two questions can be asked. First, in what ways can AI improve our decision-making and, in particular, our observance of the law? Second, in what ways might AI impede our ability to be law-abiding citizens? In this paper, I explore the latter question, focusing primarily on the influence AI tools can have on our epistemic environments and the potential ramifications this may have for our legal cognition.
Joshua Rust, Stetson University
Standard notions of full-fledged agency are not able to capture all agentive phenomena, motivating the development of a broader, more minimal conception of agency. Proposing the "precedential account" of minimal agency, I consider the extent to which this account applies not just to a broad swath of living systems, including single-celled organisms, but also to two categories of artificial systems: social institutions and large language models.
Anna Strasser, LMU
In recent years, generative AI technology has given rise to impressive progress in Natural Language Processing (NLP), and especially since the release of GPT-3 and ChatGPT, large language models (LLMs) have become a prominent topic of international public and scientific debate. Opinions on the potential of LLMs vary widely: it is unclear what capabilities can justifiably be attributed to them, and it does not seem possible to foresee where the further development of such models will lead. This talk investigates the status of LLMs in human-machine interactions (HMIs). What are we doing when we interact with LLMs? Are we just playing with an interesting tool? Are we somehow enjoying a strange way of talking to ourselves? Or are we, in some sense, acting jointly with a collaborator? These questions lead to controversies about the classification of HMIs. I will argue that we might be witnessing the emergence of phenomena that can be classified neither as mere tool use nor as proper social interactions but rather constitute something in between. Analyzing HMIs, one can observe that our sociality somehow gains traction in such interactions and that our usual way of thinking about “tool use” does not fully do justice to their nature. Exploring how one might revise our conceptual frameworks to account for such borderline phenomena, I will discuss the pros and cons of ascribing some form of (social) agency to LLMs and the possibility that future LLMs might be junior participants in asymmetric joint actions.
Cameron Buckner, University of Florida
Moral philosophy and the philosophy of agency have long suggested that empathy and perspective-taking are key cognitive abilities involved in human moral and social cognition. Unsurprisingly, a number of machine learning theorists have attempted to make AI models "more empathetic" by training them to identify emotions and affective responses in humans. While this is a start toward modeling the role of empathy in human social cognition, it represents only the earliest steps, and it has often been pursued only in the service of making more engaging chatbots and assistants rather than of modeling mature human moral and social decision-making. I argue here that modeling the role that empathy should play in mature human moral and social decision-making involves a richer ability to engage in flexible perspective-taking with many different kinds of agents. This ability is akin to reasoning flexibly in spatial navigation tasks using cognitive maps, but it instead requires agents to learn a map of social and moral space by learning about the experiences, values, and affective responses of many different kinds of social agents.