Center for Cyber Social Dynamics Fellows



Jack Horner

Software, Measurement, and Trust

Jack Horner is a retired software and systems engineer and a former profit-and-loss-center manager at a Fortune 500 systems integration and development company. His 40 years of development experience span scientific supercomputing applications and tool development; command-and-control, sensor, and flight software; satellite-based navigation; and enterprise systems software. He has thirteen years of experience teaching introductory physics, mathematics, philosophy, and computer science, and 170 publications in refereed venues. His current research interests include applications of automated theorem proving in mathematics and characterizing the limits of software verification.


Denisa Kera

Philosophy, human-technology relationship, blockchain technologies

Denisa Kera is a philosopher and designer who experiments with creative strategies of public engagement in emerging science and technology issues. She uses design methods (UX, critical design, design fiction, future scenarios, participatory design), ethnography, and prototyping to research Science, Technology and Society (STS) issues. She spent the last decade as an Assistant Professor at the National University of Singapore, as a Senior Lecturer in Future Design at Prague College, and most recently as a Visiting Assistant Professor at Arizona State University. Currently, she is based in the BISITE group as a Marie Curie Research Fellow working on distributed ledger (blockchain) technologies.


Justin Mullins

Philosophy, Machine Learning, Trustworthy AI

Justin is a machine learning researcher and philosopher with a diverse academic background. His research interests include the foundations of machine learning, AI epistemology, and the development of trustworthy AI systems. He has a PhD in philosophy, focusing on the history of analytical philosophy, philosophy of science, and mathematical logic.


Najarian R. Peters

Najarian R. Peters is a scholar and law professor whose research spans privacy, torts, education, and artificial intelligence (AI) governance and ethics. Professor Peters has been a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University since 2019. Her interdisciplinary work centers liberatory, human-dignity-centered frameworks. She provides continuing legal education in AI ethics and has been invited to present her AI and child-privacy research both domestically and internationally, including at the Gamm Symposium at Tulane University and the Second Istanbul Privacy Symposium: Law, Ethics, and Technology at Istanbul Bilgi University in Istanbul, Turkey. She developed and designed a new course for the KU honors program called AI Governance and Privacy Law. Her forthcoming publication, co-authored with David Tamez, PhD, is titled Liberatory Technologies: A Framework for AI Legitimacy in Education. Her law articles and essays have been published in national and international journals, including the Michigan Journal of Race & Law, University of California Law Review, Washington & Lee Law Review, and Seton Hall Law Review, as well as the 5Rights Foundation’s Digital Futures Commission publication Education Data Futures: Critical, Regulatory and Practical Reflections. Professor Peters created the privacy-focused conference PrivacyPraxis in 2020 and co-designed the Wellness in Democracy series at KU, which she has co-hosted since 2022 and which focuses on misinformation and disinformation. Professor Peters is the inaugural AI Governance and Law fellow at the Center for Cyber Social Dynamics.


Petr Spelda

Machine Learning, AI Alignment, and the Human-Technology Relationship

I am an Assistant Professor at the Department of Security Studies, Charles University, Prague.

I am interested in safe machine learning and the means to achieve it. Most of my work deals with inductive inference in various learning frameworks. I tend to believe that the formal study of inductive inference is important for the safety of artificial intelligence.

I am writing a book on AI alignment and (social) preference learning.

I collaborate with Vit Stritecky, my colleague at Charles University, and with John Symons of The University of Kansas.


David Westbrook

Law, Social Dynamics, and Emerging Technologies

David A. Westbrook thinks and writes about the social and intellectual consequences of contemporary political economy. His work influences numerous disciplines, including law, economics, finance, sociology, anthropology, cultural studies, and design. He has spoken on six continents to academics, business and financial leaders, members of the security community, civil institutions, and governments, often with the sponsorship of the U.S. State Department.