I2S Center for Cyber-Social Dynamics Receives Two Awards for Research Focused on Human-AI Interaction and LLM Vulnerabilities
The Center for Cyber-Social Dynamics (CCDS), a research center at KU’s Institute for Information Sciences (I2S), recently received support for two new research projects that seek to develop frameworks for understanding and shaping the future of Artificial Intelligence (AI) use as it becomes increasingly prevalent in personal and professional contexts.
The first project, Resilient Intelligence: Addressing Cognitive Vulnerabilities in LLMs, is grounded in the recognition that Large Language Models (LLMs) play an increasingly prominent role as advanced content delivery systems. Given the rapid pace of AI development across the industry and its underlying technology, the project aims to understand how individuals and groups use LLMs, and how these tools are likely to be utilized in the future. “We believe it is essential for identifying and mitigating epistemic blind spots—areas of cognitive or systemic weakness that can distort knowledge or decision-making,” says David Tamez, CCDS managing director and a principal investigator on the project.
To evaluate the framework, investigators will use the intelligent sociotechnical systems (iSTS) concept, which emphasizes human-centered joint optimization across four hierarchical levels. This evaluation will involve multiple methodologies tailored to each level: individual human-AI systems, organizations, ecosystems, and social systems.
The second project, Ethical Trust and Decision Support in Human-AI Interaction, is based on the reality that AI systems are increasingly being integrated into high-stakes decision-making contexts, such as healthcare diagnostics and disaster response, as well as into personal relational settings, such as caregiving and companionship. “Trust in these systems is not merely a technical issue but an ethical one, involving key questions of competence, transparency, and accountability,” says Oluwaseun Sanwoolu, graduate research assistant at CCDS and the project’s principal investigator. “Since trust is central to successful human-AI collaboration, trust in AI must be grounded in ethical principles to avoid both blind reliance and unwarranted mistrust.”
Sanwoolu’s research investigates the interplay of user preferences, contextual factors, and specificity in fostering trust in human-AI systems. By examining trust in close personal relationships and high-stakes decision-making, the project seeks to balance individual preferences with the need to encourage broader perspectives.
Both projects received funding agreements from Cyber Pack Ventures, a Maryland-based firm that provides technical and managerial consulting services to industry and government in the technical, scientific, procedural, and operational disciplines associated with national security. The CCDS awards are part of the firm’s 2024-2025 C3E Cybersecurity Challenge Problem program.
The projects are scheduled to kick off in summer 2025.