We use rigorous mixed-methods inquiry to understand and improve how humans and AI systems collaborate, building the scientific foundation for trustworthy, high-performing human-AI teams across mission-critical domains.
Human-AI teaming sits at the intersection of human factors, cognitive systems engineering, and AI. We study how humans and AI agents develop shared understanding, coordinate actions, and sustain effective collaboration, particularly in dynamic, high-stakes environments where mistakes are costly.
We examine team cognition constructs including shared mental models, transactive memory systems, and situational awareness within mixed human-AI teams, using both controlled laboratory experiments and real-world applied contexts.
Trust is the currency of effective human-AI collaboration. Our lab investigates how trust develops, breaks, and recovers in human-AI teams, with particular attention to the ethical dimensions of AI behavior and their downstream effects on team performance and decision-making.
Effective human-AI teaming begins before deployment. We study how training environments, instructional design, and skill-transfer strategies prepare both humans and AI agents for collaborative performance in operational settings.
Maintaining shared situational awareness (SA) is critical when humans and AI operate together in dynamic, time-pressured environments. Our research examines how AI teammates can enhance, or undermine, the team's collective understanding of ongoing situations.
We extend human-AI teaming research into physical collaborative systems, studying how humans interact with autonomous robots in manufacturing, logistics, and emergency-response environments.
Rigorous science requires rigorous methods. We develop and apply novel experimental frameworks to test the effectiveness, safety, and reliability of AI teammates in both controlled and naturalistic settings.
ARCS Lab research is fundamentally interdisciplinary and mixed-methods, combining the rigor of experimental science with the depth of qualitative inquiry.
Controlled experiments, surveys, behavioral measurement, structural equation modeling, network analysis, and statistical inference to test causal hypotheses about human-AI interaction.
Thematic analysis, think-aloud protocols, semi-structured interviews, and expert elicitation to contextualize findings and surface constructs that resist quantification.
Reinforcement learning models, game-theoretic frameworks, agent-based simulation, and generative AI tools to build and test formal models of human-AI collaborative behavior.
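As a flavor of what agent-based simulation of human-AI collaborative behavior can look like, here is a toy sketch of a human agent updating trust in an AI teammate over repeated decisions. The model, its parameters (`learning_rate`, the assumed 0.6 unaided human accuracy), and the function name are illustrative assumptions for this page, not the lab's actual models.

```python
import random

def simulate_team(ai_reliability, rounds=200, learning_rate=0.1, seed=0):
    """Toy agent-based sketch (hypothetical model): a human decides whether
    to follow an AI teammate's advice, then updates trust after seeing
    whether the advice was correct."""
    rng = random.Random(seed)
    trust = 0.5  # initial neutral trust
    correct = 0
    for _ in range(rounds):
        ai_correct = rng.random() < ai_reliability
        # The human follows the AI's advice with probability equal to trust.
        follows = rng.random() < trust
        if follows:
            correct += ai_correct
        else:
            correct += rng.random() < 0.6  # assumed unaided human accuracy
        # Trust moves toward 1 after observed successes, toward 0 after failures.
        target = 1.0 if ai_correct else 0.0
        trust += learning_rate * (target - trust)
    return trust, correct / rounds

final_trust, accuracy = simulate_team(ai_reliability=0.9)
```

Even this minimal model reproduces a qualitative pattern from the trust literature: trust calibrates toward the AI's observed reliability, and miscalibrated trust (over- or under-reliance) degrades team accuracy.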
Our research addresses real-world challenges across high-stakes domains where human-AI teaming has the greatest potential impact.
Shared situational awareness in electronically contested environments, AI on the battlefield, and mitigation of compromised AI for warfighters.
Collaborative robotics integration, human-cobot workload distribution, and trust in automated manufacturing workflows.
Ethical AI decision support in clinical settings, adaptive autonomy for healthcare AI teammates, and human factors in medical AI deployment.
Human-robot teaming in search-and-rescue, AI decision support for first responders, and team cognition under high-stress conditions.
Human behavior modeling in cybersecurity contexts, cognitive biases in security decision-making, and AI-assisted defense strategies.
AI teammate integration in organizational contexts, workforce augmentation, and the human dimensions of AI-enabled productivity.
AI ethics in practice, responsible deployment frameworks, and the societal implications of human-AI teaming at scale.
Human-AI complementarity in complex decisions, cognitive bias mitigation, and advice-taking from AI systems under uncertainty.