01
🤝
Core Focus
Human-AI Teaming

Human-AI teaming sits at the intersection of human factors, cognitive systems engineering, and AI. We study how humans and AI agents develop shared understanding, coordinate actions, and sustain effective collaboration, particularly in dynamic, high-stakes environments where mistakes are costly.

We examine team cognition constructs including shared mental models, transactive memory systems, and situational awareness within mixed human-AI teams, using both controlled laboratory experiments and real-world applied contexts.

Shared mental model development and accuracy in human-AI teams
Transactive memory systems and information distribution across mixed teams
Team cognition under degraded communication and adversarial conditions
Workload balancing and adaptive task allocation
Situation awareness in electronically contested environments
Representative Publications
Schelble, Mallick, & McNeese (2025). A Mixed Methods Approach to Analyzing the Role of AI Teammates in Transition Phases. PACM HCI (GROUP), 2025
Schelble et al. (2021). Let's Think Together! Assessing Shared Mental Models, Performance, and Trust in Human-Agent Teams. PACM HCI (GROUP), 2021 ★ Best Paper Honorable Mention
McNeese, Schelble et al. (2021). Who/What is My Teammate? Team Composition Considerations in Human-AI Teaming. IEEE Trans. HMS, 2021
Gonzalez, ... Schelble ... & Woolley (2026). Toward a Science of Human–AI Teaming for Decision-Making. PNAS Nexus, 2026
02
🔒
Trust & Ethics
Trustworthy AI

Trust is the currency of effective human-AI collaboration. Our lab investigates how trust develops, breaks, and recovers in human-AI teams, with particular attention to the ethical dimensions of AI behavior and their downstream effects on team performance and decision-making.

Trust calibration, repair, and spread in mixed human-AI teams
The role of AI ethics and moral personas on trust and performance
Context-dependent trust: how domain risk and team role shape reliance
Designing transparent and explainable AI teammate behaviors
Trust in ethically charged and adversarial decision environments
Representative Publications
Schelble et al. (2025). Addressing the Role of Context on Trust in Human-AI Teams. Ergonomics, 2025
Zhang, Flathmann, Musick, Schelble et al. (2024). I Know This Looks Bad, But I Can Explain. ACM TIIS, 2024 ★ Featured Article
Textor, Zhang, Lopez, Schelble et al. (2022). Exploring the Relationship Between Ethics and Trust in Human-AI Teaming. JCEDM, 2022 ★ Best JCEDM Article
Schelble et al. (2022). Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust, and Performance. Human Factors, 2022
03
📚
Learning
Training

Effective human-AI teaming begins before deployment. We study how training environments, instructional design, and skill-transfer strategies prepare both humans and AI agents for collaborative performance in operational settings.

Collaborative vs. independent training paradigms for human-AI teams
Psychological fidelity in simulation-based training
Skill transfer from training to operational settings
Intelligent tutoring and adaptive training interventions
Generative AI-powered simulation environments for future teaming
Representative Publications
Flathmann, Schelble, & Galeano (2024). Empirical Impacts of Independent and Collaborative Training on Task Performance. HFES, 2024
Flathmann, Ihekweazu, & Schelble (2025). Leveraging Generative AI to Create Lightweight Simulations for Far-Future Autonomous Teammates. HFES, 2025
04
🎯
Awareness
Situational Awareness

Maintaining shared situational awareness (SA) is critical when humans and AI operate together in dynamic, time-pressured environments. Our research examines how AI teammates can enhance, or undermine, the team's collective understanding of ongoing situations.

Shared situation awareness frameworks for human-AI teams
AI information-sharing strategies that improve collective SA
Compromised AI detection through SA and mental-model accuracy
Real-time adaptive interfaces for mission-critical environments
Spatial information presentation in distributed multi-agent teams
Representative Publications
Schelble, Mallick, Hauptman, & McNeese (2025). Should AI Teammates Give All the Answers? IJHCI, 2025
Schelble et al. (2022). I See You: Examining the Role of Spatial Information in Human-Agent Teams. PACM HCI (CSCW), 2022
Schelble et al. (2024). Modeling Perceived Information Needs in Human-AI Teams. Behaviour & Information Technology, 2024
05
🤖
Robotics
Applied Robotics

We extend human-AI teaming research into physical collaborative systems, studying how humans interact with autonomous robots in manufacturing, logistics, and emergency-response environments.

Human-robot teaming in advanced and flexible manufacturing
Autonomous ground vehicles in logistics and contested environments
Physical and cognitive trust calibration in embodied AI teammates
Risk-based autonomy policies for collaborative robots
Safety-critical handoffs between human and robotic teammates
Representative Publications
O'Neill, Flathmann, McNeese, Jones, & Schelble (2024). A Comment on 'Can You Outsmart the Robot?'. Academy of Management Discoveries, 2024
Hauptman, Schelble, Flathmann, & McNeese (2024). The Role of Autonomy Levels and Contextual Risk in Designing Safer AI Teammates. IEEE ICHMS, 2024
06
🔬
Methods
Evaluation & Validation

Rigorous science requires rigorous methods. We develop and apply novel experimental frameworks to test the effectiveness, safety, and reliability of AI teammates in both controlled and naturalistic settings.

Mixed-methods experimental designs for human-AI team research
Behavioral and physiological measures of team performance
Validated survey instruments for trust, SA, and team cognition
Wizard-of-Oz and simulation platforms for AI teammate evaluation
Longitudinal and within-subjects team dynamics studies
Representative Publications
Schelble, Lancaster, Mallick et al. (2024). A Comparative Evaluation of Ad Hoc Team Performance in Modern Collaborative Technology. HFES, 2024
Schelble et al. (2022). Investigating the Effects of Perceived Teammate Artificiality. IJHCI, 2022
Musick, O'Neill, Schelble et al. (2021). What Happens When Humans Believe Their Teammate is an AI? Computers in Human Behavior, 2021
Methodology

How We Work

ARCS Lab research is fundamentally interdisciplinary and mixed-methods, combining the rigor of experimental science with the depth of qualitative inquiry.

📊
Quantitative

Controlled experiments, surveys, behavioral measurement, structural equation modeling, network analysis, and statistical inference to test causal hypotheses about human-AI interaction.

💬
Qualitative

Thematic analysis, think-aloud protocols, semi-structured interviews, and expert elicitation to contextualize findings and surface constructs that resist quantification.

⚙️
Computational

Reinforcement learning models, game-theoretic frameworks, agent-based simulation, and generative AI tools to build and test formal models of human-AI collaborative behavior.

Applied Domains

Where We Work

Our research addresses real-world challenges across high-stakes domains where human-AI teaming has the greatest potential impact.

⚔️
Defense & Command

Shared situational awareness in electronically contested environments, AI on the battlefield, and compromised AI mitigation for warfighters.

🏭
Advanced Manufacturing

Collaborative robotics integration, human-cobot workload distribution, and trust in automated manufacturing workflows.

πŸ₯
Healthcare

Ethical AI decision support in clinical settings, adaptive autonomy for healthcare AI teammates, and human factors in medical AI deployment.

🆘
Emergency Response

Human-robot teaming in search-and-rescue, AI decision support for first responders, and team cognition under high-stress conditions.

🔐
Cybersecurity

Human behavior modeling in cybersecurity contexts, cognitive biases in security decision-making, and AI-assisted defense strategies.

💼
Future of Work

AI teammate integration in organizational contexts, workforce augmentation, and the human dimensions of AI-enabled productivity.

⚖️
Responsible AI

AI ethics in practice, responsible deployment frameworks, and the societal implications of human-AI teaming at scale.

🧠
Decision-Making

Human-AI complementarity in complex decisions, cognitive bias mitigation, and advice-taking from AI systems under uncertainty.