{"id":891,"date":"2024-05-15T08:32:26","date_gmt":"2024-05-15T15:32:26","guid":{"rendered":"https:\/\/labs.engineering.asu.edu\/adapt\/?page_id=891"},"modified":"2025-07-18T13:47:10","modified_gmt":"2025-07-18T20:47:10","slug":"research","status":"publish","type":"page","link":"https:\/\/labs.engineering.asu.edu\/adapt\/research\/","title":{"rendered":"Research"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>We conduct laboratory and field-based research, apply systems thinking to human-automation integration problems, assess performance impacts between human-machine dyads, and use mixed methods to improve technology integration in high-criticality work settings. Advances in automation have led to increasingly capable machines, from adaptive algorithms to embodied social agents. Instead of operating remotely or autonomously in well-controlled environments, new automation is moving into our more unpredictable human world. These changes can shift system goals from reliability to resilience. We take &#8220;resilience&#8221; to mean the sustained ability of a system to adapt to future surprises as conditions evolve. Our findings indicate the importance of considering social exchange factors in human-machine systems and the need for human-agent cooperation to support system resilience. Below are select publications that highlight some of the contributions in each topic area. 
For a more complete catalog of our research, please visit&nbsp;<a href=\"https:\/\/scholar.google.com\/citations?user=eH3YWtEAAAAJ&amp;hl=en\">Google Scholar<\/a>.<\/p>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Current Projects<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>Human-agent Cooperation to Support System Resilience<\/strong><\/h3>\n\n\n\n<p>From self-driving vehicles to sophisticated decision support systems, automation is being built to be increasingly autonomous. Computational advances have led to machines capable of automating whole functions that previously required human operation or intervention at various steps along the way. As such, workers and machines are entering into increasingly coordinative and cooperative relationships, in contexts where successful interactions demand teamwork. Resilience Engineering approaches the design of such systems as consisting of interactive agents (human or machine), collaborating to achieve shared goals in dynamic and high-criticality or safety-sensitive environments. Recent projects study human-agent cooperation on dynamic and uncertain tasks, using simulated microworld environments.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Chiou, E. K., &amp; Lee, J. R. (2025, March 18). <em>Meaningful control in human-AI systems: Trusting AI agents<\/em>. NATO STO Symposium (HFM-RSY-377) on Meaningful Human Control in Information Warfare, Amsterdam, The Netherlands. <a href=\"https:\/\/doi.org\/10.14339\/STO-MP-HFM-377\">https:\/\/doi.org\/10.14339\/STO-MP-HFM-377<\/a><\/li>\n\n\n\n<li>Chiou, E. K. (2024). Failing to Grasp our Failure to Grasp Automation Failure. <em>Journal of Cognitive Engineering and Decision Making<\/em>, 15553434241228799. 
<a href=\"https:\/\/doi.org\/10\/gthgvs\">https:\/\/doi.org\/10\/gthgvs<\/a><\/li>\n\n\n\n<li>Chiou, E. K., &amp; Lee, J. D. (2016). Cooperation in human-agent systems to support resilience: A microworld experiment. <em>Human Factors: The Journal of the Human Factors and Ergonomics Society<\/em>, <em>58<\/em>(6), 846\u2013863. <a href=\"https:\/\/doi.org\/10\/f82nh8\">https:\/\/doi.org\/10\/f82nh8<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-style-uds-figure\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"405\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/Figure23.png\" alt=\"\" class=\"wp-image-668\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/Figure23.png 720w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/Figure23-500x281.png 500w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><figcaption class=\"wp-element-caption\"><em>Image description: Diagrammatic depiction of the differences between technology-centered, human-centered, and relational approaches to human-agent interactions<\/em>. 
<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full is-style-plain\"><img loading=\"lazy\" decoding=\"async\" width=\"1961\" height=\"627\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity.png\" alt=\"\" class=\"wp-image-979\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity.png 1961w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity-500x160.png 500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity-1500x480.png 1500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity-1000x320.png 1000w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/lightbulb-responsibity-1536x491.png 1536w\" sizes=\"auto, (max-width: 1961px) 100vw, 1961px\" \/><figcaption class=\"wp-element-caption\"><em><sub>Image description: A visual example showing that when we transition from a light switch to a motion sensor light, the complexity of the task context increases. It is a metaphor to demonstrate a current gap in test and evaluation of modern AI-enabled systems.<\/sub><\/em><\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>AI-enabled Decision Support Systems<\/strong><\/h3>\n\n\n\n<p>Artificial intelligence (AI) technologies are being deployed in national security, defense, and criminal justice. Previous work has shown that some of these technologies should not be fully automated in tasks with ethical and legal implications. 
AI should therefore defer decisions to a human when it cannot predict accurately or fairly. The goal of understanding human-AI decision-making is to balance the benefits of AI outputs with the benefits of human intelligence and discernment. Recent projects have studied an AI-enabled face recognition system that, under certain conditions, defers identification of&nbsp;travelers to a human security officer. Ongoing projects are investigating the use of LLMs in intelligence analysis tasks. Our goal is to understand the factors that affect overall system performance and the broader social impact of such systems.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cohen, M. C., Kim, N., Ba, Y., Pan, A., Bhatti, S., Salehi, P., Sung, J., Blasch, E., Mancenido, M. V., &amp; Chiou, E. K. (2025). PADTHAI-MM: Principles-based approach for designing trustworthy, human-centered AI using the MAST methodology. <em>AI Magazine<\/em>, <em>46<\/em>(1), e70000. <a href=\"https:\/\/doi.org\/10\/g9fm69\">https:\/\/doi.org\/10\/g9fm69<\/a><\/li>\n\n\n\n<li>Gr\u00f6ner, F., &amp; Chiou, E. K. (2024). Investigating the Impact of User Interface Designs on Expectations About Large Language Models\u2019 Capabilities. <em>Proceedings of the Human Factors and Ergonomics Society Annual Meeting<\/em>, <em>68<\/em>(1), 155\u2013161. <a href=\"https:\/\/doi.org\/10\/g9d7t9\">https:\/\/doi.org\/10\/g9d7t9<\/a><\/li>\n\n\n\n<li>Salehi, P., Ba, Y., Kim, N., Mosallanezhad, A., Pan, A., Cohen, M. C., Wang, Y., Zhao, J., Bhatti, S., Sung, J., Blasch, E., Mancenido, M. V., &amp; Chiou, E. K. (2024). Towards trustworthy AI-enabled decision support systems: Validation of the Multisource AI Scorecard Table (MAST). <em>Journal of Artificial Intelligence Research<\/em>, <em>80<\/em>, 1311\u20131341. <a href=\"https:\/\/doi.org\/10\/gt64fn\">https:\/\/doi.org\/10\/gt64fn<\/a><\/li>\n\n\n\n<li>Zhao, J., Wang, Y., Mancenido, M. V., Chiou, E. K., &amp; Maciejewski, R. (2024). 
Evaluating the Impact of Uncertainty Visualization on Model Reliance. <em>IEEE Transactions on Visualization and Computer Graphics<\/em>, <em>30<\/em>(7), 4093\u20134107. <a href=\"https:\/\/doi.org\/10\/gscn6b\">https:\/\/doi.org\/10\/gscn6b<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large is-style-uds-figure\"><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"792\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot-1500x792.png\" alt=\"\" class=\"wp-image-973\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot-1500x792.png 1500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot-500x264.png 500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot-1000x528.png 1000w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot-1536x811.png 1536w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/READIT-HM-Screenshot.png 1920w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/figure>\n\n\n\n<p><sup><em>Image description: The Reporting Assistant for Defense and Intelligence Tasks (READIT) testbed interface shows topic clusters, a topic similarity matrix, a topic timeline, document summaries, and topic filtering<\/em>.<\/sup><\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-style-plain\" style=\"margin-top:var(--wp--preset--spacing--uds-size-1);margin-right:var(--wp--preset--spacing--uds-size-1);margin-bottom:var(--wp--preset--spacing--uds-size-1);margin-left:var(--wp--preset--spacing--uds-size-1)\"><img loading=\"lazy\" decoding=\"async\" 
width=\"1500\" height=\"617\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-1500x617.png\" alt=\"\" class=\"wp-image-974\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-1500x617.png 1500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-500x206.png 500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-1000x411.png 1000w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-1536x631.png 1536w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/MASTOPIA-SysArchitecture-Sept2024-2048x842.png 2048w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><figcaption class=\"wp-element-caption\"><em><sub>Image description: A system architecture diagram for the MAST Optimized Prompting for Intelligence &amp; Analysis (MASTOPIA) testbed shows how we integrated prompt engineering with a large language model (LLM) and retrieval augmented generation (RAG<\/sub><\/em><sub>).<\/sub><\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>Trusting Automation<\/strong><\/h3>\n\n\n\n<p>Trust in automation has emerged as a concern in wide-ranging domains, including automotive research, virtual agents, healthcare, and military applications. Past research focused on appropriately calibrated trust, reliance and compliance, in line with system capabilities. 
As our relationship with automation shifts from supervisory to collaborative, understanding trust in automation is important in situations where cooperation between human-agent team members is needed. In such cases, distinguishing appropriate distrust from inappropriate suspicion depends on social affordances, such as signals of a shared purpose, rather than narrower perceptions of reliability or dependability. Recent projects study trust in virtual humans and automated agents in the domains of education, healthcare, and national security.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Goldshtein, M., Schroeder, N. L., &amp; Chiou, E. K. (2025). The role of learner trust in generative artificially intelligent learning environments. <em>Journal of Engineering Education<\/em>, <em>114<\/em>(2), e70000. <a href=\"https:\/\/doi.org\/10\/g9fm68\">https:\/\/doi.org\/10\/g9fm68<\/a><\/li>\n\n\n\n<li>Alsaid, A., Li, M., Chiou, E. K., &amp; Lee, J. D. (2023). Measuring trust: A text analysis approach to compare, contrast, and select trust questionnaires. <em>Frontiers in Psychology<\/em>, <em>14<\/em>, 1192020. <a href=\"https:\/\/doi.org\/10\/gs53sq\">https:\/\/doi.org\/10\/gs53sq<\/a><\/li>\n\n\n\n<li>Chiou, E. K., &amp; Lee, J. D. (2021). Trusting automation: Designing for responsivity and resilience. <em>Human Factors<\/em>. 
<a href=\"https:\/\/doi.org\/10\/gjvcr2\">https:\/\/doi.org\/10\/gjvcr2<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-style-uds-figure\"><img loading=\"lazy\" decoding=\"async\" width=\"731\" height=\"487\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Screenshot-2025-07-15-at-5.21.16\u202fPM.png\" alt=\"\" class=\"wp-image-970\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Screenshot-2025-07-15-at-5.21.16\u202fPM.png 731w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Screenshot-2025-07-15-at-5.21.16\u202fPM-500x333.png 500w\" sizes=\"auto, (max-width: 731px) 100vw, 731px\" \/><\/figure>\n\n\n\n<p><em><sub>Image description: A diagram that visualizes the relationship between four relational concepts (Situation, Strategy, Semiotics, and Sequence) and how together they can comprise the trusting automation process described in Chiou &amp; Lee, 2021.<\/sub><\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>Accountability in Sociotechnical Systems<\/strong><\/h3>\n\n\n\n<p>Accountability refers to the pressure to attend to more information and to employ multi-dimensional information-processing strategies to identify appropriate responses. The goal of understanding accountability in sociotechnical systems is to balance the benefits of autonomy with the benefits of control in system design and performance. This research program has implications for the future of work and human empowerment. 
Recent projects study accountability in increasingly automated or proceduralized environments in the domains of homeland security and healthcare.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Mueller, B., &amp; Chiou, E. K. (2025). A work system team perspective for AI in higher education. <em>Communication Education<\/em>, <em>74<\/em>(1), 84\u2013103. <a href=\"https:\/\/doi.org\/10\/g9fm7b\">https:\/\/doi.org\/10\/g9fm7b<\/a><\/li>\n\n\n\n<li>Salehi, P., &amp; Chiou, E. K. (2020). Considering a meso-ergonomic factor: Can accountability reduce errors? <em>Proceedings of the 64th Human Factors and Ergonomics Society Annual Meeting<\/em>, 288\u2013292. <a href=\"https:\/\/doi.org\/10\/gjvcr3\">https:\/\/doi.org\/10\/gjvcr3<\/a><\/li>\n\n\n\n<li>Salehi, P., Chiou, E. K., &amp; Wilkins, A. (2018). Human-agent interactions: Does accountability matter in interactive control automation? <em>Proceedings of the 62nd Human Factors and Ergonomics Society Annual Meeting<\/em>, 1643\u20131647. <a href=\"https:\/\/doi.org\/10\/ggwkj7\">https:\/\/doi.org\/10\/ggwkj7<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-style-uds-figure\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"405\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/2019_AHFE_Chiou.png\" alt=\"\" class=\"wp-image-647\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/2019_AHFE_Chiou.png 720w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2019\/08\/2019_AHFE_Chiou-500x281.png 500w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><\/figure>\n\n\n\n<p><em><sub>Image description: A figure that shows two accountability conditions, a condition in which study participants are told that their actions are unable to be logged by a computer system, and a condition in which 
study participants are told that they will have to justify their actions.<\/sub><\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Past Projects<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-resized is-style-plain\"><img loading=\"lazy\" decoding=\"async\" width=\"1311\" height=\"1312\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Affinity-Diagram-edited.jpg\" alt=\"\" class=\"wp-image-966\" style=\"aspect-ratio:1.5;object-fit:cover;width:563px;height:auto\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Affinity-Diagram-edited.jpg 1311w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Affinity-Diagram-edited-500x500.jpg 500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Affinity-Diagram-edited-150x150.jpg 150w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Affinity-Diagram-edited-1000x1000.jpg 1000w\" sizes=\"auto, (max-width: 1311px) 100vw, 1311px\" \/><figcaption class=\"wp-element-caption\"><sub><em>Image description: Photo of an affinity diagram comprised of colorful, annotated sticky notes on a white wall.<\/em><\/sub><\/figcaption><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>Contextual Design in Healthcare and Transportation&nbsp;<\/strong><\/h3>\n\n\n\n<p>With the&nbsp;<a href=\"http:\/\/csl.engr.wisc.edu\/\">Cognitive Systems Laboratory<\/a>&nbsp;(PI John D. 
Lee), Contextual Design methods were used to develop information management and decision-making devices that aid older adults in daily activities such as medication management and transportation. Both projects were conducted in collaboration with&nbsp;<a href=\"https:\/\/chess.wisc.edu\/chess\/home\/home.aspx\">The Center for Health Enhancement Systems Studies<\/a>.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Payyanadan, R. P., Gibson, M., Chiou, E., Ghazizadeh, M., &amp; Lee, J. D. (2017). Contextual Design for driving: Developing a trip-planning tool for older adults. <em>Transportation Research Part F: Traffic Psychology and Behaviour<\/em>, <em>46<\/em>, 462\u2013476. <a href=\"https:\/\/doi.org\/10\/gbjrrn\">https:\/\/doi.org\/10\/gbjrrn<\/a><\/li>\n\n\n\n<li>Chiou, E., Venkatraman, V., Larson, K., Li, Y., Gibson, M., &amp; Lee, J. D. (2014). Contextual design of a motivated medication management device. <em>Ergonomics in Design: The Quarterly of Human Factors Applications<\/em>, <em>22<\/em>(1), 8\u201315. 
<a href=\"https:\/\/doi.org\/10\/ghzbdw\">https:\/\/doi.org\/10\/ghzbdw<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-resized is-style-plain\"><img loading=\"lazy\" decoding=\"async\" width=\"950\" height=\"695\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Gives-you-a-Doctor.png\" alt=\"\" class=\"wp-image-965\" style=\"width:570px;height:auto\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Gives-you-a-Doctor.png 950w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/Gives-you-a-Doctor-500x366.png 500w\" sizes=\"auto, (max-width: 950px) 100vw, 950px\" \/><figcaption class=\"wp-element-caption\"><em><sub>Image description: Screenshot of a hospital scheduling micro-world&nbsp;environment used to study trust, human performance, and the effects of different algorithmic strategies on human performance.<\/sub><\/em><\/figcaption><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong><strong>Social Exchange Factors in Human-Automation Interaction<\/strong><\/strong><\/h3>\n\n\n\n<p>Social Exchange Theory treats interactions between agents as transactions similar to exchanges of goods. These transactions communicate actions that shape perceptions of an exchange partner and interpretation of the partner\u2019s subsequent actions. This has important implications for teams of people and automated agents in complex work environments, where shared goals may not be attained without first reconciling potentially conflicting local goals. 
In such environments, interaction structures may inadvertently impact interpretation of intent and subsequent actions, affecting system performance. <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Li, J., Dong, S., Chiou, E. K., &amp; Xu, J. (2020). Reciprocity and its neurological correlates in human-agent cooperation. <em>IEEE Transactions on Human-Machine Systems<\/em>, <em>50<\/em>(5), 384\u2013394. <a href=\"https:\/\/doi.org\/10\/ghzbvt\">https:\/\/doi.org\/10\/ghzbvt<\/a><\/li>\n\n\n\n<li>Chiou, E. K., Lee, J. D., &amp; Su, T. (2019). Negotiated and reciprocal exchange structures in human-agent cooperation. <em>Computers in Human Behavior<\/em>, <em>90<\/em>, 288\u2013297. <a href=\"https:\/\/doi.org\/10\/ggkj8f\">https:\/\/doi.org\/10\/ggkj8f<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-resized is-style-plain\"><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"801\" src=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/image-from-rawpixel-id-9657895-jpeg.jpg\" alt=\"\" class=\"wp-image-969\" style=\"aspect-ratio:1.5;object-fit:cover;width:571px;height:auto\" srcset=\"https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/image-from-rawpixel-id-9657895-jpeg.jpg 1200w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/image-from-rawpixel-id-9657895-jpeg-500x334.jpg 500w, https:\/\/labs.engineering.asu.edu\/adapt\/wp-content\/uploads\/sites\/192\/2025\/07\/image-from-rawpixel-id-9657895-jpeg-1000x668.jpg 1000w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption class=\"wp-element-caption\"><em><sub>Image description: Photo of the inside of a health clinic patient examination room shows 
an exam chair, wash station, cabinets, and stool. Original public domain image from Flickr. <\/sub><\/em><\/figcaption><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\"><strong>Ethnographic Field Studies in Healthcare<\/strong><\/h3>\n\n\n\n<p class=\"is-style-default\" style=\"margin-right:0;margin-left:0\">With the Human Computer Interaction (HCI) Lab at UW-Madison (PI: Enid Montague), projects related to consumer health IT, trust in smart medical devices, and electronic health records were conducted. In the summer of 2012, a field research project with&nbsp;<a href=\"https:\/\/med.nyu.edu\/departments-institutes\/population-health\/\">NYU collaborators<\/a>&nbsp;explored design guidelines for a shared patient-provider decision aid. <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Asan, O., Chiou, E., &amp; Montague, E. (2015). Quantitative ethnographic study of physician workflow and interactions with electronic health record systems. <em>International Journal of Industrial Ergonomics<\/em>, <em>49<\/em>, 124\u2013130. <a href=\"https:\/\/doi.org\/10\/ghzbbq\">https:\/\/doi.org\/10\/ghzbbq<\/a><\/li>\n\n\n\n<li>Hajizadeh, N., Figueroa, R. E. P., Uhler, L. M., Chiou, E., Perchonok, J. E., &amp; Montague, E. (2014). Identifying design considerations for a shared decision aid for use at the point of outpatient clinical care: An ethnographic study at an inner city clinic. <em>Journal of Participatory Medicine<\/em>, <em>5<\/em>, 12.<\/li>\n\n\n\n<li>Montague, E., Xu, J., &amp; Chiou, E. (2014). Shared experiences of technology and trust: An experimental study of physiological compliance between active and passive users in technology-mediated collaborative encounters. <em>IEEE Transactions on Human-Machine Systems<\/em>, <em>44<\/em>(5), 614\u2013624. <a href=\"https:\/\/doi.org\/10\/f6hs5c\">https:\/\/doi.org\/10\/f6hs5c<\/a><\/li>\n\n\n\n<li>Montague, E., Asan, O., &amp; Chiou, E. 
(2013). Organizational and technological correlates of nurses\u2019 trust in a smart intravenous pump. <em>CIN: Computers, Informatics, Nursing<\/em>, <em>31<\/em>(3), 142\u2013149. <a href=\"https:\/\/doi.org\/10\/f4r4gm\">https:\/\/doi.org\/10\/f4r4gm<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p class=\"mb-2\">We conduct laboratory and field-based research, apply systems thinking to human-automation integration problems, assess performance impacts between human-machine dyads, and use mixed methods to improve technology integration in high-criticality work settings. Advances in automation have led to increasingly capable machines, from adaptive algorithms to embodied social agents. Instead of operating remotely or autonomously in well-controlled&#8230;<\/p>\n","protected":false},"author":338,"featured_media":0,"parent":0,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":"[]"},"class_list":["post-891","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/pages\/891","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/users\/338"}],"replies":[{"embeddable":true,"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/comments?post=891"}],"version-history":[{"count":0,"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/pages\/891\/revisions"}],"wp:attachment":[{"href":"https:\/\/labs.engineering.asu.edu\/adapt\/wp-json\/wp\/v2\/media?parent=891"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}