Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed human–machine interaction (HMI), man–machine interaction (MMI) or computer–human interaction (CHI). Desktop applications, web browsers, handheld computers, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today.[4] Voice user interfaces (VUIs) are used for speech recognition and synthesis systems, and the emerging multimodal and gestural user interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms.
The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".[5] A key aspect of HCI is user satisfaction, also referred to as end-user computing satisfaction. The ACM definition goes on to say:
"Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant."6
Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success.
Poorly designed human–machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a partial nuclear meltdown, in which investigations concluded that the design of the human–machine interface was at least partly responsible for the disaster.[7][8][9] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were claimed to be superior in basic human–machine interaction, pilots had already internalized the "standard" layout, and the conceptually good idea had unintended results.
Main article: User interface
A human–computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and the computer is defined as the loop of interaction, which has several aspects, including:
Human–computer interaction involves the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks to improve human–computer interaction by improving the usability of computer interfaces.[10] How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and is not, a desirable property of computer interfaces are increasingly debated.[11][12]
Much of the research in the field of human–computer interaction takes an interest in:
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.
The following experimental design principles are considered when evaluating a current user interface or designing a new one:
The iterative design process is repeated until a sensible, user-friendly interface is created.[15]
A number of diverse methodologies outlining techniques for human–computer interaction design have emerged since the rise of the field in the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces. Modern models tend to center on constant feedback and conversation between users, designers, and engineers, and push for technical systems to be wrapped around the kinds of experiences users want to have, rather than wrapping user experience around a completed system.
Displays are human-made artifacts designed to support the perception of relevant system variables and facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g., navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information a system generates and displays; therefore, the information must be displayed according to principles to support perception, situation awareness, and understanding.
Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering.[19]
These human perception and information processing principles can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved by utilizing these principles.
Certain principles may not apply to different displays or situations. Some principles may also appear to be conflicting, and there is no simple solution to say that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.[20]
1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed are not discernible, the operator cannot use them effectively.
2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable on the basis of a single sensory variable (e.g., color, size, loudness), since people can reliably distinguish only a limited number of levels on any one sensory dimension.
3. Top-down processing. Signals are perceived and interpreted in accordance with what is expected based on a user's past experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to ensure that it is understood correctly.
4. Redundancy gain. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g., color and shape, voice and print), as redundancy does not imply repetition. A traffic light is a good example of redundancy: color and position are redundant cues (see the sketch after this list).
5. Similarity causes confusion: use distinguishable elements. Signals that appear similar are likely to be confused. The ratio of similar features to different features determines how similar two signals are; for example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed, and dissimilar features should be highlighted.
6. Principle of pictorial realism. A display should look like the variable that it represents (e.g., the high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like they would in the represented environment.
7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how it actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.
8. Minimizing information access cost or interaction cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by allowing frequently accessed sources to be located at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.
9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g., proximity, linkage by common colors, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.
10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.
11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or other display can aid the user by easing the demand on memory. However, memory use may sometimes benefit the user by eliminating the need to reference some knowledge in the world (e.g., an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.
12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the user's mental resources. This will allow the user to focus on current conditions and to consider possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.
13. Principle of consistency. Old habits from other displays will easily transfer to support the processing of new displays if they are designed consistently. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.
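As a concrete illustration of the redundancy-gain principle (item 4 above), the following sketch renders a hypothetical alarm indicator that encodes the same state in color, shape, and text, so that no single sensory channel carries the message alone. The state names, encodings, and element IDs are illustrative assumptions, not part of any particular system.

```typescript
// Hypothetical alarm states; names are illustrative only.
type AlarmState = "normal" | "caution" | "warning";

// Redundant encodings of each state: color, shape glyph, and a text label.
// Together they reinforce one another (redundancy gain) while remaining distinguishable.
const ENCODINGS: Record<AlarmState, { color: string; glyph: string; label: string }> = {
  normal:  { color: "#2e7d32", glyph: "●", label: "NORMAL"  },
  caution: { color: "#f9a825", glyph: "▲", label: "CAUTION" },
  warning: { color: "#c62828", glyph: "■", label: "WARNING" },
};

// Render the indicator into a container element (assumed to exist in the page).
function renderAlarm(container: HTMLElement, state: AlarmState): void {
  const { color, glyph, label } = ENCODINGS[state];
  container.style.color = color;                 // channel 1: color
  container.textContent = `${glyph} ${label}`;   // channels 2 and 3: shape and text
  container.setAttribute("aria-label", `Alarm state: ${label}`); // legible to screen readers as well
}

// Usage (assumes an element <div id="alarm"> exists):
// renderAlarm(document.getElementById("alarm")!, "caution");
```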
Topics in human–computer interaction include the following:
Human-AI interaction explores how users engage with artificial intelligence systems, particularly focusing on usability, trust, and interpretability. The research mainly aims to design AI-driven interfaces that are transparent, explainable, and ethically responsible.[21] Studies highlight the importance of explainable AI (XAI) and human-in-the-loop decision-making, ensuring that AI outputs are understandable and trustworthy.[22] Researchers also develop design guidelines for human-AI interaction, improving the collaboration between users and AI systems.[23]
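A minimal sketch of the human-in-the-loop pattern described above, assuming a generic suggestion-producing model: the AI's output is always presented together with a plain-language explanation, and the human must explicitly accept or override it before anything happens. The interfaces and function names are hypothetical.

```typescript
// Hypothetical AI suggestion with an attached explanation (XAI-style output).
interface Suggestion<T> {
  value: T;
  confidence: number;  // 0..1, as reported by the model
  explanation: string; // plain-language rationale shown to the user
}

// The user's decision after reviewing the suggestion.
interface HumanDecision<T> {
  accepted: boolean;
  finalValue: T;
}

// Human-in-the-loop step: the system never acts on the suggestion directly;
// it presents value + explanation and waits for the user's confirmation or override.
async function decideWithHuman<T>(
  suggestion: Suggestion<T>,
  askUser: (s: Suggestion<T>) => Promise<HumanDecision<T>>
): Promise<T> {
  const decision = await askUser(suggestion);
  return decision.accepted ? suggestion.value : decision.finalValue;
}
```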
Main article: Augmented reality
Augmented reality (AR) integrates digital content with the real world. It enhances human perception and interaction with physical environments. AR research mainly focuses on adaptive user interfaces, multimodal input techniques, and real-world object interaction.[24] Advances in wearable AR technology improve usability, enabling more natural interaction with AR applications.[25]
Main article: Virtual reality
Virtual reality (VR) creates a fully immersive digital environment, allowing users to interact with computer-generated worlds through sensory input devices. Research focuses on user presence, interaction techniques, and cognitive effects of immersion.[26] A key area of study is the impact of VR on cognitive load and user adaptability, influencing how users process information in virtual spaces.[27]
Main article: Mixed reality
Mixed reality (MR) blends elements of both augmented reality (AR) and virtual reality (VR). It enables real-time interaction with both physical and digital objects. HCI research in MR concentrates on spatial computing, real-world object interaction, and context-aware adaptive interfaces.[28] MR technologies are increasingly applied in education, training simulations, and healthcare, enhancing learning outcomes and user engagement.[29]
Main article: Extended reality
Extended reality (XR) is an umbrella term encompassing AR, VR, and MR, offering a continuum between real and virtual environments. Research investigates user adaptability, interaction paradigms, and ethical implications of immersive technologies.[30] Recent studies highlight how AI-driven personalization and adaptive interfaces improve the usability of XR applications.[31]
Main article: Accessibility
Accessibility in human–computer interaction (HCI) focuses on designing inclusive digital experiences, ensuring usability for people with diverse abilities. Research in this area covers assistive technologies, adaptive interfaces, and universal design principles.[32] Studies indicate that accessible design not only benefits people with disabilities but also enhances usability for all users.[33]
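To make "accessible design benefits all users" concrete, here is a small sketch, following common universal-design practice, of a control with a visible text label, an explicit name for assistive technologies, and keyboard operability. It is an illustrative fragment under those assumptions, not a prescribed implementation.

```typescript
// Create a control that is usable by mouse, keyboard, and screen reader alike.
function createAccessibleButton(labelText: string, onActivate: () => void): HTMLButtonElement {
  const button = document.createElement("button"); // native <button> is keyboard-focusable by default
  button.textContent = labelText;                  // visible label benefits every user
  button.setAttribute("aria-label", labelText);    // explicit name for assistive technologies
  button.addEventListener("click", onActivate);    // "click" also fires for Enter/Space on a button
  return button;
}

// Usage (hypothetical save handler):
// document.body.appendChild(createAccessibleButton("Save document", () => console.log("saved")));
```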
Main article: Social computing
Social computing concerns the interactive and collaborative behavior that arises between people and technology. In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis, since social computing technologies now include blogs, email, social networking, instant messaging, and many others. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name.[34] Other research finds that individuals perceive their interactions with computers more negatively than their interactions with humans, despite behaving the same way towards these machines.[35]
In human–computer interaction, a semantic gap usually exists between how the human and the computer understand each other's behavior. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.[36]
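As a toy illustration of how an ontology can help bridge the semantic gap, the sketch below maps several user-facing terms onto a single formal concept so that different human phrasings are interpreted consistently by the system. The concepts and synonyms are invented for the example and stand in for a real, formally specified ontology.

```typescript
// A tiny, hand-built "ontology": formal concepts plus the informal terms
// (synonyms) users may employ for them. Concepts and terms are invented.
const ONTOLOGY: Record<string, string[]> = {
  DeleteFile: ["delete", "remove", "erase", "trash"],
  RenameFile: ["rename", "retitle", "change name"],
};

// Resolve a user's phrase to a formal concept, reducing the ambiguity
// between what the human says and what the system understands.
function resolveConcept(userTerm: string): string | undefined {
  const needle = userTerm.trim().toLowerCase();
  for (const [concept, synonyms] of Object.entries(ONTOLOGY)) {
    if (synonyms.includes(needle)) return concept;
  }
  return undefined; // no mapping found: the gap remains and the system should ask the user
}

// resolveConcept("erase")    -> "DeleteFile"
// resolveConcept("compress") -> undefined
```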
Main articles: Affective computing and Emotion recognition
In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The potential for detecting human emotions in an automated and digital fashion lies in improvements to the effectiveness of human–computer interaction. The influence of emotions in human–computer interaction has been studied in fields such as financial decision-making using ECG and organizational knowledge sharing using eye-tracking and face readers as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential to detect human emotions and that information systems can incorporate the data obtained from affect-detection channels to improve decision models.
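A minimal sketch of the idea that an information system can fold an affect-detection channel into its decision model: a hypothetical normalized stress score (as might be derived from ECG or eye-tracking) adjusts how much information the interface presents at once. The thresholds, signal names, and adaptation policy are assumptions for illustration only.

```typescript
// Hypothetical normalized affect signal in [0, 1], e.g. derived from ECG or eye-tracking.
type StressScore = number;

interface UiDecision {
  itemsPerScreen: number;       // how much information to show at once
  requireConfirmation: boolean; // ask before consequential actions when stress is high
}

// Decision model that incorporates the affect-detection channel:
// under high detected stress, reduce information load and add a confirmation step.
function adaptInterface(stress: StressScore): UiDecision {
  if (stress > 0.7) return { itemsPerScreen: 3, requireConfirmation: true };
  if (stress > 0.4) return { itemsPerScreen: 6, requireConfirmation: true };
  return { itemsPerScreen: 12, requireConfirmation: false };
}
```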
Main article: Brain–computer interface
A brain–computer interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[37]
Security interaction (HCISec) is the study of the interaction between humans and computers specifically as it pertains to information security. Its aim, in plain terms, is to improve the usability of security features in end-user applications.
Unlike HCI, which has roots in the early days of Xerox PARC during the 1970s, HCISec is a comparatively nascent field of study. Interest in the topic tracks that in Internet security, which has become an area of broad public concern only in recent years.
When security features exhibit poor usability, the following are common reasons:
Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by Douglas Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."[38]
How humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include:
As of 2010, the future of HCI was expected[39] to include the following characteristics:
One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name CHI (pronounced kai, or khai). CHI is organized by the ACM Special Interest Group on Computer–Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal.
There are also dozens of other smaller, regional, or specialized HCI-related conferences held around the world each year, including:[40]
Carlisle, James H. (June 1976). "Evaluating the impact of office automation on top management communication". Proceedings of the June 7–10, 1976, National Computer Conference and Exposition (AFIPS '76). pp. 611–616. doi:10.1145/1499799.1499885. S2CID 18471644. Use of 'human–computer interaction' appears in references.
Suchman, Lucy (1987). Plans and Situated Action: The Problem of Human-Machine Communication. New York; Cambridge: Cambridge University Press. ISBN 9780521337397. Retrieved 7 March 2015.
Dourish, Paul (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press. ISBN 9780262541787.
Hewett; Baecker; Card; Carey; Gasen; Mantei; Perlman; Strong; Verplank. "ACM SIGCHI Curricula for Human–Computer Interaction". ACM SIGCHI. Archived from the original on 17 August 2014. Retrieved 15 July 2014. https://web.archive.org/web/20140817165957/http://old.sigchi.org/cdg/cdg2.html#2_1
Ergoweb. "What is Cognitive Ergonomics?". Ergoweb.com. Archived from the original on September 28, 2011. Retrieved August 29, 2011. https://web.archive.org/web/20110928150026/http://www.ergoweb.com/news/detail.cfm?id=352
"NRC: Backgrounder on the Three Mile Island Accident". Nrc.gov. Archived from the original on August 24, 2019. Retrieved August 29, 2011. https://www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle.html
"Report of the President's Commission on the Accident at Three Mile Island" (PDF). 2019-03-14. Archived from the original (PDF) on 2011-04-09. Retrieved 2011-08-17. https://web.archive.org/web/20110409064628/http://www.threemileisland.org/downloads/188.pdf
Grudin, Jonathan (1992). "Utility and usability: research issues and development contexts". Interacting with Computers. 4 (2): 209–217. doi:10.1016/0953-5438(92)90005-z.
Chalmers, Matthew; Galani, Areti (2004). "Seamful interweaving". Proceedings of the 5th conference on Designing interactive systems: Processes, practices, methods, and techniques (PDF). pp. 243–252. doi:10.1145/1013115.1013149. ISBN 978-1581137873. S2CID 12500442. Archived (PDF) from the original on 2020-08-01. Retrieved 2019-10-04.
Barkhuus, Louise; Polichar, Valerie E. (2011). "Empowerment through seamfulness: smart phones in everyday life". Personal and Ubiquitous Computing. 15 (6): 629–639. doi:10.1007/s00779-010-0342-4.
Rogers, Yvonne (2012). "HCI Theory: Classical, Modern, and Contemporary". Synthesis Lectures on Human-Centered Informatics. 5 (2): 1–129. doi:10.2200/S00418ED1V01Y201205HCI014.
Sengers, Phoebe; Boehner, Kirsten; David, Shay; Kaye, Joseph (2005). "Reflective design". Proceedings of the 4th decennial conference on Critical computing: Between sense and sensibility. Vol. 5. pp. 49–58. doi:10.1145/1094562.1094569. ISBN 978-1595932037. S2CID 9029682.
Green, Paul (2008). Iterative Design. Lecture presented in Industrial and Operations Engineering 436 (Human Factors in Computer Systems), University of Michigan, Ann Arbor, MI, February 4, 2008.
Kaptelinin, Victor (2012). "Activity Theory". In Soegaard, Mads; Dam, Rikke Friis (eds.). Encyclopedia of Human–Computer Interaction. The Interaction-Design.org Foundation. Available online at http://www.interaction-design.org/encyclopedia/activity_theory.html (archived 2012-03-23 at the Wayback Machine).
"The Case for HCI Design Patterns". Archived from the original on 2019-09-28. Retrieved 2019-08-26. https://www.mit.edu/~jtidwell/common_ground_onefile.html
Friedman, B.; Kahn, P. H. Jr.; Borning, A. (2006). "Value Sensitive Design and information systems". Human–Computer Interaction and Management Information Systems: Foundations. New York: M.E. Sharpe. pp. 348–372.
Wickens, Christopher D.; Lee, John D.; Liu, Yili; Gordon Becker, Sallie E. (2004). An Introduction to Human Factors Engineering (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall. pp. 185–193.
Brown, C. Marlin (1998). Human–Computer Interface Design Guidelines. Intellect Books. pp. 2–3.
Shneiderman, Ben (2022). Human-Centered AI. Oxford University Press. ISBN 978-0192845290.
Doshi-Velez, Finale; Kim, Been (2017). "Towards a rigorous science of interpretable machine learning". arXiv preprint arXiv:1702.08608.
Amershi, Saleema (2019). "Guidelines for human-AI interaction". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems: 1–13. doi:10.1145/3290605.3300233.
Azuma, Ronald T. (1997). "A Survey of Augmented Reality". Presence: Teleoperators & Virtual Environments. 6 (4): 355–385. doi:10.1162/pres.1997.6.4.355.
Billinghurst, Mark; Clark, Andrew; Lee, Gun (2015). "A survey of augmented reality". Foundations and Trends in Human-Computer Interaction. 8 (2–3): 73–272.
Slater, Mel (2009). "Place Illusion and Plausibility Can Lead to Realistic Behavior in Immersive Virtual Environments". Philosophical Transactions of the Royal Society B. 364 (1535): 3549–3557. doi:10.1098/rstb.2009.0138.
Cummings, James J.; Bailenson, Jeremy N. (2016). "How immersive is enough? A meta-analysis of the effect of immersive technology on user presence". Media Psychology. 19 (2): 272–309. doi:10.1080/15213269.2015.1015740.
Milgram, Paul (1999). "Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum". SPIE Proceedings on Telemanipulator and Telepresence Technologies. 2351: 282–292. doi:10.1117/12.197321.
Speiginer, Grant (2015). "Mixed reality in education: A review of current and future trends". Educational Technology Research & Development. 63 (6): 855–873. doi:10.1007/s11423-015-9381-7.
Milgram, Paul (1994). "A Taxonomy of Mixed Reality Visual Displays". IEICE Transactions on Information and Systems. 77 (12): 1321–1329.
Buhalis, Dimitrios; Karatay, Natali (2022). "Extended reality (XR) and artificial intelligence (AI) revolutionizing the hospitality industry". Journal of Hospitality & Tourism Research. 46 (3): 489–508. doi:10.1177/10963480211037322.
Lazar, Jonathan (2017). Research Methods in Human-Computer Interaction. Morgan Kaufmann. ISBN 978-0128053904.
Shinohara, Kristen; Wobbrock, Jacob O. (2011). "In the shadow of misperception: Assistive technology use and social interactions". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 705–714. doi:10.1145/1978942.1979044.
Posard, Marek (2014). "Status processes in human–computer interactions: Does gender matter?". Computers in Human Behavior. 37 (37): 189–195. doi:10.1016/j.chb.2014.04.025.
Posard, Marek; Rinderknecht, R. Gordon (2015). "Do people like working with computers more than human beings?". Computers in Human Behavior. 51: 232–238. doi:10.1016/j.chb.2015.04.057.
Dong, Hai; Hussain, Farookh; Chang, Elizabeth (2010). "A human-centered semantic service platform for the digital ecosystems environment". World Wide Web. 13 (1–2): 75–103. doi:10.1007/s11280-009-0081-5. hdl:20.500.11937/29660. S2CID 10746264.
Krucoff, Max O.; Rahimpour, Shervin; Slutzky, Marc W.; Edgerton, V. Reggie; Turner, Dennis A. (2016). "Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation". Frontiers in Neuroscience. 10: 584. doi:10.3389/fnins.2016.00584. PMC 5186786. PMID 28082858.
Fischer, Gerhard (2000). "User Modeling in Human–Computer Interaction". User Modeling and User-Adapted Interaction. 11 (1–2): 65–86. doi:10.1023/A:1011145532042.
Sinha, Gaurav; Shahi, Rahul; Shankar, Mani (2010). "Human–Computer Interaction". In Emerging Trends in Engineering and Technology (ICETET), 2010 3rd International Conference on. IEEE. pp. 1–4.
"Conference Search: hci". www.confsearch.org. Archived from the original on 2009-08-20. Retrieved 2009-05-15. http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=hci&sortMode=1&graphicView=true