Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence
Artificial empathy raises a variety of philosophical, theoretical, and applied questions.
People often communicate and make decisions based on inferences about each other's internal states (e.g., emotional, cognitive, and physical states), which are in turn based on signals the person emits, such as facial expressions, body gestures, voice, and words. Broadly speaking, artificial empathy focuses on developing non-human models that achieve similar objectives using similar data.
Artificial empathy has been applied in various research disciplines, including artificial intelligence and business. Two main streams of research have emerged in this domain.
Research on affective computing, such as emotional speech recognition and facial expression detection, falls within the first stream. Contexts that have been studied include oral interviews,[10] call centers,[11] human-computer interaction,[12] sales pitches,[13] and financial reporting.[14]
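As a toy illustration of this first stream, an emotion recognizer can map acoustic features of speech to the nearest emotion prototype. The features (mean pitch, mean energy) and prototype values below are hypothetical placeholders; real affective-computing systems learn such representations from labeled speech corpora rather than hand-set tables.

```python
# Toy nearest-centroid emotion classifier over two acoustic features:
# (mean pitch in Hz, mean energy). All prototype values are illustrative only.
import math

PROTOTYPES = {
    "neutral": (120.0, 0.3),
    "angry":   (220.0, 0.9),
    "sad":     (100.0, 0.2),
}

def classify_emotion(pitch_hz: float, energy: float) -> str:
    """Return the emotion whose prototype is closest in feature space."""
    def dist(proto):
        p, e = proto
        # Scale pitch by 100 Hz so both features contribute comparably.
        return math.hypot((pitch_hz - p) / 100.0, energy - e)
    return min(PROTOTYPES, key=lambda emo: dist(PROTOTYPES[emo]))

print(classify_emotion(230.0, 0.85))  # high pitch, high energy -> "angry"
```

Production systems replace the two hand-picked features with dozens of spectral and prosodic features and the fixed prototypes with trained statistical models, but the pipeline shape (extract features, compare against learned emotion representations) is the same.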
The second stream of artificial empathy has been researched more in marketing contexts, such as advertising,[15] branding,[16] customer reviews,[17] in-store recommendation systems,[18] movies,[19] and online dating.[20]
With the increasing volume of visual, audio, and text data in commerce, many business applications of artificial empathy have followed. For example, Affectiva[21] analyzes viewers' facial expressions from video recordings as they watch video advertisements in order to optimize the content design of those ads. Hiring-intelligence software such as HireVue[22] and BarRaiser[23] helps firms make recruitment decisions by analyzing audio and video information from candidates' video interviews. Lapetus Solutions[24] develops models that estimate an individual's longevity, health status, and disease susceptibility from a face photograph; this technology has been applied in the insurance industry.[25]
Although artificial intelligence cannot yet replace social workers, the technology has been deployed in that field. Florida State University published a study on the use of artificial intelligence in the human services field.[26] The researchers used computer algorithms to analyze health records for combinations of risk factors that could predict a future suicide attempt. The article reports, "machine learning—a future frontier for artificial intelligence—can predict with 80% to 90% accuracy whether someone will attempt suicide as far off as two years into the future. The algorithms become even more accurate as a person's suicide attempt gets closer. For example, the accuracy climbs to 92% one week before a suicide attempt when artificial intelligence focuses on general hospital patients".
Such algorithms can support social workers. Social work operates on a cycle of engagement, assessment, intervention, and evaluation with clients. Earlier assessment of suicide risk can lead to earlier intervention and prevention, thereby saving lives. The system would learn, analyze, and detect risk factors, alerting the clinician to a patient's suicide risk score (analogous to a cardiovascular risk score); social workers could then step in for further assessment and preventive intervention.
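The alerting workflow described above can be sketched as a simple scoring routine. The risk factors, weights, and threshold here are hypothetical placeholders for illustration only; they are not the features or model used in the Florida State study, and real clinical models are trained and validated on large record datasets.

```python
# Hypothetical sketch of a clinical risk-scoring alert, in the spirit of the
# workflow described above. Factor names and weights are illustrative only,
# not clinically validated.
import math

WEIGHTS = {
    "prior_attempt": 2.0,
    "recent_hospitalization": 1.2,
    "substance_use": 0.8,
    "social_isolation": 0.6,
}
BIAS = -3.0  # baseline log-odds for a patient with no recorded factors

def risk_score(record: dict) -> float:
    """Map a patient's binary risk factors to a probability-like score."""
    logit = BIAS + sum(w for f, w in WEIGHTS.items() if record.get(f))
    return 1.0 / (1.0 + math.exp(-logit))  # logistic link

def should_alert(record: dict, threshold: float = 0.5) -> bool:
    """Flag the patient for clinician follow-up when the score is high."""
    return risk_score(record) >= threshold

patient = {"prior_attempt": True, "recent_hospitalization": True}
print(round(risk_score(patient), 3))  # ~0.55 with these illustrative weights
print(should_alert(patient))
```

The design mirrors the clinical analogy in the text: the model condenses many recorded factors into a single score, and a threshold decides when a human clinician is alerted, keeping the final assessment and intervention with the social worker.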
Yalçın, Ö. N.; DiPaola, S. (2020). "Modeling empathy: building a link between affective and cognitive processes". Artificial Intelligence Review. 53: 2983–3006. doi:10.1007/s10462-019-09753-0.
Stein, Jan-Philipp; Ohler, Peter (2017). "Venturing into the uncanny valley of mind—The influence of mind attribution on the acceptance of human-like characters in a virtual reality setting". Cognition. 160: 43–50. doi:10.1016/j.cognition.2016.12.010. ISSN 0010-0277. PMID 28043026. S2CID 2944145.
Baumgaertner, Bert; Weiss, Astrid (26 February 2014). "Do Emotions Matter in the Ethics of Human-Robot Interaction?" (PDF). Artificial Empathy and Companion Robots. European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 288146 ("HOBBIT"); Austrian Science Foundation (FWF) under grant agreement T623-N23 ("V4HRC"). http://doc.gold.ac.uk/aisb50/AISB50-S19/AISB50-S19-Baumgaertner-paper.pdf
"AI has better 'bedside manner' than some doctors, study finds". The Guardian. 2023-04-28. ISSN 0261-3077. Retrieved 2025-02-17. https://www.theguardian.com/technology/2023/apr/28/ai-has-better-bedside-manner-than-some-doctors-study-finds
Asada, Minoru (14 February 2014). "Affective Developmental Robotics" (PDF). How Can We Design the Development of Artificial Empathy?. Osaka, Japan: Dept. of Adaptive Machine Systems, Graduate School of Engineering, Osaka University. http://www.macs.hw.ac.uk/~kl360/HRI2014W/submission/S7.pdf
Xiao, L.; Kim, H. J.; Ding, M. (2013). "An introduction to audio and visual research and applications in marketing". Review of Marketing Research. 10: 244. doi:10.1108/S1548-6435(2013)0000010012.
Lim, Angelica; Okuno, Hiroshi G. (2015). "A Recipe for Empathy". International Journal of Social Robotics. 7 (1): 35–49. doi:10.1007/s12369-014-0262-y. ISSN 1875-4805.
Hansen, J. H.; Kim, W.; Rahurkar, M.; Ruzanski, E.; Meyerhoff, J. (2011). "Robust emotional stressed speech detection using weighted frequency subbands". EURASIP Journal on Advances in Signal Processing. 2011: 1–10.
Lee, C. M.; Narayanan, S. S. (2005). "Toward detecting emotions in spoken dialogs". IEEE Transactions on Speech and Audio Processing. 13 (2): 293–303.
Batliner, A.; Hacker, C.; Steidl, S.; Nöth, E.; D'Arcy, S.; Russell, M. J.; Wong, M. (April 2004). "'You Stupid Tin Box'—Children Interacting with the AIBO Robot: A Cross-linguistic Emotional Speech Corpus". In Proceedings of LREC.
Allmon, D. E.; Grant, J. (1990). "Real estate sales agents and the code of ethics: A voice stress analysis". Journal of Business Ethics. 9 (10): 807–812.
Hobson, J. L.; Mayew, W. J.; Venkatachalam, M. (2012). "Analyzing speech to detect financial misreporting". Journal of Accounting Research. 50 (2): 349–392.
Xiao, L.; Ding, M. (2014). "Just the faces: Exploring the effects of facial features in print advertising". Marketing Science. 33 (3): 338–352.
Netzer, O.; Feldman, R.; Goldenberg, J.; Fresko, M. (2012). "Mine your own business: Market-structure surveillance through text mining". Marketing Science. 31 (3): 521–543.
Tirunillai, S.; Tellis, G. J. (2014). "Mining marketing meaning from online chatter: Strategic brand analysis of big data using latent Dirichlet allocation". Journal of Marketing Research. 51 (4): 463–479.
Büschken, J.; Allenby, G. M. (2016). "Sentence-based text analysis for customer reviews". Marketing Science. 35 (6): 953–975.
Lu, S.; Xiao, L.; Ding, M. (2016). "A video-based automated recommender (VAR) system for garments". Marketing Science. 35 (3): 484–510.
Liu, X.; Shi, S. W.; Teixeira, T.; Wedel, M. (2018). "Video content marketing: The making of clips". Journal of Marketing. 82 (4): 86–101.
Zhou, Yinghui; Lu, Shasha; Ding, Min (2020). "Contour-as-Face (CaF) Framework: A Method to Preserve Privacy and Perception". Journal of Marketing Research. Forthcoming.
"Affectiva". https://www.affectiva.com/
"Pre-employment Testing & Video Interviewing Platform". https://www.hirevue.com/
"Interview Intelligence Software". https://www.barraiser.com/
"Lapetus Solutions, Inc". https://www.lapetussolutions.com/
"CHRONOS - Get Started". https://demo.lapetussolutions.com/light/home
Patronis, Amy Farnum (2017-02-28). "How artificial intelligence will save lives in the 21st century". Florida State University News. Retrieved 2022-06-28. https://news.fsu.edu/news/health-medicine/2017/02/28/how-artificial-intelligence-save-lives-21st-century/