Domain adaptation setups are classified in two ways: according to the distribution shift between the domains, and according to the data available from the target domain.
Common distribution shifts include covariate shift, where the input distribution changes but the labeling function does not; prior (label) shift, where the class proportions change; and concept shift, where the relationship between inputs and labels itself changes.[3][4]
Domain adaptation problems typically assume that some data from the target domain is available during training. Problems can be classified according to the type of this available data: unsupervised domain adaptation (only unlabeled target examples), semi-supervised domain adaptation (a few labeled target examples together with many unlabeled ones), and supervised domain adaptation (labeled target examples).[5][6]
Let $X$ be the input space (or description space) and let $Y$ be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) $h : X \to Y$ able to attach a label from $Y$ to an example from $X$. This model is learned from a learning sample $S = \{(x_i, y_i) \in (X \times Y)\}_{i=1}^{m}$.
Usually in supervised learning (without domain adaptation), the examples $(x_i, y_i) \in S$ are assumed to be drawn i.i.d. from a distribution $D_S$ with support $X \times Y$ (unknown and fixed). The objective is then to learn $h$ (from $S$) such that it commits as little error as possible when labeling new examples drawn from $D_S$.
The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions $D_S$ and $D_T$ on $X \times Y$. The domain adaptation task then consists of transferring knowledge from the source domain $D_S$ to the target domain $D_T$. The goal is to learn $h$ (from labeled or unlabeled samples coming from the two domains) such that it commits as little error as possible on the target domain $D_T$.
The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
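This question can be made precise by comparing the errors (risks) of the same hypothesis under the two distributions; a standard formalization, spelled out here for clarity using the notation above:

$$\epsilon_S(h) = \Pr_{(x,y) \sim D_S}\big[h(x) \neq y\big], \qquad \epsilon_T(h) = \Pr_{(x,y) \sim D_T}\big[h(x) \neq y\big].$$

Ordinary supervised learning controls $\epsilon_S(h)$; domain adaptation asks how small $\epsilon_T(h)$ can be made when $h$ is chosen mostly, or entirely, from source data.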
The objective is to reweight the labeled source sample such that it "looks like" the target sample (in terms of the error measure considered).[7][8]
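A minimal sketch of this idea uses a probabilistic domain classifier to approximate the density ratio $p_T(x)/p_S(x)$, rather than the specific estimators of the cited works; the array names X_src, y_src, and X_tgt are illustrative assumptions:

```python
# Importance weighting for covariate shift via a domain classifier.
# X_src, y_src: labeled source data; X_tgt: unlabeled target data (NumPy arrays).
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_weights(X_src, X_tgt):
    # Train a classifier to distinguish target (1) from source (0) examples.
    X = np.vstack([X_src, X_tgt])
    d = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p = clf.predict_proba(X_src)[:, 1]  # P(domain = target | x)
    # p/(1-p), corrected for sample sizes, approximates p_T(x)/p_S(x).
    ratio = len(X_src) / len(X_tgt)
    return np.clip(p / (1.0 - p) * ratio, 0.0, 10.0)  # clip for stability

# The weights then enter any learner that accepts per-example weights, e.g.:
# LogisticRegression().fit(X_src, y_src, sample_weight=estimate_weights(X_src, X_tgt))
```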
A method for adapting consists in iteratively "auto-labeling" the target examples.[9] The principle is simple:
1. a model $h$ is learned from the labeled source examples;
2. $h$ automatically labels some target examples;
3. a new model is learned from the enlarged labeled sample.
Note that there exist other iterative approaches, but they usually need labeled target examples.[10][11] A minimal sketch of the auto-labeling loop is given below.
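The sketch uses scikit-learn and a simple most-confident-first heuristic; the function names and the fixed batch size are illustrative assumptions, not a prescribed algorithm:

```python
# Iterative auto-labeling (self-training) for unsupervised domain adaptation.
# X_src, y_src: labeled source data; X_tgt: unlabeled target data (NumPy arrays).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_src, y_src, X_tgt, rounds=5, batch=50):
    X_train, y_train = X_src, y_src
    remaining = X_tgt
    for _ in range(rounds):
        if len(remaining) == 0:
            break
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        # Pseudo-label the `batch` target examples the model is most sure of.
        conf = model.predict_proba(remaining).max(axis=1)
        idx = np.argsort(-conf)[:batch]
        X_train = np.vstack([X_train, remaining[idx]])
        y_train = np.concatenate([y_train, model.predict(remaining[idx])])
        remaining = np.delete(remaining, idx, axis=0)
    # Final model trained on source data plus pseudo-labeled target data.
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)
```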
The goal is to find or construct a common representation space for the two domains: a space in which the domains are close to each other while good performance on the source labeling task is preserved. This can be achieved through adversarial machine learning techniques, where feature representations of samples from the different domains are encouraged to be indistinguishable.[12][13]
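The mechanism can be sketched with a gradient reversal layer, in the spirit of domain-adversarial training of neural networks (DANN).[12] The PyTorch sketch below is a minimal illustration; the layer sizes and the weighting factor lam are assumptions:

```python
# Adversarial feature alignment: a label head is trained on source features,
# while a domain head, fed through gradient reversal, pushes the shared
# feature extractor to make source and target representations indistinguishable.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reversed gradient reaches the extractor

feature = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # shared extractor
label_head = nn.Linear(64, 2)   # class prediction (source labels only)
domain_head = nn.Linear(64, 2)  # source-vs-target prediction

def loss(x_src, y_src, x_tgt, lam=1.0):
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = feature(x_src), feature(x_tgt)
    task_loss = ce(label_head(f_src), y_src)
    feats = torch.cat([f_src, f_tgt])
    domains = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    adv_loss = ce(domain_head(GradReverse.apply(feats, lam)), domains)
    return task_loss + adv_loss  # minimized jointly by a single optimizer
```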
The goal is to construct a Bayesian hierarchical model $p(n)$, essentially a factorization model for counts $n$, in order to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.[14]
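As an illustration, such a factorization might take a Poisson form in which loadings are shared across domains while scores are domain-specific (a hedged sketch; the cited work uses a more elaborate count model):

$$n^{(d)}_{ij} \sim \operatorname{Poisson}\Big( \sum_{k=1}^{K} \theta^{(d)}_{ik} \, \phi_{kj} \Big),$$

where $\phi$ collects the latent factors, $\theta^{(d)}$ the factor scores of domain $d$, and a subset of the $K$ factors can be restricted to a single domain, yielding the domain-specific components.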
Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades, including SKADA, ADAPT, the Transfer-Learning-Library, and Ke Yan's Domain adaptation toolbox.
Crammer, Koby; Kearns, Michael; Wortman, Jennifer (2008). "Learning from Multiple Sources" (PDF). Journal of Machine Learning Research. 9: 1757–1774. http://www.jmlr.org/papers/volume9/crammer08a/crammer08a.pdf
Sun, Shiliang; Shi, Honglei; Wu, Yuanbin (July 2015). "A survey of multi-source domain adaptation". Information Fusion. 24: 84–92. doi:10.1016/j.inffus.2014.12.003. S2CID 18385140.
Kouw, Wouter M.; Loog, Marco (2019). "An introduction to domain adaptation and transfer learning". arXiv:1812.11806. https://arxiv.org/abs/1812.11806
Farahani, Abolfazl; Voghoei, Sahar; Rasheed, Khaled; Arabnia, Hamid R. (2020). "A Brief Review of Domain Adaptation". arXiv:2010.03978. https://arxiv.org/abs/2010.03978
Stanford Online (2023-04-11). Stanford CS330 Deep Multi-Task & Meta Learning – Domain Adaptation, 2022, Lecture 13. Retrieved 2024-12-23 via YouTube. https://www.youtube.com/watch?v=Uk6MU_PLDMs
Huang, Jiayuan; Smola, Alexander J.; Gretton, Arthur; Borgwardt, Karsten M.; Schölkopf, Bernhard (2006). "Correcting Sample Selection Bias by Unlabeled Data" (PDF). Conference on Neural Information Processing Systems (NIPS). pp. 601–608. http://papers.nips.cc/paper/3075-correcting-sample-selection-bias-by-unlabeled-data.pdf
Shimodaira, Hidetoshi (2000). "Improving predictive inference under covariate shift by weighting the log-likelihood function". Journal of Statistical Planning and Inference. 90 (2): 227–244. doi:10.1016/S0378-3758(00)00115-4. S2CID 9238949.
Gallego, A.J.; Calvo-Zaragoza, J.; Fisher, R.B. (2020). "Incremental Unsupervised Domain-Adversarial Training of Neural Networks" (PDF). IEEE Transactions on Neural Networks and Learning Systems. PP (11): 4864–4878. doi:10.1109/TNNLS.2020.3025954. hdl:20.500.11820/72ba0443-8a7d-4cdd-8212-38682d4f0730. PMID 33027004. S2CID 210164756. https://www.pure.ed.ac.uk/ws/files/172035660/Incremental_Unsupervised_GALLEGO_DOA18092020_AFV.pdf
Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: Semi-supervised Domain Adaptation for Room Occupancy Prediction Using CO2 Sensor Data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5.
Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018-12-01). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks. 14 (3–4): 21:1–21:28. doi:10.1145/3217214. S2CID 54066723.
Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor (2016). "Domain-Adversarial Training of Neural Networks" (PDF). Journal of Machine Learning Research. 17: 1–35. http://jmlr.org/papers/volume17/15-239/15-239.pdf
Wulfmeier, Markus; Bewley, Alex; Posner, Ingmar (2017). "Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation". arXiv:1703.01461 [cs.RO].
Hajiramezanali, Ehsan; Dadaneh, Siamak Zamani; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". arXiv:1810.09433 [stat.ML].
Gnassounou, Théo; Kachaiev, Oleksii; Flamary, Rémi; Collas, Antoine; Lalou, Yanis; de Mathelin, Antoine; Gramfort, Alexandre; Bueno, Ruben; Michel, Florent; Mellot, Apolline; Loison, Virginie; Odonnat, Ambroise; Moreau, Thomas (2024). "SKADA: Scikit Adaptation". https://github.com/scikit-adaptation/skada
de Mathelin, Antoine; Deheeger, François; Richard, Guillaume; Mougeot, Mathilde; Vayatis, Nicolas (2020). "ADAPT: Awesome Domain Adaptation Python Toolbox". https://github.com/adapt-python/adapt
Jiang, Junguang; Fu, Bo; Long, Mingsheng (2020). "Transfer-Learning-Library". https://github.com/thuml/Transfer-Learning-Library
Yan, Ke (2016). "Domain adaptation toolbox". https://github.com/viggin/domain-adaptation-toolbox