The main advantage of a lazy learning method is that the target function is approximated locally, as in the k-nearest neighbor algorithm. Because the target function is approximated anew for each query to the system, lazy learning systems can simultaneously solve multiple problems and deal successfully with changes in the problem domain. At the same time, they can reuse many theoretical and applied results from linear regression modelling (notably the PRESS statistic) and from control.[3] The advantage is greatest when predictions from a single training set are needed for only a few objects,[4] as in the k-NN technique, which is instance-based and estimates the target function only locally.[5][6]
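As a concrete illustration (a minimal sketch, not drawn from the cited sources), the following k-NN regressor defers all computation to query time: "training" merely stores the data, and the target function is approximated locally around each query point. The data and the choice of k are illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Lazily approximate the target function around x_query:
    nothing is fit in advance; all work happens at query time."""
    # Distance from the query to every stored training example.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Local estimate: average the targets of the k nearest neighbours.
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# "Training" is just storing the data.
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0.0, 1.2, 1.9, 3.1, 4.0])
print(knn_predict(X_train, y_train, np.array([2.4])))  # local estimate near x = 2.4
```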
Theoretical disadvantages of lazy learning include the need to store the entire training set, sensitivity to noise in the stored examples, and slow evaluation at query time, since most of the computation is deferred until a prediction is requested.
There are standard techniques to improve re-computation efficiency so that a particular answer is not recomputed unless the data that impact this answer have changed (e.g., new items, new purchases, new views). In other words, the stored answers are updated incrementally.
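The following is a minimal sketch of such incremental updating, assuming a simple in-memory cache keyed by item; `SimilarityCache` and `compute_fn` are hypothetical names for illustration, not a real library API.

```python
class SimilarityCache:
    """Stores computed answers and invalidates only those affected by
    new data (e.g., new items, purchases, or views)."""

    def __init__(self, compute_fn):
        self.compute_fn = compute_fn  # expensive "find similar" computation
        self._answers = {}            # item_id -> cached answer

    def get(self, item_id):
        # Recompute only when no valid stored answer exists.
        if item_id not in self._answers:
            self._answers[item_id] = self.compute_fn(item_id)
        return self._answers[item_id]

    def on_data_change(self, affected_ids):
        # Incremental update: drop only the answers whose inputs changed;
        # they are recomputed on the next query.
        for item_id in affected_ids:
            self._answers.pop(item_id, None)
```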
This approach, common on large e-commerce and media sites, has long been used in the Entrez portal of the National Center for Biotechnology Information (NCBI) to precompute similarities between the items in its large datasets: biological sequences, 3-D protein structures, published-article abstracts, etc. Because "find similar" queries are asked so frequently, the NCBI uses highly parallel hardware to perform nightly recomputation. The recomputation is performed only for pairs involving new entries: new entries are compared against each other and against existing entries, while the similarity between two existing entries need not be recomputed.
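This bookkeeping can be sketched as follows (an illustrative outline under assumed names, not NCBI's actual pipeline): each nightly run scores only the pairs that involve at least one new entry.

```python
from itertools import combinations, product

def nightly_similarity_update(existing_ids, new_ids, similarity):
    """Score only pairs involving at least one new entry; pairs of two
    existing entries were scored in an earlier run and are left alone."""
    scores = {}
    for a, b in combinations(new_ids, 2):        # new vs. new
        scores[(a, b)] = similarity(a, b)
    for a, b in product(new_ids, existing_ids):  # new vs. existing
        scores[(a, b)] = similarity(a, b)
    return scores  # to be merged into the persistent similarity table
```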
1. Aha, David (29 June 2013). Lazy Learning (illustrated ed.). Springer Science & Business Media. p. 424. ISBN 978-9401720533.
2. Tamrakar, Preeti; Roy, Siddharth Singha; Satapathy, Biswajit; Ibrahim, S. P. Syed (2019). "Integration of lazy learning associative classification with kNN algorithm". pp. 1–4. doi:10.1109/ViTECoN.2019.8899415. ISBN 978-1-5386-9353-7.
3. Bontempi, Gianluca; Birattari, Mauro; Bersini, Hugues (1 January 1999). "Lazy learning for local modelling and control design". International Journal of Control. 72 (7–8): 643–658. doi:10.1080/002071799220830.
4. Sammut, Claude; Webb, Geoffrey I. (2011). Encyclopedia of Machine Learning. New York: Springer Science & Business Media. p. 572. ISBN 9780387307688.
5. Pal, Saurabh (2 November 2017). Data Mining Applications. A Comparative Study for Predicting Student's Performance. GRIN Verlag. ISBN 9783668561458.
6. Loncarevic, Zvezdan; Simonic, Mihael; Ude, Ales; Gams, Andrej (2022). "Combining Reinforcement Learning and Lazy Learning for Faster Few-Shot Transfer Learning". pp. 285–290. doi:10.1109/Humanoids53995.2022.10000095. ISBN 979-8-3503-0979-9.
7. Aha, David W. (2013). Lazy Learning. Berlin: Springer Science & Business Media. p. 106. ISBN 9789401720533.