An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevance.
An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference of information retrieval searching compared to database searching.[2]
Depending on the application, the data objects may be, for example, text documents, images,[3] audio,[4] mind maps[5] or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.
Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6]
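As a minimal sketch of this score-and-rank loop, the example below weights terms with TF-IDF and scores documents by cosine similarity; the scoring function, toy documents, and function names are illustrative choices, not the method of any particular system.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF weight dictionary for each tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))   # document frequency
    idf = {t: math.log(n / df[t]) for t in df}                # inverse document frequency
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs], idf

def cosine(q, d):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    norm = math.sqrt(sum(w * w for w in q.values())) * math.sqrt(sum(w * w for w in d.values()))
    return dot / norm if norm else 0.0

def rank(query, docs, top_k=3):
    """Score every document against the query and return the top-k (score, doc_id) pairs."""
    doc_vecs, idf = tf_idf_vectors(docs)
    q_vec = {t: tf * idf.get(t, 0.0) for t, tf in Counter(query).items()}
    scores = [(cosine(q_vec, d_vec), doc_id) for doc_id, d_vec in enumerate(doc_vecs)]
    return sorted(scores, reverse=True)[:top_k]

docs = [["information", "retrieval", "ranks", "documents"],
        ["databases", "answer", "exact", "queries"],
        ["search", "engines", "rank", "web", "documents"]]
print(rank(["rank", "documents"], docs))  # the third document scores highest
```

Production systems replace this brute-force scan over all documents with index structures and more elaborate scoring functions such as BM25.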
there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute
— J. E. Holmstrom, 1948
The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945.[7] It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film.[8] The first description of a computer searching for information was given by Holmstrom in 1948,[9] in an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents).[10] Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s.
In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. The aim was to support the information retrieval community by supplying the infrastructure needed to evaluate text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further.
By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm,[11] using the web’s hyperlink structure to assess page importance and improve relevance ranking.
During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses[12] have highlighted Bing’s semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing.
A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[13] BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[14]
Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as MS MARCO (Microsoft MAchine Reading COmprehension, 2019)[15] became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[16]
As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models. Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[17] Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[18] Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This categorization reflects the trade-offs among scalability, relevance, and efficiency in retrieval systems.[19]
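As one illustration of the hybrid idea, the sketch below fuses the ranked lists produced by a hypothetical sparse retriever and a hypothetical dense retriever using reciprocal rank fusion, a common fusion strategy among several; the document identifiers are made up for the example and are not drawn from the cited systems.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists (e.g., one from a sparse BM25 index and one from a
    dense embedding index) into a single ranking using reciprocal rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked lists returned by a sparse and a dense retriever for one query.
sparse_ranking = ["d3", "d1", "d7", "d2"]
dense_ranking = ["d1", "d5", "d3", "d9"]
print(reciprocal_rank_fusion([sparse_ranking, dense_ranking]))
# d1 and d3 rise to the top because both retrievers rank them highly.
```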
As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come to the fore. Research now focuses not just on relevance and efficiency, but also on transparency, accountability, and user trust in retrieval algorithms.
Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category):
Methods/Techniques in which information retrieval techniques are employed include:
In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. Common models can be categorized according to two dimensions: the mathematical basis and the properties of the model.
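For term-based strategies, one widely used representation is the inverted index, which maps each term to the documents that contain it; the sketch below uses toy documents and is illustrative only, not any particular system's implementation.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for term in doc:
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = [["desk", "set", "romantic", "comedy"],
        ["cranfield", "collection", "test", "documents"],
        ["dialog", "retrieval", "system", "documents"]]
index = build_inverted_index(docs)

# Candidate documents for a query are the union (or intersection) of posting lists.
query = ["retrieval", "documents"]
candidates = set().union(*(index.get(t, []) for t in query))
print(sorted(candidates))  # -> [1, 2]
```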
In addition to these theoretical distinctions, modern information retrieval models are also categorized by how queries and documents are represented and compared, using a practical classification that distinguishes between sparse, dense, and hybrid models.[20]
This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for information retrieval models.[24][25]
Main article: Evaluation measures (information retrieval)
The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall. All measures assume a ground-truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
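For a single query with binary relevance judgments, precision and recall can be computed directly from the retrieved result set; the sketch below uses made-up document identifiers and judgments purely for illustration.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, given binary relevance judgments."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: the system returned 4 documents and found 3 of the 5 relevant ones.
print(precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                       relevant=["d1", "d3", "d4", "d8", "d9"]))  # -> (0.75, 0.6)
```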
Luk, R. W. P. (2022). "Why is information retrieval a scientific discipline?". Foundations of Science. 27 (2): 427–453. doi:10.1007/s10699-020-09685-x. hdl:10397/94873. S2CID 220506422.
Jansen, B. J.; Rieh, S. (2010). "The Seventeen Theoretical Constructs of Information Searching and Information Retrieval". Journal of the American Society for Information Sciences and Technology. 61 (8): 1517–1534. Archived 2016-03-04 at the Wayback Machine. https://faculty.ist.psu.edu/jjansen/academic/jansen_theoretical_constructs.pdf
Goodrum, Abby A. (2000). "Image Information Retrieval: An Overview of Current Research". Informing Science. 3 (2).
Foote, Jonathan (1999). "An overview of audio information retrieval". Multimedia Systems. 7: 2–10. CiteSeerX 10.1.1.39.6339. doi:10.1007/s005300050106. S2CID 2000641.
Beel, Jöran; Gipp, Bela; Stiller, Jan-Olaf (2009). "Information Retrieval On Mind Maps - What Could It Be Good For?". Proceedings of the 5th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom'09). Washington, DC: IEEE. Archived from the original on 2011-05-13. Retrieved 2012-03-13. https://web.archive.org/web/20110513214422/http://www.sciplore.org/publications_en.php
Frakes, William B.; Baeza-Yates, Ricardo (1992). Information Retrieval: Data Structures & Algorithms. Prentice-Hall. ISBN 978-0-13-463837-9. Archived from the original on 2013-09-28.
Singhal, Amit (2001). "Modern Information Retrieval: A Brief Overview" (PDF). Bulletin of the IEEE Computer Society Technical Committee on Data Engineering. 24 (4): 35–43. http://singhal.info/ieee2001.pdf
Sanderson, Mark; Croft, W. Bruce (2012). "The History of Information Retrieval Research". Proceedings of the IEEE. 100: 1444–1451. doi:10.1109/jproc.2012.2189916.
Holmstrom, J. E. (1948). "Section III. Opening Plenary Session". The Royal Society Scientific Information Conference, 21 June–2 July 1948: Report and Papers Submitted: 85. https://books.google.com/books?id=M34lAAAAMAAJ&q=univac
"The Anatomy of a Search Engine". infolab.stanford.edu. Retrieved 2025-04-09. http://infolab.stanford.edu/~backrub/google.html
Uyar, Ahmet; Aliyu, Farouk Musa (2015). "Evaluating search features of Google Knowledge Graph and Bing Satori: Entity types, list searches and query interfaces". Online Information Review. 39 (2): 197–213. doi:10.1108/OIR-10-2014-0257. ISSN 1468-4527. https://www.emerald.com/insight/content/doi/10.1108/oir-10-2014-0257/full/html
Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805 [cs.CL].
Gardazi, Nadia Mushtaq; Daud, Ali; Malik, Muhammad Kamran; Bukhari, Amal; Alsahfi, Tariq; Alshemaimri, Bader (2025-03-15). "BERT applications in natural language processing: a review". Artificial Intelligence Review. 58 (6): 166. doi:10.1007/s10462-025-11162-5. ISSN 1573-7462. https://link.springer.com/article/10.1007/s10462-025-11162-5
Bajaj, Payal; Campos, Daniel; Craswell, Nick; Deng, Li; Gao, Jianfeng; Liu, Xiaodong; Majumder, Rangan; McNamara, Andrew; Mitra, Bhaskar; Nguyen, Tri; Rosenberg, Mir; Song, Xia; Stoica, Alina; Tiwary, Saurabh; Wang, Tong (2016). "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset". arXiv:1611.09268 [cs.CL].
Craswell, Nick; Mitra, Bhaskar; Yilmaz, Emine; Rahmani, Hossein A.; Campos, Daniel; Lin, Jimmy; Voorhees, Ellen M.; Soboroff, Ian (2024-02-28). "Overview of the TREC 2023 Deep Learning Track". https://www.microsoft.com/en-us/research/publication/overview-of-the-trec-2023-deep-learning-track/
arXiv:2107.09226.
Khattab, Omar; Zaharia, Matei (2020-07-25). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20). New York, NY, USA: Association for Computing Machinery. pp. 39–48. doi:10.1145/3397271.3401075. ISBN 978-1-4503-8016-4.
Lin, Jimmy; Nogueira, Rodrigo; Yates, Andrew (2020). "Pretrained Transformers for Text Ranking: BERT and Beyond". arXiv:2010.06467 [cs.IR].
Kim, Dohyun; Zhao, Lina; Chung, Eric; Park, Eun-Jae (2021). "Pressure-robust staggered DG methods for the Navier-Stokes equations on general meshes". arXiv:2107.09226 [math.NA].
Thakur, Nandan; Reimers, Nils; Rücklé, Andreas; Srivastava, Abhishek; Gurevych, Iryna (2021). "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models". arXiv:2104.08663 [cs.IR].
Lau, Jey Han; Armendariz, Carlos; Lappin, Shalom; Purver, Matthew; Shu, Chang (2020). Johnson, Mark; Roark, Brian; Nenkova, Ani (eds.). "How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context". Transactions of the Association for Computational Linguistics. 8: 296–310. doi:10.1162/tacl_a_00315. https://aclanthology.org/2020.tacl-1.20/
Arabzadeh, Negar; Yan, Xinyi; Clarke, Charles L. A. (2021). "Predicting Efficiency/Effectiveness Trade-offs for Dense vs. Sparse Retrieval Strategy Selection". arXiv:2109.10739 [cs.IR].
Mooers, Calvin N. The Theory of Digital Handling of Non-numerical Information and its Implications to Machine Economics (Zator Technical Bulletin No. 48); cited in Fairthorne, R. A. (1958). "Automatic Retrieval of Recorded Information". The Computer Journal. 1 (1): 37. doi:10.1093/comjnl/1.1.36. https://babel.hathitrust.org/cgi/pt?id=mdp.39015034570591;view=1up;seq=3
Doyle, Lauren; Becker, Joseph (1975). Information Retrieval and Processing. Melville. 410 pp. ISBN 978-0-471-22151-7.
Perry, James W.; Kent, Allen; Berry, Madeline M. (1955). "Machine literature searching X. Machine language; factors underlying its design and development". American Documentation. 6 (4): 242–254. doi:10.1002/asi.5090060411.
Maron, Melvin E. (2008). "An Historical Note on the Origins of Probabilistic Indexing" (PDF). Information Processing and Management. 44 (2): 971–972. doi:10.1016/j.ipm.2007.02.012. http://yunus.hacettepe.edu.tr/~tonta/courses/spring2008/bby703/maron-on-probabilistic%20indexing-2008.pdf
Jardine, N.; van Rijsbergen, C. J. (December 1971). "The use of hierarchic clustering in information retrieval". Information Storage and Retrieval. 7 (5): 217–240. doi:10.1016/0020-0271(71)90051-9.
Doszkocs, T. E.; Rapp, B. A. (1979). "Searching MEDLINE in English: a Prototype User Interface with Natural Language Query, Ranked Output, and Relevance Feedback". Proceedings of the ASIS Annual Meeting, 16: 131–139.
Korfhage, Robert R. (1997). Information Storage and Retrieval. Wiley. 368 pp. ISBN 978-0-471-14338-3.
"History of Wikipedia". Wikipedia, 2025-02-21. Retrieved 2025-04-09. https://en.wikipedia.org/wiki/History_of_Wikipedia
Sullivan, Danny (2013-09-26). "FAQ: All About The New Google "Hummingbird" Algorithm". Search Engine Land. Retrieved 2025-04-09. https://searchengineland.com/google-hummingbird-172816
Khattab, Omar; Zaharia, Matei (2020). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". arXiv:2004.12832 [cs.IR].
Jones, Rosie; Zamani, Hamed; Schedl, Markus; Chen, Ching-Wei; Reddy, Sravana; Clifton, Ann; Karlgren, Jussi; Hashemi, Helia; Pappu, Aasish; Nazari, Zahra; Yang, Longqi; Semerci, Oguz; Bouchard, Hugues; Carterette, Ben (2021-07-11). "Current Challenges and Future Directions in Podcast Information Access". Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21). New York, NY, USA: Association for Computing Machinery. pp. 1554–1565. arXiv:2106.09227. doi:10.1145/3404835.3462805. ISBN 978-1-4503-8037-9.