tf–idf
(term frequency–inverse document frequency) a numerical statistic intended to reflect the importance of a word to a document in a collection or text corpus

In information retrieval, tf–idf (term frequency–inverse document frequency) measures the importance of a word to a document within a corpus, adjusting for the fact that some words appear frequently in general. It refines the bag-of-words model, which represents a document as a multiset of its words without regard to word order, by weighting each word according to how widely it occurs across the corpus. Widely used as a weighting factor in text mining, user modeling, and by search engines, tf–idf plays a crucial role in scoring and ranking documents' relevance to user queries. A common ranking function sums the tf–idf values of the query terms, and many more advanced ranking functions build on this approach.
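
As a rough sketch of this ranking idea (not any particular search engine's scoring function; the corpus, tokenization, and helper names below are illustrative), each document can be scored by summing the tf–idf of the query terms:

```python
import math

def tf(term, doc_tokens):
    """Relative frequency of `term` in a tokenized document."""
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, corpus):
    """Log of (total documents) / (documents containing `term`)."""
    n_t = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / n_t) if n_t else 0.0

def score(query_tokens, doc_tokens, corpus):
    """Rank score of one document: sum of tf-idf over the query terms."""
    return sum(tf(t, doc_tokens) * idf(t, corpus) for t in query_tokens)

corpus = [
    "the quick brown fox".split(),
    "the lazy brown dog".split(),
    "the quick red fox jumps over the lazy dog".split(),
]
query = "quick fox".split()
ranking = sorted(range(len(corpus)),
                 key=lambda i: score(query, corpus[i], corpus),
                 reverse=True)
print(ranking)  # document indices, most relevant to the query first
```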


Motivations

Karen Spärck Jones (1972) conceived a statistical interpretation of term-specificity called Inverse Document Frequency (idf), which became a cornerstone of term weighting:3

The specificity of a term can be quantified as an inverse function of the number of documents in which it occurs.

For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:4

Word        df    idf
Romeo        1    1.57
salad        2    1.27
Falstaff     4    0.967
forest      12    0.489
battle      21    0.246
wit         34    0.037
fool        36    0.012
good        37    0
sweet       37    0

We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing one of these words gives a good idea of which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.
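
The idf values in the table can be reproduced with a base-10 logarithm over N = 37 plays, as in this short sketch:

```python
import math

N = 37  # number of Shakespeare plays
df = {"Romeo": 1, "salad": 2, "Falstaff": 4, "forest": 12,
      "battle": 21, "wit": 34, "fool": 36, "good": 37, "sweet": 37}

for word, n in df.items():
    # idf = log10(N / df); e.g. Romeo -> 1.568, good -> 0.000
    print(f"{word:10s} df={n:2d} idf={math.log10(N / n):.3f}")
```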

Definition

The tf–idf is the product of two statistics, term frequency and inverse document frequency, and there are various ways of determining the exact values of both. Informally, it is a formula that aims to quantify the importance of a keyword or phrase within a document or a web page.
Variants of term frequency (tf) weight
weighting scheme              tf weight
binary                        0, 1
raw count                     f_{t,d}
term frequency                f_{t,d} \Big/ \sum_{t' \in d} f_{t',d}
log normalization             \log(1 + f_{t,d})
double normalization 0.5      0.5 + 0.5 \cdot \frac{f_{t,d}}{\max_{\{t' \in d\}} f_{t',d}}
double normalization K        K + (1 - K) \frac{f_{t,d}}{\max_{\{t' \in d\}} f_{t',d}}

Term frequency

Term frequency, tf(t,d), is the relative frequency of term t within document d,

\mathrm{tf}(t,d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}},

where ft,d is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency:5: 128 

  • the raw count itself: tf(t,d) = ft,d
  • Boolean "frequencies": tf(t,d) = 1 if t occurs in d and 0 otherwise;
  • logarithmically scaled frequency: tf(t,d) = log (1 + ft,d);6
  • augmented frequency, to prevent a bias towards longer documents, e.g. raw frequency divided by the raw frequency of the most frequently occurring term in the document:
\mathrm{tf}(t,d) = 0.5 + 0.5 \cdot \frac{f_{t,d}}{\max\{f_{t',d} : t' \in d\}}
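
As a minimal sketch of these variants (the tokenization and names are illustrative, not a canonical implementation), the weights can be computed from raw counts as follows:

```python
import math
from collections import Counter

def tf_variants(term, doc_tokens, k=0.5):
    """Several tf weightings of `term` in one tokenized document."""
    counts = Counter(doc_tokens)
    f = counts[term]              # raw count f_{t,d}
    max_f = max(counts.values())  # count of the most frequent term in the document
    return {
        "boolean": 1 if f > 0 else 0,
        "raw count": f,
        "relative frequency": f / len(doc_tokens),
        "log normalization": math.log(1 + f),
        "augmented frequency": k + (1 - k) * f / max_f,
    }

print(tf_variants("fox", "the quick brown fox jumps over the lazy fox".split()))
```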

Inverse document frequency

Variants of inverse document frequency (idf) weight
weighting scheme                            idf weight  (with n_t = |\{d \in D : t \in d\}|)
unary                                       1
inverse document frequency                  \log \frac{N}{n_t} = -\log \frac{n_t}{N}
inverse document frequency smooth           \log\left(\frac{N}{1 + n_t}\right) + 1
inverse document frequency max              \log\left(\frac{\max_{\{t' \in d\}} n_{t'}}{1 + n_t}\right)
probabilistic inverse document frequency    \log \frac{N - n_t}{n_t}

The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):

\mathrm{idf}(t, D) = \log \frac{N}{|\{d : d \in D \text{ and } t \in d\}|}

with

  • N: total number of documents in the corpus, N = |D|
  • |\{d \in D : t \in d\}|: number of documents in which the term t appears (i.e., \mathrm{tf}(t,d) \neq 0). If the term is not in the corpus, this leads to a division by zero. It is therefore common to adjust the numerator to 1 + N and the denominator to 1 + |\{d \in D : t \in d\}|.
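
A minimal sketch of idf, including the add-one adjustment mentioned above (the corpus and function names are illustrative; the natural logarithm is used here):

```python
import math

def idf(term, corpus, smooth=False):
    """Inverse document frequency of `term` over a list of tokenized documents."""
    N = len(corpus)
    n_t = sum(1 for doc in corpus if term in doc)  # documents containing the term
    if smooth:
        # Adjust numerator and denominator by 1 so unseen terms do not divide by zero.
        return math.log((1 + N) / (1 + n_t))
    return math.log(N / n_t)

corpus = ["this is a a sample".split(),
          "this is another another example example example".split()]
print(idf("this", corpus))                 # 0.0: the word appears in every document
print(idf("example", corpus))              # log(2/1) ≈ 0.693
print(idf("unseen", corpus, smooth=True))  # defined only thanks to the adjustment
```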

Term frequency–inverse document frequency

Variants of term frequency–inverse document frequency (tf–idf) weights
weighting scheme            tf–idf
count-idf                   f_{t,d} \cdot \log \frac{N}{n_t}
double normalization-idf    \left(0.5 + 0.5\,\frac{f_{t,q}}{\max_t f_{t,q}}\right) \cdot \log \frac{N}{n_t}
log normalization-idf       (1 + \log f_{t,d}) \cdot \log \frac{N}{n_t}

Then tf–idf is calculated as

\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t, D)

A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.
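
To make this limiting behaviour concrete, here is a tiny numerical sketch with a hypothetical corpus of 1000 documents (natural logarithm), showing idf, and hence tf–idf, approaching zero as a term appears in more documents:

```python
import math

N = 1000  # hypothetical corpus size
for n_t in (1, 10, 100, 500, 1000):  # number of documents containing the term
    print(f"term in {n_t:4d}/{N} documents -> idf = {math.log(N / n_t):.3f}")
# The idf shrinks toward 0 for ubiquitous terms, so tf-idf = tf * idf does too.
```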

Justification of idf

Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper. Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information theoretic justifications for it.7

Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law.8 Attempts have been made to put idf on a probabilistic footing,9 by estimating the probability that a given document d contains a term t as the relative document frequency,

P(t|D) = \frac{|\{d \in D : t \in d\}|}{N},

so that we can define idf as

\begin{aligned} \mathrm{idf} &= -\log P(t|D) \\ &= \log \frac{1}{P(t|D)} \\ &= \log \frac{N}{|\{d \in D : t \in d\}|} \end{aligned}

Namely, the inverse document frequency is the logarithm of the "inverse" relative document frequency.

This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to difficulties when trying to define the appropriate event spaces for the required probability distributions: not only documents, but also queries and terms, need to be taken into account.10

Both term frequency and inverse document frequency can be formulated in terms of information theory; this helps to explain why their product is meaningful for the joint informational content of a document. A characteristic assumption about the distribution p(d, t) is that:

p(d|t) = \frac{1}{|\{d \in D : t \in d\}|}

This assumption and its implications, according to Aizawa: "represent the heuristic that tf–idf employs."11

The conditional entropy of a "randomly chosen" document in the corpus D, conditioned on the fact that it contains a specific term t (and assuming that all documents have equal probability of being chosen), is:

H(\mathcal{D} \mid \mathcal{T} = t) = -\sum_d p_{d|t} \log p_{d|t} = -\log \frac{1}{|\{d \in D : t \in d\}|} = \log \frac{|\{d \in D : t \in d\}|}{|D|} + \log |D| = -\mathrm{idf}(t) + \log |D|

In this notation, \mathcal{D} and \mathcal{T} are "random variables" corresponding to drawing a document or a term, respectively. The mutual information can be expressed as

M(\mathcal{T}; \mathcal{D}) = H(\mathcal{D}) - H(\mathcal{D} \mid \mathcal{T}) = \sum_t p_t \cdot \bigl(H(\mathcal{D}) - H(\mathcal{D} \mid \mathcal{T} = t)\bigr) = \sum_t p_t \cdot \mathrm{idf}(t)

The last step is to expand p_t, the unconditional probability of drawing a term, with respect to the (random) choice of a document, to obtain:

M(\mathcal{T}; \mathcal{D}) = \sum_{t,d} p_{t|d} \cdot p_d \cdot \mathrm{idf}(t) = \sum_{t,d} \mathrm{tf}(t,d) \cdot \frac{1}{|D|} \cdot \mathrm{idf}(t) = \frac{1}{|D|} \sum_{t,d} \mathrm{tf}(t,d) \cdot \mathrm{idf}(t).

This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.12 Each tf–idf hence carries the "bit of information" attached to a term–document pair.
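
A small numerical sketch of the last expansion (with an illustrative two-document corpus and the natural logarithm) checks that averaging tf–idf over documents and summing over terms gives the same value as \sum_t p_t \cdot \mathrm{idf}(t) with p_t taken as the document-averaged relative frequency:

```python
import math

corpus = ["this is a a sample".split(),
          "this is another another example example example".split()]
vocab = {t for doc in corpus for t in doc}
N = len(corpus)

def tf(t, doc): return doc.count(t) / len(doc)
def idf(t): return math.log(N / sum(1 for doc in corpus if t in doc))

# (1/|D|) * sum over all term-document pairs of tf * idf
lhs = sum(tf(t, d) * idf(t) for t in vocab for d in corpus) / N

# sum over terms of p_t * idf(t), with p_t = average of tf(t, d) over documents
p_t = {t: sum(tf(t, d) for d in corpus) / N for t in vocab}
rhs = sum(p_t[t] * idf(t) for t in vocab)

print(abs(lhs - rhs) < 1e-12)  # True: the two sides of the expansion agree
```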

Example of tf–idf

Suppose that we have term count tables of a corpus consisting of only two documents, as listed below.

Document 1
Term      Term Count
this      1
is        1
a         2
sample    1

Document 2
Term      Term Count
this      1
is        1
another   2
example   3

The calculation of tf–idf for the term "this" is performed as follows:

In its raw frequency form, tf is just the frequency of "this" in each document. The word "this" appears once in each document, but because document 2 has more words, its relative frequency is smaller.

\mathrm{tf}(\text{"this"}, d_1) = \frac{1}{5} = 0.2
\mathrm{tf}(\text{"this"}, d_2) = \frac{1}{7} \approx 0.14

The idf is constant per corpus and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents, and both of them include the word "this".

\mathrm{idf}(\text{"this"}, D) = \log\left(\frac{2}{2}\right) = 0

So tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents.

\mathrm{tfidf}(\text{"this"}, d_1, D) = 0.2 \times 0 = 0
\mathrm{tfidf}(\text{"this"}, d_2, D) = 0.14 \times 0 = 0

The word "example" is more interesting - it occurs three times, but only in the second document:

\mathrm{tf}(\text{"example"}, d_1) = \frac{0}{5} = 0
\mathrm{tf}(\text{"example"}, d_2) = \frac{3}{7} \approx 0.429
\mathrm{idf}(\text{"example"}, D) = \log\left(\frac{2}{1}\right) = 0.301

Finally,

\mathrm{tfidf}(\text{"example"}, d_1, D) = \mathrm{tf}(\text{"example"}, d_1) \times \mathrm{idf}(\text{"example"}, D) = 0 \times 0.301 = 0
\mathrm{tfidf}(\text{"example"}, d_2, D) = \mathrm{tf}(\text{"example"}, d_2) \times \mathrm{idf}(\text{"example"}, D) = 0.429 \times 0.301 \approx 0.129

(using the base 10 logarithm).
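
The example can be reproduced in a few lines of Python (a sketch using the base-10 logarithm, as in the calculation above; the tokenized documents mirror the term-count tables):

```python
import math

d1 = "this is a a sample".split()                               # document 1
d2 = "this is another another example example example".split()  # document 2
D = [d1, d2]

def tf(t, d): return d.count(t) / len(d)
def idf(t): return math.log10(len(D) / sum(1 for d in D if t in d))
def tfidf(t, d): return tf(t, d) * idf(t)

print(tfidf("this", d1), tfidf("this", d2))  # 0.0 0.0
print(round(tfidf("example", d2), 3))        # 0.129
```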

Beyond terms

The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.13 The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". In addition, tf–idf was applied to "visual words" with the purpose of conducting object matching in videos,14 and to entire sentences.15 However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.16

Derivatives

A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).17 TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference in how often a term occurs in different domains. Another derivative is TF–IDuF. In TF–IDuF,18 idf is not calculated based on the document corpus that is to be searched or recommended; instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was as effective as tf–idf but could also be applied in situations where, e.g., a user modeling system has no access to a global document corpus.


References

  1. Rajaraman, A.; Ullman, J.D. (2011). "Data Mining" (PDF). Mining of Massive Datasets. pp. 1–17. doi:10.1017/CBO9781139058452.002. ISBN 978-1-139-05845-2.

  2. Breitinger, Corinna; Gipp, Bela; Langer, Stefan (2015-07-26). "Research-paper recommender systems: a literature survey". International Journal on Digital Libraries. 17 (4): 305–338. doi:10.1007/s00799-015-0156-0. ISSN 1432-5012. S2CID 207035184. https://kops.uni-konstanz.de/bitstreams/8b886e4a-ea4b-4eae-bba1-19918f353170/download

  3. Spärck Jones, K. (1972). "A Statistical Interpretation of Term Specificity and Its Application in Retrieval". Journal of Documentation. 28 (1): 11–21. CiteSeerX 10.1.1.115.8343. doi:10.1108/eb026526. S2CID 2996187.

  4. Speech and Language Processing (3rd ed. draft), Dan Jurafsky and James H. Martin, chapter 14. https://web.stanford.edu/~jurafsky/slp3/14.pdf

  5. Manning, C.D.; Raghavan, P.; Schutze, H. (2008). "Scoring, term weighting, and the vector space model" (PDF). Introduction to Information Retrieval. p. 100. doi:10.1017/CBO9780511809071.007. ISBN 978-0-511-80907-1.

  6. "TFIDF statistics | SAX-VSM". https://jmotif.github.io/sax-vsm_site/morea/algorithm/TFIDF.html

  7. Robertson, S. (2004). "Understanding inverse document frequency: On theoretical arguments for IDF". Journal of Documentation. 60 (5): 503–520. doi:10.1108/00220410410560582.

  8. Robertson, S. (2004). "Understanding inverse document frequency: On theoretical arguments for IDF". Journal of Documentation. 60 (5): 503–520. doi:10.1108/00220410410560582.

  9. See also Probability estimates in practice in Introduction to Information Retrieval. http://nlp.stanford.edu/IR-book/html/htmledition/probability-estimates-in-practice-1.html#p:justificationofidf

  10. Robertson, S. (2004). "Understanding inverse document frequency: On theoretical arguments for IDF". Journal of Documentation. 60 (5): 503–520. doi:10.1108/00220410410560582.

  11. Aizawa, Akiko (2003). "An information-theoretic perspective of tf–idf measures". Information Processing and Management. 39 (1): 45–65. doi:10.1016/S0306-4573(02)00021-3. S2CID 45793141.

  12. Aizawa, Akiko (2003). "An information-theoretic perspective of tf–idf measures". Information Processing and Management. 39 (1): 45–65. doi:10.1016/S0306-4573(02)00021-3. S2CID 45793141.

  13. Bollacker, Kurt D.; Lawrence, Steve; Giles, C. Lee (1998-01-01). "CiteSeer". Proceedings of the second international conference on Autonomous agents - AGENTS '98. pp. 116–123. doi:10.1145/280765.280786. ISBN 978-0-89791-983-8. S2CID 3526393.

  14. Sivic, Josef; Zisserman, Andrew (2003-01-01). "Video Google: A text retrieval approach to object matching in videos". Proceedings Ninth IEEE International Conference on Computer Vision. ICCV '03. pp. 1470–. doi:10.1109/ICCV.2003.1238663. ISBN 978-0-7695-1950-0. S2CID 14457153.

  15. Seki, Yohei. "Sentence Extraction by tf/idf and Position Weighting from Newspaper Articles" (PDF). National Institute of Informatics. http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings3/NTCIR3-TSC-SekiY.pdf

  16. Beel, Joeran; Breitinger, Corinna (2017). "Evaluating the CC-IDF citation-weighting scheme – How effectively can 'Inverse Document Frequency' (IDF) be applied to references?" (PDF). Proceedings of the 12th IConference. Archived from the original (PDF) on 2020-09-22. Retrieved 2017-01-29. https://web.archive.org/web/20200922150304/http://beel.org/publications/2017%20iConference%20--%20Evaluating%20the%20CC-IDF%20citation-weighting%20scheme%20--%20preprint.pdf

  17. Khoo Khyou Bun; Ishizuka, M. (2001). "Emerging Topic Tracking System". Proceedings Third International Workshop on Advanced Issues of E-Commerce and Web-Based Information Systems. WECWIS 2001. pp. 2–11. CiteSeerX 10.1.1.16.7986. doi:10.1109/wecwis.2001.933900. ISBN 978-0-7695-1224-2. S2CID 1049263.

  18. Langer, Stefan; Gipp, Bela (2017). "TF-IDuF: A Novel Term-Weighting Scheme for User Modeling based on Users' Personal Document Collections" (PDF). IConference. https://www.gipp.com/wp-content/papercite-data/pdf/beel17.pdf