Generative pre-trained transformer
Type of large language model

A generative pre-trained transformer (GPT) is a type of large language model and a key framework in generative artificial intelligence, based on the transformer deep learning architecture. Originally introduced by OpenAI in 2018, GPT models are pre-trained on large datasets of unlabeled text to generate human-like content and are widely used in natural language processing. OpenAI's GPT series, culminating in GPT-4o (2024), powers the popular ChatGPT chatbot. Other organizations, such as EleutherAI and Cerebras, have developed their own GPT models, while companies such as Salesforce and Bloomberg have created domain-specific GPTs.


History

Initial developments

Generative pretraining (GP) was a long-established concept in machine learning applications.[19][20] It was originally used as a form of semi-supervised learning: the model is first trained on an unlabeled dataset (the pretraining step) by learning to generate datapoints from it, and is then trained to classify a labeled dataset.[21]
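
As a minimal sketch of this two-stage pattern, the following hypothetical PyTorch example (toy model, toy data, invented sizes; not any published system) pretrains a small network to generate unlabeled token sequences and then fine-tunes it to classify a labeled set:

```python
# Minimal sketch of generative pretraining as semi-supervised learning.
# All data and dimensions here are toy stand-ins, purely illustrative.
import torch
import torch.nn as nn

VOCAB, DIM, NUM_CLASSES = 100, 32, 2

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.lm_head = nn.Linear(DIM, VOCAB)          # used during pretraining
        self.cls_head = nn.Linear(DIM, NUM_CLASSES)   # used during fine-tuning

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return h

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Stage 1: pretraining -- learn to generate the unlabeled data
# (predict each next token from the ones before it).
unlabeled = torch.randint(0, VOCAB, (8, 20))          # stand-in unlabeled corpus
h = model(unlabeled[:, :-1])
loss = ce(model.lm_head(h).reshape(-1, VOCAB), unlabeled[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: fine-tuning -- reuse the pretrained representation
# to classify a (much smaller) labeled dataset.
labeled_x = torch.randint(0, VOCAB, (4, 20))
labeled_y = torch.randint(0, NUM_CLASSES, (4,))
h = model(labeled_x)
loss = ce(model.cls_head(h[:, -1]), labeled_y)        # classify from final hidden state
loss.backward(); opt.step(); opt.zero_grad()
```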

There were three main types of early GP. Hidden Markov models (HMMs) learn a generative model of sequences for downstream applications. For example, in speech recognition, a trained HMM infers the most likely hidden state sequence for a speech signal, and that hidden sequence is taken as the phonemes of the signal. HMMs were developed in the 1970s and became widely applied in speech recognition in the 1980s.[22][23]
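
The inference step described here is typically done with the Viterbi algorithm. Below is a minimal sketch with invented toy transition and emission tables, not a real speech recognizer:

```python
# Minimal Viterbi decoding sketch: recover the most likely hidden
# state sequence of an HMM given observations. Tables are invented.
import numpy as np

states = ["ph_a", "ph_b"]                    # hypothetical phoneme states
start = np.log(np.array([0.6, 0.4]))         # P(initial state)
trans = np.log(np.array([[0.7, 0.3],
                         [0.4, 0.6]]))       # P(next state | state)
emit = np.log(np.array([[0.5, 0.4, 0.1],
                        [0.1, 0.3, 0.6]]))   # P(observation | state)

def viterbi(obs):
    """Return the most likely hidden state sequence for `obs`."""
    v = start + emit[:, obs[0]]              # best log-prob ending in each state
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans          # extend every path by one transition
        back.append(scores.argmax(axis=0))   # best predecessor of each state
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for b in reversed(back):                 # trace the best path backwards
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2, 2]))                 # e.g. ['ph_a', 'ph_a', 'ph_b', 'ph_b']
```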

Compressors learn to compress data such as images and textual sequences, and the compressed data serves as a good representation for downstream applications such as facial recognition.[24][25][26] Autoencoders similarly learn a latent representation of data for later downstream applications such as speech recognition.[27][28] The connection between autoencoders and algorithmic compressors was noted in 1993.[29]
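
A minimal autoencoder sketch (toy vector data, illustrative sizes) showing how a learned latent code can later serve as a feature representation:

```python
# Minimal autoencoder: compress 64-dim inputs to an 8-dim latent code,
# train by reconstruction, then reuse the encoder for downstream features.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64, 8), nn.ReLU())   # compress 64 -> 8
dec = nn.Sequential(nn.Linear(8, 64))              # reconstruct 8 -> 64
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(32, 64)                            # stand-in data batch
for _ in range(100):
    z = enc(x)                                     # latent representation
    loss = nn.functional.mse_loss(dec(z), x)       # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

features = enc(x).detach()                         # reusable features for a downstream model
```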

See also: Transformer (deep learning architecture) § History

During the 2010s, machine translation was addressed with recurrent neural networks augmented by attention mechanisms. This approach was refined into the transformer architecture, published by Google researchers in Attention Is All You Need (2017).[30] That development led to the emergence of large language models such as BERT (2018),[31] which was a pre-trained transformer (PT) but not designed to be generative (BERT was an "encoder-only" model). Also in 2018, OpenAI published Improving Language Understanding by Generative Pre-Training, which introduced GPT-1, the first model in its GPT series.[32]

Earlier, in 2017, some of the authors who would later work on GPT-1 had worked on generative pre-training of language with LSTMs, which produced a model that could represent text as vectors and be easily fine-tuned for downstream applications.[33]

Prior to transformer-based architectures, the best-performing neural models for natural language processing (NLP) commonly employed supervised learning on large amounts of manually labeled data. This reliance on supervised learning limited their use on datasets that were not well annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.[34]

The semi-supervised approach OpenAI employed to make a large-scale generative system, and the first it applied to a transformer model, involved two stages: an unsupervised generative "pretraining" stage that sets initial parameters using a language-modeling objective, and a supervised discriminative "fine-tuning" stage that adapts these parameters to a target task.[35]
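
Concretely, the GPT-1 paper formalizes the two stages as follows: pretraining maximizes a standard language-modeling likelihood over an unlabeled token corpus U with context window k, and fine-tuning then maximizes the probability of label y given input tokens x^1, ..., x^m (notation as in that paper):

$$L_1(\mathcal{U}) = \sum_i \log P(u_i \mid u_{i-k}, \dots, u_{i-1}; \Theta), \qquad L_2(\mathcal{C}) = \sum_{(x,y)} \log P(y \mid x^1, \dots, x^m)$$

The paper also reports that keeping the language-modeling term as an auxiliary objective during fine-tuning, optimizing $L_2 + \lambda L_1$, improved generalization.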

Later developments

Regarding more recent GPT foundation models, OpenAI published its first versions of GPT-3 in July 2020. There were three models, with 1B, 6.7B, and 175B parameters, respectively named babbage, curie, and davinci (giving the initials B, C, and D).

In July 2021, OpenAI published Codex, a task-specific GPT model targeted at programming applications. It was developed by fine-tuning a 12B-parameter version of GPT-3 (distinct from earlier GPT-3 models) on code from GitHub.[36]

In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), named davinci-instruct-beta (175B) and text-davinci-001,[37] and then started beta testing code-davinci-002.[38] text-davinci-002 was instruction-tuned from code-davinci-002. Both text-davinci-003 and ChatGPT were released in November 2022, each building upon text-davinci-002 via reinforcement learning from human feedback (RLHF). text-davinci-003 is trained to follow instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user.[39][40]

OpenAI released its GPT-4 foundation model on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI's API. Other producers of GPT foundation models include EleutherAI (with a series of models starting in March 2021)[41] and Cerebras (with seven models released in March 2023).[42]

Foundation models

A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.[43][44]

Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. Among the most prominent of these is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").[45]

OpenAI's GPT-n series
Model | Architecture | Parameter count | Training data | Release date | Training cost
GPT-1 | 12-level, 12-headed transformer decoder (no encoder), followed by linear-softmax | 117 million | BookCorpus:[46] 4.5 GB of text, from 7,000 unpublished books of various genres | June 11, 2018[47] | 30 days on 8 P600 graphics cards, or 1 petaFLOP/s-day[48]
GPT-2 | GPT-1, but with modified normalization | 1.5 billion | WebText: 40 GB of text, 8 million documents, from 45 million webpages upvoted on Reddit | February 14, 2019 (initial/limited version) and November 5, 2019 (full version)[49] | "tens of petaFLOP/s-days",[50] or 1.5×10^21 FLOP[51]
GPT-3 | GPT-2, but with modification to allow larger scaling | 175 billion[52] | 499 billion tokens consisting of CommonCrawl (570 GB), WebText, English Wikipedia, and two books corpora (Books1 and Books2) | May 28, 2020[53] | 3,640 petaFLOP/s-days (Table D.1),[54] or 3.1×10^23 FLOP[55]
GPT-3.5 | Undisclosed | 175 billion[56] | Undisclosed | March 15, 2022 | Undisclosed
GPT-4 | Also trained with both text prediction and RLHF; accepts both text and images as input; further details are not public[57] | Undisclosed (estimated 1.7 trillion)[58] | Undisclosed | March 14, 2023 | Undisclosed (estimated 2.1×10^25 FLOP)[59]
GPT-4o | ? | ? | ? | May 13, 2024 | ?
GPT-4.5 | ? | ? | ? | February 27, 2025 | ?
GPT-4.1 | ? | ? | ? | April 14, 2025 | ?
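
For the compute units in the table: one petaFLOP/s-day is 10^15 floating-point operations per second sustained for one day, or about 8.64×10^19 FLOP, so GPT-3's 3,640 petaFLOP/s-days works out to roughly 3,640 × 8.64×10^19 ≈ 3.1×10^23 FLOP, consistent with the two figures given above.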

Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API,[60][61] and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs).[62] Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA.[63]

Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multimodal LLM capable of processing text and image input (though its output is limited to text).[64] Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion[65] and parallel decoding.[66] Such models can serve as visual foundation models (VFMs) for developing downstream systems that work with images.[67]

Task-specific models

A foundational GPT model can be further adapted to produce more targeted systems directed at specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering.[68]
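
As an illustration of the prompt-engineering route (no fine-tuning at all), the sketch below builds a few-shot prompt around a hypothetical complete() function standing in for a call to some GPT model; the template, labels, and helper names are invented for this example:

```python
# Few-shot prompting sketch: the in-prompt examples steer the model
# toward a task without changing its weights. `complete` is a stand-in
# for whatever text-completion call is available.
FEW_SHOT = """Classify the sentiment of each review as Positive or Negative.

Review: The battery died within a week.
Sentiment: Negative

Review: Crisp screen and fast shipping, very happy.
Sentiment: Positive

Review: {review}
Sentiment:"""

def classify(review: str, complete) -> str:
    prompt = FEW_SHOT.format(review=review)
    return complete(prompt).strip()   # model is expected to answer with one label
```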

An important example of this is fine-tuning models to follow instructions, which is still a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced "InstructGPT", a series of models fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models.[69][70] Advantages this had over the bare foundation models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using it as the basis for its API service offerings.[71] Other instruction-tuned models have since been released by others, including a fully open version.[72][73]
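
For the supervised part of instruction tuning, training data consists of (prompt, human-written response) pairs; the sketch below illustrates the general shape of such data with invented examples, not OpenAI's actual dataset:

```python
# Hedged illustration of instruction-tuning data; the pairs are invented.
instruction_data = [
    {"prompt": "Summarize: The meeting moved from Monday to Tuesday at 3pm.",
     "response": "The meeting is now on Tuesday at 3pm."},
    {"prompt": "Translate to French: Good morning.",
     "response": "Bonjour."},
]

# Each pair is typically concatenated into one token sequence, and the model
# is fine-tuned with the same next-token objective used in pretraining,
# usually with the loss computed only on the response tokens.
def format_example(pair: dict) -> str:
    return f"{pair['prompt']}\n{pair['response']}"
```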

Another (related) kind of task-specific model is the chatbot, which engages in human-like conversation. In November 2022, OpenAI launched ChatGPT, an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT.[74] OpenAI trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset to produce a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft),[75] and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM).[76]

Yet another kind of task for which a GPT can be used is the meta-task of generating its own instructions, such as developing a series of prompts for "itself" in order to effectuate a more general goal given by a human user.[77] This is known as an AI agent, and more specifically a recursive one, because it uses results from its previous self-instructions to help form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.[78]
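
A minimal sketch of such a recursive agent loop, assuming hypothetical complete() (model call) and run() (tool execution) helpers rather than any specific framework's API:

```python
# Recursive agent loop sketch: the model writes its own next instruction,
# and the result of each step is fed back into the next prompt.
def agent(goal: str, complete, run, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  "Completed steps and results:\n" + "\n".join(history) +
                  "\nNext instruction (or DONE):")
        instruction = complete(prompt).strip()   # model proposes its next step
        if instruction == "DONE":
            break
        result = run(instruction)                # execute it (search, code, etc.)
        history.append(f"{instruction} -> {result}")
    return history
```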

Multimodality

Generative transformer-based systems can also be targeted for tasks involving modalities beyond text. For example, Microsoft's "Visual ChatGPT" combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text.[79] Also, advances in text-to-speech technology offer tools for audio content creation when used in conjunction with foundational GPT language models.[80]

Domain-specificity

GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:

  • EinsteinGPT – for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5)[81][82]
  • BloombergGPT – for the financial domain, to aid with financial news and information (uses "freely available" AI methods, combined with their proprietary data)[83]
  • Khanmigo – described as a GPT version for tutoring, in the education domain; it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4)[84][85]
  • SlackGPT – for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API)[86]
  • BioGPT – for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)[87]

Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several companies have developed plugins that interact directly with OpenAI's ChatGPT interface,[88][89] and Google Workspace has add-ons such as "GPT for Sheets and Docs", which is reported to aid use of spreadsheet functionality in Google Sheets.[90][91]

Brand issues

OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI.[92] In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding.[93] In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist).[94] As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT",[95] but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are called GPTs on the OpenAI site.[96] OpenAI's terms of service say that its subscribers may use "GPT" in the names of these, although it is "discouraged".[97]

Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) for domestic trademark registration of the term "GPT" in the field of AI.[98] OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023.[99] In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic.[100] As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S. trademark does not preclude some level of common-law trademark rights in the U.S.[101] and/or trademark rights in other countries.[102]

For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to its specific offerings, beyond being a broader technical term for this kind of technology. Some media reports suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT,[103][104] for which OpenAI has separately sought protection (and which it has sought to enforce more strongly).[105] Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted,[106][107] as it is used frequently as a common term referring simply to AI systems that involve generative pre-trained transformers.[108][109][110][111] In any event, to whatever extent exclusive rights in the term may arise in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion.[112][113] If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use could still preserve non-brand-related usage.[114]

Selected bibliography

This section lists the main official publications from OpenAI and Microsoft on their GPT models.

  • GPT-1: report,[115] GitHub release.[116]
  • GPT-2: blog announcement,[117] report on its decision of "staged release",[118] GitHub release.[119]
  • GPT-3: report.[120] No GitHub or other code release has followed since.
  • WebGPT: blog announcement,[121] report.[122]
  • InstructGPT: blog announcement,[123] report.[124]
  • ChatGPT: blog announcement (no report).[125]
  • GPT-4: blog announcement,[126] reports,[127][128] model card.[129]
  • GPT-4o: blog announcement.[130]
  • GPT-4.5: blog announcement.[131]
  • GPT-4.1: blog announcement.[132]

References

  1. Haddad, Mohammed. "How does GPT-4 work and how can you start using it in ChatGPT?". www.aljazeera.com. Archived from the original on July 5, 2023. Retrieved April 10, 2023. https://www.aljazeera.com/news/2023/3/15/how-do-ai-models-like-gpt-4-work-and-how-can-you-start-using-it

  2. "Generative AI: a game-changer society needs to be ready for". World Economic Forum. January 9, 2023. Archived from the original on April 25, 2023. Retrieved April 8, 2023. https://www.weforum.org/agenda/2023/01/davos23-generative-ai-a-game-changer-industries-and-society-code-developers/

  3. "The A to Z of Artificial Intelligence". Time. April 13, 2023. Archived from the original on June 16, 2023. Retrieved April 14, 2023. https://time.com/6271657/a-to-z-of-artificial-intelligence/

  4. Hu, Luhui (November 15, 2022). "Generative AI and Future". Medium. Archived from the original on June 5, 2023. Retrieved April 29, 2023. https://pub.towardsai.net/generative-ai-and-future-c3b1695876f2

  5. "CSDL | IEEE Computer Society". www.computer.org. Archived from the original on April 28, 2023. Retrieved April 29, 2023. https://www.computer.org/csdl/magazine/co/2022/10/09903869/1H0G6xvtREk

  6. "LibGuides: Using AI Language Models : ChatGPT". Archived from the original on December 8, 2023. Retrieved December 7, 2023. https://hallmark.libguides.com/c.php?g=1312147&p=9644939

  7. "Generative AI: a game-changer society needs to be ready for". World Economic Forum. January 9, 2023. Archived from the original on April 25, 2023. Retrieved April 8, 2023. https://www.weforum.org/agenda/2023/01/davos23-generative-ai-a-game-changer-industries-and-society-code-developers/

  8. "The A to Z of Artificial Intelligence". Time. April 13, 2023. Archived from the original on June 16, 2023. Retrieved April 14, 2023. https://time.com/6271657/a-to-z-of-artificial-intelligence/

  9. Toews, Rob. "The Next Generation Of Large Language Models". Forbes. Archived from the original on April 14, 2023. Retrieved April 9, 2023. https://www.forbes.com/sites/robtoews/2023/02/07/the-next-generation-of-large-language-models/

  10. Mckendrick, Joe (March 13, 2023). "Most Jobs Soon To Be 'Influenced' By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests". Forbes. Archived from the original on April 16, 2023. Retrieved April 16, 2023. https://www.forbes.com/sites/joemckendrick/2023/03/26/most-jobs-soon-to-be-influenced-by-artificial-intelligence-research-out-of-openai-and-university-of-pennsylvania-suggests/?sh=420f9c8f73c7

  11. "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on March 18, 2023. Retrieved March 18, 2023. https://openai.com/research/language-unsupervised

  12. "GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared". MUO. April 11, 2023. Archived from the original on April 15, 2023. Retrieved May 3, 2023. https://www.makeuseof.com/gpt-models-explained-and-compared/

  13. "GPT-4". openai.com. Archived from the original on March 14, 2023. Retrieved December 8, 2023. https://openai.com/research/gpt-4

  14. Haddad, Mohammed. "How does GPT-4 work and how can you start using it in ChatGPT?". www.aljazeera.com. Archived from the original on July 5, 2023. Retrieved April 10, 2023. https://www.aljazeera.com/news/2023/3/15/how-do-ai-models-like-gpt-4-work-and-how-can-you-start-using-it

  15. Alford, Anthony (July 13, 2021). "EleutherAI Open-Sources Six Billion Parameter GPT-3 Clone GPT-J". InfoQ. Archived from the original on February 10, 2023. Retrieved April 3, 2023. https://www.infoq.com/news/2021/07/eleutherai-gpt-j/

  16. "News" (Press release). Archived from the original on April 5, 2023. Retrieved April 5, 2023. https://www.businesswire.com/news/home/20230328005366/en/Cerebras-Systems-Releases-Seven-New-GPT-Models-Trained-on-CS-2-Wafer-Scale-Systems

  17. Morrison, Ryan (March 7, 2023). "Salesforce launches EinsteinGPT built with OpenAI technology". Tech Monitor. Archived from the original on April 15, 2023. Retrieved April 10, 2023. https://techmonitor.ai/technology/ai-and-automation/salesforce-einsteingpt-openai-chatgpt

  18. "The ChatGPT of Finance is Here, Bloomberg is Combining AI and Fintech". Forbes. Archived from the original on April 6, 2023. Retrieved April 6, 2023. https://www.forbes.com/sites/jamielsheikh/2023/04/05/the-chatgpt-of-finance-is-here-bloomberg-is-combining-ai-and-fintech/?sh=43b4385e3081

  19. Hinton, Geoffrey; et al. (October 15, 2012). "Deep neural networks for acoustic modeling in speech recognition" (PDF). IEEE Signal Processing Magazine. doi:10.1109/MSP.2012.2205597. S2CID 206485943. Archived (PDF) from the original on March 18, 2023. Retrieved April 27, 2023. http://cs224d.stanford.edu/papers/maas_paper.pdf

  20. Deng, Li (January 22, 2014). "A tutorial survey of architectures, algorithms, and applications for deep learning". Apsipa Transactions on Signal and Information Processing. 3. Cambridge.org: e2. doi:10.1017/atsip.2013.9. S2CID 9928823. https://doi.org/10.1017%2Fatsip.2013.9

  21. Erhan, Dumitru; Courville, Aaron; Bengio, Yoshua; Vincent, Pascal (March 31, 2010). "Why Does Unsupervised Pre-training Help Deep Learning?". Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings: 201–208. Archived from the original on January 24, 2024. Retrieved January 24, 2024. https://proceedings.mlr.press/v9/erhan10a.html

  22. "First-Hand:The Hidden Markov Model – Engineering and Technology History Wiki". ethw.org. January 12, 2015. Archived from the original on April 3, 2018. Retrieved May 1, 2018. http://ethw.org/First-Hand:The_Hidden_Markov_Model

  23. Juang, B. H.; Rabiner, L. R. (1991). "Hidden Markov Models for Speech Recognition". Technometrics. 33 (3): 251–272. doi:10.2307/1268779. ISSN 0040-1706. JSTOR 1268779. Archived from the original on October 8, 2024. Retrieved October 4, 2024. https://www.jstor.org/stable/1268779

  24. Cottrell, Garrison W.; Munro, Paul; Zipser, David (1987). "Learning Internal Representation From Gray-Scale Images: An Example of Extensional Programming". Proceedings of the Annual Meeting of the Cognitive Science Society. 9. Archived from the original on October 7, 2024. Retrieved October 4, 2024. https://escholarship.org/uc/item/2zs7w6z8

  25. Cottrell, Garrison W. (January 1, 1991), Touretzky, David S.; Elman, Jeffrey L.; Sejnowski, Terrence J.; Hinton, Geoffrey E. (eds.), "Extracting features from faces using compression networks: Face, identity, emotion, and gender recognition using holons", Connectionist Models, Morgan Kaufmann, pp. 328–337, ISBN 978-1-4832-1448-1, archived from the original on October 7, 2024, retrieved October 4, 2024

  26. Schmidhuber, Jürgen (1992). "Learning complex, extended sequences using the principle of history compression" (PDF). Neural Computation. 4 (2): 234–242. doi:10.1162/neco.1992.4.2.234. S2CID 18271205. https://gwern.net/doc/ai/nn/rnn/1992-schmidhuber.pdf

  27. Elman, Jeffrey L.; Zipser, David (April 1, 1988). "Learning the hidden structure of speech". The Journal of the Acoustical Society of America. 83 (4): 1615–1626. Bibcode:1988ASAJ...83.1615E. doi:10.1121/1.395916. ISSN 0001-4966. PMID 3372872. Archived from the original on October 7, 2024. Retrieved October 4, 2024. https://pubs.aip.org/jasa/article/83/4/1615/826094/Learning-the-hidden-structure-of-speechLearning

  28. Bourlard, H.; Kamp, Y. (1988). "Auto-association by multilayer perceptrons and singular value decomposition". Biological Cybernetics. 59 (4–5): 291–294. doi:10.1007/BF00332918. PMID 3196773. S2CID 206775335. Archived from the original on June 27, 2021. Retrieved October 4, 2024. http://infoscience.epfl.ch/record/82601

  29. Hinton, Geoffrey E; Zemel, Richard (1993). "Autoencoders, Minimum Description Length and Helmholtz Free Energy". Advances in Neural Information Processing Systems. 6. Morgan-Kaufmann. Archived from the original on August 14, 2024. Retrieved October 4, 2024. https://proceedings.neurips.cc/paper/1993/hash/9e3cfc48eccf81a0d57663e129aef3cb-Abstract.html

  30. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc. Archived (PDF) from the original on February 21, 2024. Retrieved January 28, 2024.

  31. Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (May 24, 2019). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". Association for Computational Linguistics. arXiv:1810.04805.

  32. Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (June 11, 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on January 26, 2021. Retrieved January 23, 2021. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

  33. Radford, Alec; Jozefowicz, Rafal; Sutskever, Ilya (April 6, 2017). "Learning to Generate Reviews and Discovering Sentiment". arXiv:1704.01444 [cs.LG].

  34. Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (June 11, 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on January 26, 2021. Retrieved January 23, 2021. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

  35. Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (June 11, 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on January 26, 2021. Retrieved January 23, 2021. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

  36. Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Ponde de Oliveira Pinto, Henrique; Kaplan, Jared; Edwards, Harri; Burda, Yuri; Joseph, Nicholas; Brockman, Greg; Ray, Alex; Puri, Raul; Krueger, Gretchen; Petrov, Michael; Khlaaf, Heidy (July 1, 2021). "Evaluating Large Language Models Trained on Code". Association for Computational Linguistics. arXiv:2107.03374.

  37. Ouyang, Long; Wu, Jeffrey; Jiang, Xu; Almeida, Diogo; Wainwright, Carroll; Mishkin, Pamela; Zhang, Chong; Agarwal, Sandhini; Slama, Katarina; Ray, Alex; Schulman, John; Hilton, Jacob; Kelton, Fraser; Miller, Luke; Simens, Maddie (December 6, 2022). "Training language models to follow instructions with human feedback". Advances in Neural Information Processing Systems. 35: 27730–27744. arXiv:2203.02155. Archived from the original on June 28, 2023. Retrieved June 24, 2023. https://proceedings.neurips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html

  38. "New GPT-3 capabilities: Edit & insert". openai.com. Archived from the original on June 29, 2023. Retrieved June 24, 2023. https://openai.com/blog/gpt-3-edit-insert

  39. Fu, Yao; Peng, Hao; Khot, Tushar (2022). "How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources". Yao Fu's Notion. Archived from the original on April 19, 2023. Retrieved June 24, 2023. https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1

  40. "Model index for researchers". OpenAI API. Archived from the original on June 23, 2023. Retrieved June 23, 2023. https://platform.openai.com/docs/model-index-for-researchers

  41. Alford, Anthony (July 13, 2021). "EleutherAI Open-Sources Six Billion Parameter GPT-3 Clone GPT-J". InfoQ. Archived from the original on February 10, 2023. Retrieved April 3, 2023. https://www.infoq.com/news/2021/07/eleutherai-gpt-j/

  42. "News" (Press release). Archived from the original on April 5, 2023. Retrieved April 5, 2023. https://www.businesswire.com/news/home/20230328005366/en/Cerebras-Systems-Releases-Seven-New-GPT-Models-Trained-on-CS-2-Wafer-Scale-Systems

  43. "Introducing the Center for Research on Foundation Models (CRFM)". Stanford HAI. August 18, 2021. Archived from the original on June 4, 2023. Retrieved April 26, 2023. https://hai.stanford.edu/news/introducing-center-research-foundation-models-crfm

  44. "Reflections on Foundation Models". hai.stanford.edu. October 18, 2021. Archived from the original on August 15, 2024. Retrieved August 15, 2024. https://hai.stanford.edu/news/reflections-foundation-models

  45. OpenAI (2023). "GPT-4 Technical Report" (PDF). Archived (PDF) from the original on March 14, 2023. Retrieved March 16, 2023. https://cdn.openai.com/papers/gpt-4.pdf

  46. Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. IEEE International Conference on Computer Vision (ICCV) 2015. pp. 19–27. arXiv:1506.06724. Archived from the original on February 5, 2023. Retrieved February 7, 2023. https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Zhu_Aligning_Books_and_ICCV_2015_paper.html

  47. "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on March 18, 2023. Retrieved March 18, 2023. https://openai.com/research/language-unsupervised

  48. "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on March 18, 2023. Retrieved March 18, 2023. https://openai.com/research/language-unsupervised

  49. Vincent, James (November 7, 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on June 11, 2020. Retrieved April 28, 2023. https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters

  50. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". NeurIPS. arXiv:2005.14165v4.

  51. "ML input trends visualization". Epoch. Archived from the original on July 16, 2023. Retrieved May 2, 2023. https://epochai.org/mlinputs/visualization

  52. Ver Meer, Dave (June 1, 2023). "ChatGPT Statistics". NamePepper. Archived from the original on June 5, 2023. Retrieved June 9, 2023. https://www.namepepper.com/chatgpt-users

  53. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". NeurIPS. arXiv:2005.14165v4.

  54. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". NeurIPS. arXiv:2005.14165v4.

  55. "ML input trends visualization". Epoch. Archived from the original on July 16, 2023. Retrieved May 2, 2023. https://epochai.org/mlinputs/visualization

  56. Ver Meer, Dave (June 1, 2023). "ChatGPT Statistics". NamePepper. Archived from the original on June 5, 2023. Retrieved June 9, 2023. https://www.namepepper.com/chatgpt-users

  57. OpenAI (2023). "GPT-4 Technical Report" (PDF). Archived (PDF) from the original on March 14, 2023. Retrieved March 16, 2023. https://cdn.openai.com/papers/gpt-4.pdf

  58. "GPT-4 has more than a trillion parameters – Report". March 25, 2023. Archived from the original on March 4, 2024. Retrieved October 23, 2023. https://the-decoder.com/gpt-4-has-a-trillion-parameters/

  59. "ML input trends visualization". Epoch. Archived from the original on July 16, 2023. Retrieved May 2, 2023. https://epochai.org/mlinputs/visualization

  60. Vincent, James (March 14, 2023). "Google opens up its AI language model PaLM to challenge OpenAI and GPT-3". The Verge. Archived from the original on March 14, 2023. Retrieved April 29, 2023. https://www.theverge.com/2023/3/14/23639313/google-ai-language-model-palm-api-challenge-openai

  61. "Google Opens Access to PaLM Language Model". Archived from the original on May 31, 2023. Retrieved April 29, 2023. https://aibusiness.com/nlp/google-opens-access-to-palm-language-model

  62. Iyer, Aparna (November 30, 2022). "Meet GPT-JT, the Closest Open Source Alternative to GPT-3". Analytics India Magazine. Archived from the original on June 2, 2023. Retrieved April 29, 2023. https://analyticsindiamag.com/meet-gpt-jt-the-closest-open-source-alternative-to-gpt-3/

  63. "Meta Debuts AI Language Model, But It's Only for Researchers". PCMAG. February 24, 2023. Archived from the original on July 19, 2023. Retrieved May 21, 2023. https://www.pcmag.com/news/meta-debuts-ai-language-model-but-its-only-for-researchers

  64. Islam, Arham (March 27, 2023). "Multimodal Language Models: The Future of Artificial Intelligence (AI)". Archived from the original on May 15, 2023. Retrieved May 15, 2023. https://web.archive.org/web/20230515010932/https://www.marktechpost.com/2023/03/27/multimodal-language-models-the-future-of-artificial-intelligence-ai/

  65. Islam, Arham (November 14, 2022). "How Do DALL·E 2, Stable Diffusion, and Midjourney Work?". Archived from the original on July 18, 2023. Retrieved May 21, 2023. https://www.marktechpost.com/2022/11/14/how-do-dall%c2%b7e-2-stable-diffusion-and-midjourney-work/

  66. Saha, Shritama (January 4, 2023). "Google Launches Muse, A New Text-to-Image Transformer Model". Analytics India Magazine. Archived from the original on May 15, 2023. Retrieved May 15, 2023. https://analyticsindiamag.com/google-launches-muse-a-new-text-to-image-transformer-model/

  67. Wu, Chenfei; et al. (March 8, 2023). "Visual ChatGPT". arXiv:2303.04671 [cs.CV].

  68. Bommasani, Rishi; et al. (July 12, 2022). "On the Opportunities and Risks of Foundation Models". arXiv:2108.07258 [cs.LG].

  69. "Aligning language models to follow instructions". openai.com. Archived from the original on March 23, 2023. Retrieved March 23, 2023. https://openai.com/research/instruction-following

  70. Ouyang, Long; Wu, Jeff; Jiang, Xu; et al. (November 4, 2022). "Training language models to follow instructions with human feedback". NeurIPS. arXiv:2203.02155.

  71. Ramnani, Meeta (January 28, 2022). "OpenAI dumps its own GPT-3 for something called InstructGPT, and for right reason". Analytics India Magazine. Archived from the original on June 4, 2023. Retrieved April 29, 2023. https://analyticsindiamag.com/openai-dumps-its-own-gpt-3-for-something-called-instructgpt-and-for-right-reason/

  72. "Stanford CRFM". crfm.stanford.edu. Archived from the original on April 6, 2023. Retrieved May 15, 2023. https://crfm.stanford.edu/2023/03/13/alpaca.html

  73. "Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM". Databricks. April 12, 2023. Archived from the original on July 14, 2023. Retrieved May 15, 2023. https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm

  74. "Introducing ChatGPT". openai.com. Archived from the original on March 16, 2023. Retrieved March 16, 2023. https://openai.com/blog/chatgpt

  75. Wiggers, Kyle (May 4, 2023). "Microsoft doubles down on AI with new Bing features". Archived from the original on December 7, 2023. Retrieved May 4, 2023. https://techcrunch.com/2023/05/04/microsoft-doubles-down-on-ai-with-new-bing-features/

  76. "ChatGPT vs. Bing vs. Google Bard: Which AI Is the Most Helpful?". CNET. Archived from the original on July 24, 2023. Retrieved April 30, 2023. https://www.cnet.com/tech/services-and-software/chatgpt-vs-bing-vs-google-bard-which-ai-is-the-most-helpful/

  77. "Auto-GPT, BabyAGI, and AgentGPT: How to use AI agents". Mashable. April 19, 2023. Archived from the original on July 22, 2023. Retrieved May 15, 2023. https://mashable.com/article/autogpt-ai-agents-how-to-get-access

  78. Marr, Bernard. "Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT". Forbes. Archived from the original on May 21, 2023. Retrieved May 15, 2023. https://www.forbes.com/sites/bernardmarr/2023/04/24/auto-gpt-may-be-the-strong-ai-tool-that-surpasses-chatgpt/

  79. "Microsoft Open-Sources Multimodal Chatbot Visual ChatGPT". InfoQ. Archived from the original on June 3, 2023. Retrieved May 15, 2023. https://www.infoq.com/news/2023/04/microsoft-visual-chatgpt/

  80. Edwards, Benj (January 9, 2023). "Microsoft's new AI can simulate anyone's voice with 3 seconds of audio". Ars Technica. Archived from the original on July 18, 2023. Retrieved May 15, 2023. https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/

  81. Morrison, Ryan (March 7, 2023). "Salesforce launches EinsteinGPT built with OpenAI technology". Archived from the original on April 15, 2023. Retrieved April 10, 2023. https://techmonitor.ai/technology/ai-and-automation/salesforce-einsteingpt-openai-chatgpt

  82. Sharma, Animesh K.; Sharma, Rahul (2023). "The role of generative pretrained transformers (GPTs) in revolutionising digital marketing: A conceptual model". Journal of Cultural Marketing Strategy. 8 (1): 80–90. doi:10.69554/TLVQ2275. https://ideas.repec.org/s/aza/jcms00.html

  83. Leswing, Kif (April 13, 2023). "Bloomberg plans to integrate GPT-style A.I. into its terminal". CNBC. Archived from the original on May 19, 2023. Retrieved May 4, 2023. https://www.cnbc.com/2023/04/13/bloomberg-plans-to-integrate-gpt-style-ai-into-its-terminal.html

  84. Melendez, Steven (May 4, 2023). "Learning nonprofit Khan Academy is piloting a version of GPT called Khanmigo". Fast Company. Archived from the original on May 11, 2023. Retrieved May 22, 2023. https://www.fastcompany.com/90891522/the-learning-nonprofit-khan-academy-piloting-a-version-of-gpt-called-khanmigo

  85. "Khan Academy Pilots GPT-4 Powered Tool Khanmigo for Teachers". THE Journal. Archived from the original on May 7, 2023. Retrieved May 7, 2023. https://thejournal.com/articles/2023/03/14/khan-academy-pilots-gpt-4-powered-tool-khanmigo-for-teachers.aspx

  86. Hachman, Mark (May 4, 2023). "Slack GPT will bring AI chatbots to your conversations". PCWorld. Archived from the original on June 9, 2023. Retrieved May 4, 2023. https://www.pcworld.com/article/1807402/slack-gpt-will-bring-ai-chatbots-to-your-conversations.html

  87. Luo, Renqian; et al. (April 3, 2023). "BioGPT: Generative pre-trained transformer for biomedical text generation and mining". Briefings in Bioinformatics. 23 (6). arXiv:2210.10341. doi:10.1093/bib/bbac409. PMID 36156661.

  88. John, Amy Sarah (May 5, 2023). "Know about ChatGPT's 13 best plugins, designed to improve your overall user experience". Latest Digital Transformation Trends | Cloud News | Wire19. Archived from the original on May 9, 2023. Retrieved May 7, 2023. https://web.archive.org/web/20230509151243/https://wire19.com/chatgpt-plugins/

  89. "ChatGPT plugins". openai.com. March 13, 2024. Archived from the original on March 23, 2023. Retrieved May 7, 2023. https://openai.com/blog/chatgpt-plugins

  90. "How to Use ChatGPT on Google Sheets With GPT for Sheets and Docs". MUO. March 12, 2023. Archived from the original on June 19, 2023. Retrieved May 7, 2023. https://www.makeuseof.com/how-use-chatgpt-google-sheets/

  91. Asay, Matt (February 27, 2023). "Embrace and extend Excel for AI data prep". InfoWorld. Archived from the original on June 2, 2023. Retrieved May 7, 2023. https://www.infoworld.com/article/3689175/embrace-and-extend-excel-for-ai-data-prep.html

  92. Hicks, William (May 10, 2023). "ChatGPT creator OpenAI is asking startups to remove 'GPT' from their names". The Business Journal. Archived from the original on June 28, 2023. Retrieved May 21, 2023. https://www.bizjournals.com/sanfrancisco/inno/stories/news/2023/05/10/openai-startups-gpt.html

  93. OpenAI (April 24, 2023). "Brand Guidelines". Archived from the original on July 18, 2023. Retrieved May 21, 2023. https://openai.com/brand

  94. Hicks, William (May 10, 2023). "ChatGPT creator OpenAI is asking startups to remove 'GPT' from their names". The Business Journal. Archived from the original on June 28, 2023. Retrieved May 21, 2023. https://www.bizjournals.com/sanfrancisco/inno/stories/news/2023/05/10/openai-startups-gpt.html

  95. "Brand guidelines". Archived from the original on July 18, 2023. Retrieved November 28, 2023. https://openai.com/brand#models

  96. "Introducing GPTS". March 13, 2024. Archived from the original on March 20, 2024. Retrieved November 28, 2023. https://openai.com/blog/introducing-gpts

  97. "Brand guidelines". Archived from the original on July 18, 2023. Retrieved November 28, 2023. https://openai.com/brand#models

  98. Hicks, William (May 10, 2023). "ChatGPT creator OpenAI is asking startups to remove 'GPT' from their names". The Business Journal. Archived from the original on June 28, 2023. Retrieved May 21, 2023. https://www.bizjournals.com/sanfrancisco/inno/stories/news/2023/05/10/openai-startups-gpt.html

  99. Heah, Alexa (April 26, 2023). "OpenAI Unsuccessful At Speeding Up Its Attempt To Trademark 'GPT'". DesignTAXI. Archived from the original on April 26, 2023. Retrieved May 21, 2023. https://designtaxi.com/news/423211/OpenAI-Unsuccessful-At-Speeding-Up-Its-Attempt-To-Trademark-GPT/

  100. "NONFINAL OFFICE ACTION". USPTO. May 25, 2023. Archived from the original on December 3, 2023. Retrieved December 30, 2023. https://tsdr.uspto.gov/documentviewer?caseId=sn97733259&docId=NFIN20230525093517#docIndex=4&page=1

  101. "U.S. Trademark Law". December 2015. Archived from the original on January 17, 2024. Retrieved November 29, 2023. https://digital.gov/resources/u-s-trademark-law/

  102. "International Trademark Rights". Archived from the original on March 11, 2024. Retrieved November 29, 2023. https://www.inta.org/fact-sheets/international-trademark-rights/

  103. Heah, Alexa (April 26, 2023). "OpenAI Unsuccessful At Speeding Up Its Attempt To Trademark 'GPT'". DesignTAXI. Archived from the original on April 26, 2023. Retrieved May 21, 2023. https://designtaxi.com/news/423211/OpenAI-Unsuccessful-At-Speeding-Up-Its-Attempt-To-Trademark-GPT/

  104. "OpenAI Wants to Trademark 'GPT' Amid Rise of AI Chatbots". Tech Times. April 25, 2023. Archived from the original on April 25, 2023. Retrieved May 21, 2023. https://www.techtimes.com/articles/290766/20230425/openai-trademark-gpt-chatgpt-rise-ai-chatbots.htm

  105. Louise, Nickie (April 3, 2023). "OpenAI files a UDRP case against the current owner of ChatGPT.com". Archived from the original on June 5, 2023. Retrieved May 21, 2023. https://techstartups.com/2023/04/03/openai-files-a-udrp-case-against-the-current-owner-of-chatgpt-com/

  106. Hicks, William (May 10, 2023). "ChatGPT creator OpenAI is asking startups to remove 'GPT' from their names". The Business Journal. Archived from the original on June 28, 2023. Retrieved May 21, 2023. https://www.bizjournals.com/sanfrancisco/inno/stories/news/2023/05/10/openai-startups-gpt.html

  107. Demcak, Tramatm-Igor (April 26, 2023). "OpenAI's Battle for Brand Protection: Can GPT be trademarked?". Lexology. Archived from the original on May 5, 2023. Retrieved May 22, 2023. https://web.archive.org/web/20230505162827/https://www.lexology.com/library/detail.aspx?g=763049f7-7ef8-4a68-bdb1-2e4fa194b7ad

  108. "The A to Z of Artificial Intelligence". Time. April 13, 2023. Archived from the original on June 16, 2023. Retrieved April 14, 2023. https://time.com/6271657/a-to-z-of-artificial-intelligence/

  109. Lawton, George (April 20, 2023). "ChatGPT vs. GPT: How are they different?". Enterprise AI. TechTarget. Archived from the original on May 9, 2023. Retrieved May 21, 2023. https://web.archive.org/web/20230509150052/https://www.techtarget.com/searchenterpriseai/feature/ChatGPT-vs-GPT-How-are-they-different

  110. Robb, Drew (April 12, 2023). "GPT-4 vs. ChatGPT: AI Chatbot Comparison". eWEEK. Archived from the original on July 27, 2023. Retrieved May 21, 2023. https://www.eweek.com/artificial-intelligence/gpt-4-vs-chatgpt/

  111. Russo, Philip (August 22, 2023). "The Genesis of Generative AI for Everything Everywhere All at Once in CRE". Commercial Observer. Archived from the original on August 24, 2023. https://commercialobserver.com/2023/08/jll-ai-gpt-proptech/

  112. Demcak, Tramatm-Igor (April 26, 2023). "OpenAI's Battle for Brand Protection: Can GPT be trademarked?". Lexology. Archived from the original on May 5, 2023. Retrieved May 22, 2023. https://web.archive.org/web/20230505162827/https://www.lexology.com/library/detail.aspx?g=763049f7-7ef8-4a68-bdb1-2e4fa194b7ad

  113. "Trademark infringement". Archived from the original on November 30, 2023. Retrieved November 29, 2023. https://www.law.cornell.edu/wex/trademark_infringement

  114. Rheintgen, Husch Blackwell LLP-Kathleen A. (August 16, 2013). "Branding 101: trademark descriptive fair use". Lexology. Archived from the original on May 21, 2023. Retrieved May 21, 2023. https://www.lexology.com/library/detail.aspx?g=4f7fc6dd-1d5f-41a1-beac-2638750faa75

  115. "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on March 18, 2023. Retrieved March 18, 2023. https://openai.com/research/language-unsupervised

  116. finetune-transformer-lm, OpenAI, June 11, 2018, archived from the original on May 19, 2023, retrieved May 1, 2023 https://github.com/openai/finetune-transformer-lm

  117. "GPT-2: 1.5B release". openai.com. Archived from the original on March 31, 2023. Retrieved May 1, 2023. https://openai.com/research/gpt-2-1-5b-release

  118. Solaiman, Irene; Brundage, Miles; Clark, Jack; Askell, Amanda; Herbert-Voss, Ariel; Wu, Jeff; Radford, Alec; Krueger, Gretchen; Kim, Jong Wook; Kreps, Sarah; McCain, Miles; Newhouse, Alex; Blazakis, Jason; McGuffie, Kris; Wang, Jasmine (November 12, 2019). "Release Strategies and the Social Impacts of Language Models". arXiv:1908.09203 [cs.CL].

  119. gpt-2, OpenAI, May 1, 2023, archived from the original on March 11, 2023, retrieved May 1, 2023 https://github.com/openai/gpt-2

  120. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". NeurIPS. arXiv:2005.14165v4.

  121. "WebGPT: Improving the factual accuracy of language models through web browsing". openai.com. Archived from the original on June 21, 2023. Retrieved July 2, 2023. https://web.archive.org/web/20230621182942/https://openai.com/research/webgpt

  122. Nakano, Reiichiro; Hilton, Jacob; Balaji, Suchir; Wu, Jeff; Ouyang, Long; Kim, Christina; Hesse, Christopher; Jain, Shantanu; Kosaraju, Vineet; Saunders, William; Jiang, Xu; Cobbe, Karl; Eloundou, Tyna; Krueger, Gretchen; Button, Kevin (December 1, 2021). "WebGPT: Browser-assisted question-answering with human feedback". CoRR. arXiv:2112.09332. Archived from the original on July 2, 2023. Retrieved July 2, 2023.

  123. "Aligning language models to follow instructions". openai.com. Archived from the original on March 23, 2023. Retrieved March 23, 2023. https://openai.com/research/instruction-following

  124. Ouyang, Long; Wu, Jeff; Jiang, Xu; et al. (November 4, 2022). "Training language models to follow instructions with human feedback". NeurIPS. arXiv:2203.02155.

  125. "Introducing ChatGPT". openai.com. Archived from the original on March 16, 2023. Retrieved March 16, 2023. https://openai.com/blog/chatgpt

  126. "GPT-4". openai.com. Archived from the original on March 14, 2023. Retrieved May 1, 2023. https://openai.com/research/gpt-4

  127. OpenAI (March 27, 2023). "GPT-4 Technical Report". arXiv:2303.08774 [cs.CL].

  128. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (April 13, 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].

  129. "GPT-4 System Card" (PDF). OpenAI. March 23, 2023. Archived from the original on April 7, 2023. Retrieved May 22, 2023. https://cdn.openai.com/papers/gpt-4-system-card.pdf

  130. "Hello GPT-4o". OpenAI. May 13, 2024. Archived from the original on May 14, 2024. Retrieved August 8, 2024. https://openai.com/index/hello-gpt-4o/

  131. "Introducing GPT-4.5". OpenAI. February 27, 2025. Archived from the original on March 19, 2025. Retrieved March 18, 2025. https://openai.com/index/introducing-gpt-4-5

  132. "Introducing GPT-4.1 in the API". OpenAI. April 14, 2025. Archived from the original on May 17, 2025. Retrieved April 14, 2025. https://openai.com/index/gpt-4-1/