Open-source artificial intelligence refers to AI systems that are freely available to use, study, modify, and share. These freedoms extend to each of the system's components, including datasets, source code, and model parameters, promoting a collaborative and transparent approach to AI development. Free and open-source software (FOSS) licenses, such as the Apache License, the MIT License, and the GNU General Public License, specify the terms under which open-source AI can be accessed, modified, and redistributed.
The open-source model provides widespread access to new AI technologies, allowing individuals and organizations of all sizes to participate in AI research and development. This approach supports collaboration and enables shared advances within the field of artificial intelligence. In contrast, closed-source artificial intelligence is proprietary: access to the source code and internal components is restricted, and only the owning company or organization can modify or distribute the system. Closed-source development prioritizes control and the protection of intellectual property over external contributions and transparency, and companies often choose it to maintain a competitive advantage in the marketplace. However, some experts suggest that open-source AI tools may have a development advantage over closed-source products and have the potential to overtake them in the marketplace.
Popular categories of open-source artificial intelligence projects include large language models, machine translation tools, and chatbots. To produce open-source AI resources, software developers must trust the various other open-source software components used in their development. Open-source AI has been speculated to carry increased risk compared to closed-source AI, since bad actors may freely remove the safeguards of publicly released models. Conversely, closed-source AI has also been speculated to carry increased risk relative to open-source AI because of dependence on a single vendor, privacy concerns, opaque algorithms, corporate control, and limited availability, all of which may slow beneficial innovation.
There is also debate about how open purportedly open AI systems really are, since openness comes in degrees: an article in Nature suggests that some systems presented as open, such as Meta's Llama 3, "offer little more than an API or the ability to download a model subject to distinctly non-open use restrictions". Such software has been criticized as "openwashing" systems that are better understood as closed. Several works and frameworks assess the openness of AI systems, and the Open Source Initiative has published a definition of what constitutes open-source AI.