The following tables compare notable software frameworks, libraries, and computer programs for deep learning applications.
Deep learning software by name
Software | Creator | Initial release | Software license[1] | Open source | Platform | Written in | Interface | OpenMP support | OpenCL support | CUDA support | ROCm support[2] | Automatic differentiation[3] | Has pretrained models | Recurrent nets | Convolutional nets | RBM/DBNs | Parallel execution (multi-node) | Actively developed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BigDL | Jason Dai (Intel) | 2016 | Apache 2.0 | Yes | Apache Spark | Scala | Scala, Python | No | No | Yes | Yes | Yes | Yes | |||||
Caffe | Berkeley Vision and Learning Center | 2013 | BSD | Yes | Linux, macOS, Windows[4] | C++ | Python, MATLAB, C++ | Yes | Under development[5] | Yes | No | Yes | Yes[6] | Yes | Yes | No | ? | No[7] |
Chainer | Preferred Networks | 2015 | BSD | Yes | Linux, macOS | Python | Python | No | No | Yes | No | Yes | Yes | Yes | Yes | No | Yes | No[8] |
Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson | 2014 | Apache 2.0 | Yes | Linux, macOS, Windows, Android (Cross-platform) | C++, Java | Java, Scala, Clojure, Python (Keras), Kotlin | Yes | No[9] | Yes[10][11] | No | Computational Graph | Yes[12] | Yes | Yes | Yes | Yes[13] | Yes |
Dlib | Davis King | 2002 | Boost Software License | Yes | Cross-platform | C++ | C++, Python | Yes | No | Yes | No | Yes | Yes | No | Yes | Yes | Yes | Yes |
Flux | Mike Innes | 2017 | MIT license | Yes | Linux, macOS, Windows (Cross-platform) | Julia | Julia | Yes | No | Yes | Yes[14] | Yes | Yes | No | Yes | Yes | | |
Intel Data Analytics Acceleration Library | Intel | 2015 | Apache License 2.0 | Yes | Linux, macOS, Windows on Intel CPU[15] | C++, Python, Java | C++, Python, Java[16] | Yes | No | No | No | Yes | No | Yes | Yes | Yes | | |
Intel Math Kernel Library 2017[17] and later | Intel | 2017 | Proprietary | No | Linux, macOS, Windows on Intel CPU[18] | C/C++, DPC++, Fortran | C[19] | Yes[20] | No | No | No | Yes | No | Yes[21] | Yes[22] | No | Yes |
Google JAX | Google | 2018 | Apache License 2.0 | Yes | Linux, macOS, Windows | Python | Python | Only on Linux | No | Yes | No | Yes | Yes | | | | | |
Keras | François Chollet | 2015 | MIT license | Yes | Linux, macOS, Windows | Python | Python, R | Only if using Theano as backend | Can use Theano, TensorFlow or PlaidML as backends | Yes | No | Yes | Yes[23] | Yes | Yes | No[24] | Yes[25] | Yes |
MATLAB + Deep Learning Toolbox (formerly Neural Network Toolbox) | MathWorks | 1992 | Proprietary | No | Linux, macOS, Windows | C, C++, Java, MATLAB | MATLAB | No | No | Train with Parallel Computing Toolbox and generate CUDA code with GPU Coder[26] | No | Yes[27] | Yes[28][29] | Yes[30] | Yes[31] | Yes | With Parallel Computing Toolbox[32] | Yes |
Microsoft Cognitive Toolkit (CNTK) | Microsoft Research | 2016 | MIT license[33] | Yes | Windows, Linux[34] (macOS via Docker on roadmap) | C++ | Python (Keras), C++, Command line,[35] BrainScript[36] (.NET on roadmap[37]) | Yes[38] | No | Yes | No | Yes | Yes[39] | Yes[40] | Yes[41] | No[42] | Yes[43] | No[44] |
ML.NET | Microsoft | 2018 | MIT license | Yes | Windows, Linux, macOS | C#, C++ | C#, F# | Yes | ||||||||||
Apache MXNet | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[45][46] AWS, Android,[47] iOS, JavaScript[48] | Small C++ core library | C++, Python, Julia, MATLAB, JavaScript, Go, R, Scala, Perl, Clojure | Yes | No | Yes | No | Yes[49] | Yes[50] | Yes | Yes | Yes | Yes[51] | No |
Neural Designer | Artelnics | 2014 | Proprietary | No | Linux, macOS, Windows | C++ | Graphical user interface | Yes | No | Yes | No | Analytical differentiation | No | No | No | No | Yes | Yes |
OpenNN | Artelnics | 2003 | GNU LGPL | Yes | Cross-platform | C++ | C++ | Yes | No | Yes | No | ? | ? | No | No | No | ? | Yes |
PlaidML | Vertex.AI, Intel | 2017 | Apache 2.0 | Yes | Linux, macOS, Windows | Python, C++, OpenCL | Python, C++ | ? | Some OpenCL ICDs are not recognized | No | No | Yes | Yes | Yes | Yes | Yes | Yes | |
PyTorch | Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan (Facebook) | 2016 | BSD | Yes | Linux, macOS, Windows, Android[52] | Python, C, C++, CUDA | Python, C++, Julia, R[53] | Yes | Via separately maintained package[54][55][56] | Yes | Yes | Yes | Yes | Yes | Yes | Yes[57] | Yes | Yes |
Apache SINGA | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows | C++ | Python, C++, Java | No | Supported in V1.0 | Yes | No | ? | Yes | Yes | Yes | Yes | Yes | Yes |
TensorFlow | Google Brain | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[58][59] Android | C++, Python, CUDA | Python (Keras), C/C++, Java, Go, JavaScript, R,[60] Julia, Swift | No | On roadmap[61] but already with SYCL[62] support | Yes | Yes | Yes[63] | Yes[64] | Yes | Yes | Yes | Yes | Yes |
Theano | Université de Montréal | 2007 | BSD | Yes | Cross-platform | Python | Python (Keras) | Yes | Under development[65] | Yes | No | Yes[66][67] | Through Lasagne's model zoo[68] | Yes | Yes | Yes | Yes[69] | No |
Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet | 2002 | BSD | Yes | Linux, macOS, Windows,[70] Android,[71] iOS | C, Lua | Lua, LuaJIT,[72] C, utility library for C++/OpenCL[73] | Yes | Third-party implementations[74][75] | Yes[76][77] | No | Through Twitter's Autograd[78] | Yes[79] | Yes | Yes | Yes | Yes[80] | No |
Wolfram Mathematica 10[81] and later | Wolfram Research | 2014 | Proprietary | No | Windows, macOS, Linux, Cloud computing | C++, Wolfram Language, CUDA | Wolfram Language | Yes | No | Yes | No | Yes | Yes[82] | Yes | Yes | Yes | Yes[83] | Yes |
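For readers unfamiliar with the "Automatic differentiation" column, the sketch below shows what reverse-mode automatic differentiation looks like in practice. It uses PyTorch (listed above with autodiff support) purely as a minimal illustration; the tensor values and the function being differentiated are arbitrary examples, not drawn from the table.

```python
# Minimal sketch of reverse-mode automatic differentiation ("autograd") in PyTorch.
# The values and the function y = sum(x^2) are arbitrary illustrative choices.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)  # track gradients with respect to x
y = (x ** 2).sum()                                      # forward pass: y = x1^2 + x2^2 + x3^2
y.backward()                                            # reverse-mode pass computes dy/dx automatically
print(x.grad)                                           # tensor([2., 4., 6.]), i.e. dy/dx = 2x
```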
Comparison of machine learning model compatibility
Format name | Design goal | Compatible with other formats | Self-contained DNN Model | Pre-processing and Post-processing | Run-time configuration for tuning & calibration | DNN model interconnect | Common platform |
---|---|---|---|---|---|---|---|
TensorFlow, Keras, Caffe, Torch | Algorithm training | No | No / Separate files in most formats | No | No | No | Yes |
ONNX | Algorithm training | Yes | No / Separate files in most formats | No | No | No | Yes |
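As a rough illustration of the interchange role ONNX plays in the table above, the sketch below exports a toy PyTorch model to an ONNX file that other ONNX-compatible tools can then load. The model architecture, input shape, and file name are illustrative assumptions only, not taken from any cited source.

```python
# Hedged sketch: exporting a toy PyTorch model to the ONNX interchange format.
# The architecture, input shape, and output file name are arbitrary examples.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # toy classifier
model.eval()                                              # export in inference mode
dummy_input = torch.randn(1, 4)                           # example input that fixes the graph's shape
torch.onnx.export(model, dummy_input, "toy_model.onnx")   # writes a self-contained ONNX graph
# The resulting file can be opened by other ONNX-aware runtimes (e.g. ONNX Runtime).
```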
See also
- Comparison of numerical-analysis software
- Comparison of statistical packages
- Comparison of cognitive architectures
- List of datasets for machine-learning research
- List of numerical-analysis software
References
Licenses here are a summary, and are not taken to be complete statements of the licenses. Some libraries may use other libraries internally under different licenses ↩
"Deep Learning — ROCm 4.5.0 documentation". Archived from the original on 2022-12-05. Retrieved 2022-09-27. https://web.archive.org/web/20221205102733/https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learning.html ↩
Atilim Gunes Baydin; Barak A. Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind (20 February 2015). "Automatic differentiation in machine learning: a survey". arXiv:1502.05767 [cs.LG]. ↩
"Microsoft/caffe". GitHub. 30 October 2021. https://github.com/Microsoft/caffe ↩
"Caffe: a fast open framework for deep learning". July 19, 2019 – via GitHub. https://github.com/BVLC/caffe ↩
"Caffe | Model Zoo". caffe.berkeleyvision.org. http://caffe.berkeleyvision.org/model_zoo.html ↩
"GitHub - BVLC/caffe: Caffe: a fast open framework for deep learning". Berkeley Vision and Learning Center. 2019-09-25. Retrieved 2019-09-25. https://github.com/BVLC/caffe ↩
Preferred Networks Migrates its Deep Learning Research Platform to PyTorch, 2019-12-05, retrieved 2019-12-27 https://preferred.jp/en/news/pr20191205/ ↩
"Support for Open CL · Issue #27 · deeplearning4j/nd4j". GitHub. https://github.com/deeplearning4j/nd4j/issues/27 ↩
"N-Dimensional Scientific Computing for Java". Archived from the original on 2016-10-16. Retrieved 2016-02-05. https://web.archive.org/web/20161016094035/http://nd4j.org/gpu_native_backends.html ↩
"Comparing Top Deep Learning Frameworks". Deeplearning4j. Archived from the original on 2017-11-07. Retrieved 2017-10-31. https://web.archive.org/web/20171107011631/https://deeplearning4j.org/compare-dl4j-tensorflow-pytorch ↩
Chris Nicholson; Adam Gibson. "Deeplearning4j Models". Archived from the original on 2017-02-11. Retrieved 2016-03-02. https://web.archive.org/web/20170211020819/https://deeplearning4j.org/model-zoo ↩
Deeplearning4j. "Deeplearning4j on Spark". Deeplearning4j. Archived from the original on 2017-07-13. Retrieved 2016-09-01.{{cite web}}: CS1 maint: numeric names: authors list (link) https://web.archive.org/web/20170713012632/https://deeplearning4j.org/spark ↩
"Metalhead". FluxML. 29 October 2021. https://github.com/FluxML/Metalhead.jl ↩
"Intel® Data Analytics Acceleration Library (Intel® DAAL)". software.intel.com. November 20, 2018. https://software.intel.com/en-us/intel-daal ↩
"Intel® Data Analytics Acceleration Library (Intel® DAAL)". software.intel.com. November 20, 2018. https://software.intel.com/en-us/intel-daal ↩
"Intel® Math Kernel Library Release Notes and New Features". Intel. https://www.intel.com/content/www/us/en/developer/articles/release-notes/intel-math-kernel-library-release-notes-and-new-features.html ↩
"Intel® Math Kernel Library (Intel® MKL)". software.intel.com. September 11, 2018. https://software.intel.com/en-us/mkl ↩
"Deep Neural Network Functions". software.intel.com. May 24, 2019. https://software.intel.com/en-us/mkl-developer-reference-c-deep-neural-network-functions ↩
"Using Intel® MKL with Threaded Applications". software.intel.com. June 1, 2017. https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-using-intel-mkl-with-threaded-applications ↩
"Intel® Xeon Phi™ Delivers Competitive Performance For Deep Learning—And Getting Better Fast". software.intel.com. March 21, 2019. https://software.intel.com/en-us/articles/intel-xeon-phi-delivers-competitive-performance-for-deep-learning-and-getting-better-fast ↩
"Intel® Xeon Phi™ Delivers Competitive Performance For Deep Learning—And Getting Better Fast". software.intel.com. March 21, 2019. https://software.intel.com/en-us/articles/intel-xeon-phi-delivers-competitive-performance-for-deep-learning-and-getting-better-fast ↩
"Applications - Keras Documentation". keras.io. https://keras.io/applications/ ↩
"Is there RBM in Keras? · Issue #461 · keras-team/keras". GitHub. https://github.com/keras-team/keras/issues/461 ↩
"Does Keras support using multiple GPUs? · Issue #2436 · keras-team/keras". GitHub. https://github.com/keras-team/keras/issues/2436 ↩
"GPU Coder - MATLAB & Simulink". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/products/gpu-coder.html ↩
"Automatic Differentiation Background - MATLAB & Simulink". MathWorks. September 3, 2019. Retrieved November 19, 2019. https://www.mathworks.com/help/deeplearning/ug/deep-learning-with-automatic-differentiation-in-matlab.html ↩
"Neural Network Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/products/neural-network.html ↩
"Deep Learning Models - MATLAB & Simulink". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/solutions/deep-learning/models.html ↩
"Neural Network Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/products/neural-network.html ↩
"Neural Network Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/products/neural-network.html ↩
"Parallel Computing Toolbox - MATLAB". MathWorks. Retrieved 13 November 2017. https://www.mathworks.com/products/parallel-computing.html ↩
"CNTK/LICENSE.md at master · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/blob/master/LICENSE.md ↩
"Setup CNTK on your machine". GitHub. https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-your-machine ↩
"CNTK usage overview". GitHub. https://github.com/Microsoft/CNTK/wiki/CNTK-usage-overview ↩
"BrainScript Network Builder". GitHub. https://github.com/Microsoft/CNTK/wiki/BrainScript-Network-Builder ↩
".NET Support · Issue #960 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/960 ↩
"How to train a model using multiple machines? · Issue #59 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/59#issuecomment-178104505 ↩
"Prebuilt models for image classification · Issue #140 · microsoft/CNTK". GitHub. https://github.com/microsoft/CNTK/issues/140 ↩
"CNTK - Computational Network Toolkit". Microsoft Corporation. http://www.cntk.ai/ ↩
"CNTK - Computational Network Toolkit". Microsoft Corporation. http://www.cntk.ai/ ↩
"Restricted Boltzmann Machine with CNTK #534". GitHub, Inc. 27 May 2016. Retrieved 30 October 2023. https://github.com/Microsoft/CNTK/issues/534 ↩
"Multiple GPUs and machines". Microsoft Corporation. https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines ↩
"Disclaimer". CNTK TEAM. 6 November 2021. https://github.com/Microsoft/CNTK#disclaimer ↩
"Releases · dmlc/mxnet". Github. https://github.com/dmlc/mxnet/releases ↩
"Installation Guide — mxnet documentation". Readthdocs. https://mxnet.readthedocs.io/en/latest/how_to/build.html#building-on-windows ↩
"MXNet Smart Device". ReadTheDocs. Archived from the original on 2016-09-21. Retrieved 2016-05-19. https://web.archive.org/web/20160921205959/http://mxnet.readthedocs.io/en/latest/how_to/smart_device.html ↩
"MXNet.js". Github. 28 October 2021. https://github.com/dmlc/mxnet.js ↩
"— Redirecting to mxnet.io". mxnet.readthedocs.io. https://mxnet.readthedocs.io/en/latest/ ↩
"Model Gallery". GitHub. 29 October 2022. https://github.com/dmlc/mxnet-model-gallery ↩
"Run MXNet on Multiple CPU/GPUs with Data Parallel". GitHub. https://mxnet.readthedocs.io/en/latest/how_to/multi_devices.html ↩
"PyTorch". Dec 17, 2021. https://pytorch.org/mobile/android/ ↩
"Falbel D, Luraschi J (2023). torch: Tensors and Neural Networks with 'GPU' Acceleration". torch.mlverse.org. Retrieved 2023-11-28. https://torch.mlverse.org/ ↩
"OpenCL build of pytorch: (in-progress, not useable) - hughperkins/pytorch-coriander". July 14, 2019 – via GitHub. https://github.com/hughperkins/pytorch-coriander ↩
"DLPrimitives/OpenCL out of tree backend for pytorch - artyom-beilis/pytorch_dlprim". Jan 21, 2022 – via GitHub. https://github.com/artyom-beilis/pytorch_dlprim ↩
"OpenCL Support · Issue #488 · pytorch/pytorch". GitHub. https://github.com/pytorch/pytorch/issues/488 ↩
"Restricted Boltzmann Machines (RBMs) in PyTorch". GitHub. 14 November 2022. https://github.com/GabrielBianconi/pytorch-rbm/blob/master/rbm.py ↩
"Install TensorFlow with pip". https://www.tensorflow.org/install/pip ↩
"TensorFlow 0.12 adds support for Windows". https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html ↩
Allaire, J.J.; Kalinowski, T.; Falbel, D.; Eddelbuettel, D.; Yuan, T.; Golding, N. (28 September 2023). "tensorflow: R Interface to 'TensorFlow'". The Comprehensive R Archive Network. Retrieved 30 October 2023. https://cran.r-project.org/web/packages/tensorflow/ ↩
"tensorflow/roadmap.md at master". GitHub. January 23, 2017. Retrieved May 21, 2017. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/about/roadmap.md ↩
"OpenCL support". GitHub. https://github.com/tensorflow/tensorflow/issues/22 ↩
"TensorFlow". TensorFlow. https://www.tensorflow.org/ ↩
"Models and examples built with TensorFlow". July 19, 2019 – via GitHub. https://github.com/tensorflow/models ↩
"Using the GPU: Theano 0.8.2 documentation". Archived from the original on 2017-04-01. Retrieved 2016-01-21. https://web.archive.org/web/20170401163303/http://deeplearning.net/software/theano/tutorial/using_gpu.html ↩
"gradient – Symbolic Differentiation — Theano 1.0.0 documentation". deeplearning.net. http://deeplearning.net/software/theano/library/gradient.html ↩
"Automatic vs. Symbolic differentiation". https://groups.google.com/d/msg/theano-users/mln5g2IuBSU/gespG36Lf_QJ ↩
"Recipes/modelzoo at master · Lasagne/Recipes". GitHub. https://github.com/Lasagne/Recipes/tree/master/modelzoo ↩
"Using multiple GPUs — Theano 1.0.0 documentation". deeplearning.net. http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html ↩
"torch/torch7". July 18, 2019 – via GitHub. https://github.com/torch/torch7 ↩
"GitHub - soumith/torch-android: Torch-7 for Android". GitHub. 13 October 2021. https://github.com/soumith/torch-android ↩
"Torch7: A MATLAB-like Environment for Machine Learning" (PDF). http://ronan.collobert.com/pub/matos/2011_torch7_nipsw.pdf ↩
"GitHub - jonathantompson/jtorch: An OpenCL Torch Utility Library". GitHub. 18 November 2020. https://github.com/jonathantompson/jtorch ↩
"Cheatsheet". GitHub. https://github.com/torch/torch7/wiki/Cheatsheet#opencl ↩
"cltorch". GitHub. https://github.com/hughperkins/distro-cl ↩
"Torch CUDA backend". GitHub. https://github.com/torch/cutorch ↩
"Torch CUDA backend for nn". GitHub. https://github.com/torch/cunn ↩
"Autograd automatically differentiates native Torch code: twitter/torch-autograd". July 9, 2019 – via GitHub. https://github.com/twitter/torch-autograd ↩
"ModelZoo". GitHub. https://github.com/torch/torch7/wiki/ModelZoo ↩
"torch/torch7". July 18, 2019 – via GitHub. https://github.com/torch/torch7 ↩
"Launching Mathematica 10". Wolfram. https://blog.wolfram.com/2014/07/09/launching-mathematica-10-with-700-new-functions-and-a-crazy-amount-of-rd ↩
"Wolfram Neural Net Repository of Neural Network Models". resources.wolframcloud.com. http://resources.wolframcloud.com/NeuralNetRepository ↩
"Parallel Computing—Wolfram Language Documentation". reference.wolfram.com. https://reference.wolfram.com/language/guide/ParallelComputing.html.en ↩