Neural processing unit
Hardware acceleration unit for artificial intelligence tasks

A neural processing unit (NPU), also known as AI accelerator or deep learning processor, is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence (AI) and machine learning applications, including artificial neural networks and computer vision.

They can be used either to efficiently execute already trained AI models (inference) or to train AI models. Typical applications include algorithms for robotics, the Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.

AI accelerators are used in mobile devices such as Apple iPhones and Huawei cellphones, and personal computers such as Intel laptops, AMD laptops and Apple silicon Macs. Accelerators are used in cloud computing servers, including tensor processing units (TPU) in Google Cloud Platform and Trainium and Inferentia chips in Amazon Web Services. Many vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference.

History

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

Early attempts such as Intel's ETANN 80170NX incorporated analog circuits to compute neural functions.13

Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software.14

By 1988, Wei Zhang et al. had discussed fast optical implementations of convolutional neural networks for alphabet recognition.1516

In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.1718

FPGA-based accelerators were also first explored in the 1990s for both inference and training.1920

In 2014, Chen et al. proposed DianNao (Chinese for "electric brain"),21 an accelerator designed especially for deep neural networks. DianNao provides 452 Gop/s of peak performance (on key deep neural network operations) in a footprint of 3.02 mm2 and 485 mW. Its successors (DaDianNao,22 ShiDianNao,23 PuDianNao24) were later proposed by the same group, forming the DianNao family.25

Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015.2627

Heterogeneous computing

Main article: Heterogeneous computing

Heterogeneous computing incorporates many specialized processors in a single system, or on a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor28 have features significantly overlapping with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks,293031 including AI.323334

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.35 Due to their increasing performance, CPUs are also used for running AI workloads. CPUs are superior for DNNs with small or medium-scale parallelism, for sparse DNNs, and in low-batch-size scenarios.

Use of GPUs

Graphics processing units or GPUs are specialized hardware for the manipulation of images and the calculation of local image properties. Neural networks and image manipulation share a similar mathematical basis: both are embarrassingly parallel tasks involving matrices. This led GPUs to become increasingly used for machine learning tasks.3637
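
The kinship between the two workloads can be made concrete with a short sketch. The following example (illustrative only; it assumes NumPy, and the im2col helper is a hypothetical name rather than code from the cited works) expresses a small 2D convolution, the core operation of both image filtering and convolutional neural networks, as a single dense matrix product, which is exactly the kind of operation GPUs accelerate.

```python
import numpy as np

def im2col(image, k):
    """Unroll k x k patches of a 2D image into the rows of a matrix."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.array(rows)

# A 2D convolution (used in both image filtering and CNN layers) expressed
# as one dense matrix-vector product, an embarrassingly parallel operation.
image  = np.random.rand(32, 32).astype(np.float32)
kernel = np.random.rand(3, 3).astype(np.float32)

patches     = im2col(image, 3)            # (900, 9) matrix of unrolled patches
result      = patches @ kernel.ravel()    # a single matrix-vector multiply
feature_map = result.reshape(30, 30)      # same result as a direct convolution
```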

In 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet,38 which won the ILSVRC-2012 competition. During the 2010s, GPU manufacturers such as Nvidia added deep learning-related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library).

Over the 2010s, GPUs continued to evolve in a direction that facilitates deep learning, both for training and for inference in devices such as self-driving cars.3940 GPU developers such as Nvidia are developing additional connective capability, such as NVLink, for the kind of dataflow workloads AI benefits from. As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural network-specific hardware to further accelerate these tasks.4142 Tensor cores are intended to speed up the training of neural networks.43

GPUs continue to be used in large-scale AI applications. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory,44 contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms.

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks, and software alongside each other.45464748

Microsoft has used FPGA chips to accelerate inference for real-time deep learning services.49

Use of NPUs

Neural processing units (NPUs) are another, more native approach. Since 2017, several CPUs and SoCs have included on-die NPUs: examples include the Apple A11 and Intel Meteor Lake.

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a further factor of up to 10 in efficiency5051 may be gained with a more specific design, via an application-specific integrated circuit (ASIC).52 These accelerators employ strategies such as optimized memory use and lower-precision arithmetic to accelerate calculation and increase throughput of computation.5354 Low-precision floating-point formats used for AI acceleration include half-precision and the bfloat16 floating-point format.5556 Cerebras Systems has built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.5758 Amazon Web Services NeuronCores are heterogeneous compute units that power the Trainium, Trainium2, Inferentia, and Inferentia2 chips; each consists of four main engines (Tensor, Vector, Scalar, and GPSIMD) with on-chip software-managed SRAM to manage data locality and data prefetch.59
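
As a concrete illustration of these low-precision formats, the following sketch (an illustrative example assuming NumPy, not vendor code) emulates bfloat16 by truncating float32 values: the 8-bit exponent is kept, so dynamic range is preserved, while the mantissa is reduced to 7 bits.

```python
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to bfloat16 precision (round-toward-zero).

    bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits, so it
    preserves dynamic range while giving up precision, which is the trade-off
    accelerators exploit for higher throughput.
    """
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 1e-8, 65504.0, 1e38], dtype=np.float32)
print(to_bfloat16(x))   # magnitudes survive, low-order digits are lost
```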

Ongoing research

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems.60 In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.61 The system is based on phase-change memory arrays.62

In-memory computing with analog resistive memories

In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their approach is based on in-memory computing with analog resistive memories, which performs matrix–vector multiplication in one step using Ohm's law and Kirchhoff's law, achieving high time and energy efficiency. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in just one step. Such an approach improves computational times drastically in comparison with digital algorithms.63
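
The underlying principle can be modeled numerically. In an idealized crossbar, Ohm's law gives each cell's current and Kirchhoff's current law sums those currents along each column, so reading out the column currents yields a complete matrix-vector product in a single step. The sketch below is a minimal digital model of that behavior (illustrative values only, assuming NumPy and ignoring device non-idealities); it is not the Politecnico di Milano feedback circuit itself.

```python
import numpy as np

# Idealized resistive crossbar: each cell stores a conductance G[i, j].
# Applying voltages V[i] to the rows produces, by Ohm's law (I = G*V per cell)
# and Kirchhoff's current law (currents sum along each column), the column
# currents I[j] = sum_i G[i, j] * V[i], i.e. a matrix-vector multiplication
# completed in one analog step.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])      # conductances in siemens (3 rows x 2 columns)
V = np.array([0.1, 0.2, 0.3])   # row voltages in volts

I = G.T @ V                     # column currents the circuit would read out
print(I)                        # digital model of the one-step analog result
```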

Atomically thin semiconductors

In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).64 Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, where the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements.65

Integrated photonic tensor core

In 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition.6667 In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.68 The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.69 Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.70 Optical processors that can also perform backpropagation for artificial neural networks have been experimentally developed.71

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past, when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",72 as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

All models of Intel Meteor Lake processors have a Versatile Processor Unit (VPU) built-in for accelerating inference for computer vision and deep learning.73

Deep learning processors (DLPs)

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016, three sessions (15% of the accepted papers) focused on architecture designs for deep learning. Such efforts include Eyeriss (MIT),74 EIE (Stanford),75 Minerva (Harvard),76 and Stripes (University of Toronto)77 in academia, and the TPU (Google)78 and MLU (Cambricon)79 in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs
Year | DLPs | Institution | Type | Computation | Memory Hierarchy | Control | Peak Performance
2014 | DianNao80 | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit)
2014 | DaDianNao81 | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit)
2015 | ShiDianNao82 | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit)
2015 | PuDianNao83 | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit)
2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | -
2016 | EIE84 | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit)
2016 | Eyeriss85 | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit)
2016 | Prime86 | UCSB | hybrid | Process-in-Memory | ReRAM | - | -
2016 | Orlando87 | STMicroelectronics | digital | convolution accelerator + DSP | scratchpad | RISC | 676 Gops (16-bit)
2017 | TPU88 | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit)
2017 | PipeLayer89 | U of Pittsburgh | hybrid | Process-in-Memory | ReRAM | - | -
2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops
2017 | DNPU90 | KAIST | digital | scalar MACs | scratchpad | - | 300 Gops (16-bit); 1,200 Gops (4-bit)
2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | -
2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit)
2018 | UNPU91 | KAIST | digital | scalar MACs | scratchpad | - | 345.6 Gops (16-bit); 691.2 Gops (8-bit); 1,382 Gops (4-bit); 7,372 Gops (1-bit)
2019 | FPSA | Tsinghua | hybrid | Process-in-Memory | ReRAM | - | -
2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit); 956 Tops (F100, 16-bit)

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computing flows.

Regarding the computation component: since most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is the MAC-based (multiply-accumulate) organization, either with vector MACs929394 or scalar MACs.959697 Rather than SIMD or SIMT as in general-purpose processors, deep-learning-specific parallelism is better exploited with these MAC-based organizations.

Regarding the memory hierarchy: because deep learning algorithms require high bandwidth to supply the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) together with dedicated on-chip data reuse and data exchange strategies to ease the memory bandwidth burden. For example, DianNao, with 16 16-input vector MACs, requires 16 × 16 × 2 = 512 16-bit values per cycle, i.e., nearly 1024 GB/s of bandwidth between the computation components and buffers; with on-chip reuse, such bandwidth requirements are reduced drastically.98 Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, as it provides higher data reuse opportunities by leveraging the relatively regular data access patterns of deep learning algorithms.

Regarding the control logic: as deep learning algorithms keep evolving at a dramatic speed, DLPs have started to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set in which each instruction could finish a layer in a DNN. Cambricon99 introduced the first deep-learning domain-specific ISA, which could support more than ten different deep learning algorithms. The TPU also reveals five key instructions from its CISC-style ISA.
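
The bandwidth arithmetic above can be checked with a back-of-the-envelope calculation. The sketch below is a minimal illustration; the 1 GHz clock is an assumed round figure used for the estimate, not a number taken from the DianNao paper.

```python
# Back-of-the-envelope bandwidth requirement for a DianNao-style unit:
# 16 vector MACs, each consuming a 16-element input vector and a
# 16-element weight vector of 16-bit values every cycle.
mac_units    = 16
vector_width = 16
operands     = mac_units * vector_width * 2   # inputs + weights = 512 values/cycle
bytes_per_op = 2                              # 16-bit data
clock_hz     = 1e9                            # assumed, illustrative 1 GHz clock

bandwidth = operands * bytes_per_op * clock_hz            # bytes per second
print(f"{bandwidth / 1e9:.0f} GB/s without on-chip reuse")  # ~1024 GB/s
```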

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue100101102 (such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements); 2) building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Labs demonstrated the astonishing capability of adopting a ReRAM crossbar structure for computing.103 Inspired by this work, a tremendous amount of work has been proposed to explore new architectures and system designs based on ReRAM,104105106107 phase-change memory,108109110 etc.

Benchmarks

Benchmarks such as MLPerf and others may be used to evaluate the performance of AI accelerators.111 Table 2 lists several typical benchmarks for AI accelerators.

Table 2. Benchmarks.
Year | NN Benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks
2012 | BenchNN | ICT, CAS | N/A | 12 | N/A
2016 | Fathom | Harvard | N/A | 8 | N/A
2017 | BenchIP | ICT, CAS | 12 | 11 | N/A
2017 | DAWNBench | Stanford | 8 | N/A | N/A
2017 | DeepBench | Baidu | 4 | N/A | N/A
2018 | AI Benchmark | ETH Zurich | N/A | 26 | N/A
2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A
2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2
2019 | NNBench-X | UCSB | N/A | 10 | N/A

Potential applications

See also

References

  1. "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017. https://web.archive.org/web/20170811193632/https://www.v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator

  2. "Inspurs unveils GX4 AI Accelerator". June 21, 2017. https://insidehpc.com/2017/06/inspurs-unveils-gx4-ai-accelerator/

  3. Wiggers, Kyle (November 6, 2019) [2019], Neural Magic raises $15 million to boost AI inferencing speed on off-the-shelf processors, archived from the original on March 6, 2020, retrieved March 14, 2020 https://web.archive.org/web/20200306120524/https://venturebeat.com/2019/11/06/neural-magic-raises-15-million-to-boost-ai-training-speed-on-off-the-shelf-processors/

  4. "Google Designing AI Processors". May 18, 2016. Google using its own AI accelerators. https://www.eetimes.com/google-designing-ai-processors/

  5. Moss, Sebastian (March 23, 2022). "Nvidia reveals new Hopper H100 GPU, with 80 billion transistors". Data Center Dynamics. Retrieved January 30, 2024. https://www.datacenterdynamics.com/en/news/nvidia-reveals-new-hopper-h100-gpu-with-80-billion-transistors/

  6. "HUAWEI Reveals the Future of Mobile AI at IFA". https://consumer.huawei.com/en/press/news/2017/ifa2017-kirin970

  7. "Intel's Lunar Lake Processors Arriving Q3 2024". Intel. May 20, 2024. https://www.intel.com/content/www/us/en/newsroom/news/intels-lunar-lake-processors-arriving-q3-2024.html

  8. "AMD XDNA Architecture". https://www.amd.com/en/technologies/xdna.html

  9. "Deploying Transformers on the Apple Neural Engine". Apple Machine Learning Research. Retrieved August 24, 2023. https://machinelearning.apple.com/research/neural-engine-transformers

  10. Jouppi, Norman P.; et al. (June 24, 2017). "In-Datacenter Performance Analysis of a Tensor Processing Unit". ACM SIGARCH Computer Architecture News. 45 (2): 1–12. arXiv:1704.04760. doi:10.1145/3140659.3080246. https://doi.org/10.1145%2F3140659.3080246

  11. "How silicon innovation became the 'secret sauce' behind AWS's success". Amazon Science. July 27, 2022. Retrieved July 19, 2024. https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success

  12. Patel, Dylan; Nishball, Daniel; Xie, Myron (November 9, 2023). "Nvidia's New China AI Chips Circumvent US Restrictions". SemiAnalysis. Retrieved February 7, 2024. https://www.semianalysis.com/p/nvidias-new-china-ai-chips-circumvent

  13. Dvorak, J.C. (May 29, 1990). "Inside Track". PC Magazine. Retrieved December 26, 2023. https://archive.org/details/PC_Magazine_1990_05_29_v9n10/page/n83/mode/2up

  14. "convolutional neural network demo from 1993 featuring DSP32 accelerator". YouTube. June 2, 2014. https://www.youtube.com/watch?v=FwFduRA_L6Q

  15. Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of Annual Conference of the Japan Society of Applied Physics.

  16. Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.

  17. Asanović, K.; Beck, J.; Feldman, J.; Morgan, N.; Wawrzynek, J. (January 1994). "Designing a connectionist network supercomputer". International Journal of Neural Systems. 4 (4). ResearchGate: 317–26. doi:10.1142/S0129065793000250. PMID 8049794. Retrieved December 26, 2023. https://www.researchgate.net/publication/15149042

  18. "The end of general purpose computers (not)". YouTube. April 17, 2015. https://www.youtube.com/watch?v=VtJthbiiTBQ

  19. Gschwind, M.; Salapura, V.; Maischberger, O. (February 1995). "Space Efficient Neural Net Implementation". Retrieved December 26, 2023. https://www.researchgate.net/publication/2318589

  20. Gschwind, M.; Salapura, V.; Maischberger, O. (1996). "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning". 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. ISCAS 96. pp. 49–52. doi:10.1109/ISCAS.1996.598474. ISBN 0-7803-3073-0. S2CID 17630664.

  21. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (April 5, 2014). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964. https://doi.org/10.1145%2F2654822.2541967

  22. Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE. pp. 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.

  23. Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (January 4, 2016). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.

  24. Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Teman, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (May 29, 2015). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.

  25. Chen, Yunji; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (October 28, 2016). "DianNao family". Communications of the ACM. 59 (11): 105–112. doi:10.1145/2996864. ISSN 0001-0782. S2CID 207243998.

  26. "Qualcomm Helps Make Your Mobile Devices Smarter With New Snapdragon Machine Learning Software Development Kit". Qualcomm. https://www.qualcomm.com/news/releases/2016/05/02/qualcomm-helps-make-your-mobile-devices-smarter-new-snapdragon-machine

  27. Rubin, Ben Fox. "Qualcomm's Zeroth platform could make your smartphone much smarter". CNET. Retrieved September 28, 2021. https://www.cnet.com/tech/mobile/qualcomms-zeroth-platform-could-make-your-smartphone-much-smarter/

  28. Gschwind, Michael; Hofstee, H. Peter; Flachs, Brian; Hopkins, Martin; Watanabe, Yukio; Yamazaki, Takeshi (2006). "Synergistic Processing in Cell's Multicore Architecture". IEEE Micro. 26 (2): 10–24. doi:10.1109/MM.2006.41. S2CID 17834015.

  29. De Fabritiis, G. (2007). "Performance of Cell processor for biomolecular simulations". Computer Physics Communications. 176 (11–12): 660–664. arXiv:physics/0611201. Bibcode:2007CoPhC.176..660D. doi:10.1016/j.cpc.2007.02.107. S2CID 13871063.

  30. Video Processing and Retrieval on Cell architecture. CiteSeerX 10.1.1.138.5133.

  31. Benthin, Carsten; Wald, Ingo; Scherbaum, Michael; Friedrich, Heiko (2006). 2006 IEEE Symposium on Interactive Ray Tracing. pp. 15–23. CiteSeerX 10.1.1.67.8982. doi:10.1109/RT.2006.280210. ISBN 978-1-4244-0693-7. S2CID 1198101.

  32. "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF). Archived from the original (PDF) on August 30, 2017. Retrieved November 14, 2017. https://web.archive.org/web/20170830041003/http://www.teco.edu/~scholz/papers/ScholzDiploma.pdf

  33. Kwon, Bomjun; Choi, Taiho; Chung, Heejin; Kim, Geonho (2008). 2008 5th IEEE Consumer Communications and Networking Conference. pp. 1030–1034. doi:10.1109/ccnc08.2007.235. ISBN 978-1-4244-1457-4. S2CID 14429828.

  34. Duan, Rubing; Strey, Alfred (2008). Euro-Par 2008 – Parallel Processing. Lecture Notes in Computer Science. Vol. 5168. pp. 665–675. doi:10.1007/978-3-540-85451-7_71. ISBN 978-3-540-85450-0.

  35. "Improving the performance of video with AVX". February 8, 2012. https://software.intel.com/content/www/us/en/develop/articles/improving-the-compute-performance-of-video-processing-software-using-avx-advanced-vector-extensions-instructions.html

  36. Chellapilla, K.; Sidd Puri; Simard, P. (October 23, 2006). "High Performance Convolutional Neural Networks for Document Processing". 10th International Workshop on Frontiers in Handwriting Recognition. Retrieved December 23, 2023. https://inria.hal.science/inria-00112631/document

  37. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. (May 24, 2017). "ImageNet Classification with Deep Convolutional Neural Networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386. https://doi.org/10.1145%2F3065386

  38. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E (May 24, 2017). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386. https://doi.org/10.1145%2F3065386

  39. Roe, R. (May 17, 2023). "Nvidia in the Driver's Seat for Deep Learning". insideHPC. Retrieved December 23, 2023. https://insidehpc.com/2016/05/nvidia-driving-the-development-of-deep-learning

  40. Bohn, D. (January 5, 2016). "Nvidia announces 'supercomputer' for self-driving cars at CES 2016". Vox Media. Retrieved December 23, 2023. https://www.theverge.com/2016/1/4/10712634/nvidia-drive-px2-self-driving-car-supercomputer-announces-ces-2016

  41. "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019 https://www.researchgate.net/publication/329802520_A_Survey_on_Optimized_Implementation_of_Deep_Learning_Models_on_the_NVIDIA_Jetson_Platform

  42. Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017. https://developer.nvidia.com/blog/cuda-9-features-revealed/

  43. Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017. https://developer.nvidia.com/blog/cuda-9-features-revealed/

  44. "Summit: Oak Ridge National Laboratory's 200 petaflop supercomputer". United States Department of Energy. 2024. Retrieved January 8, 2024. https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit

  45. Sefat, Md Syadus; Aslan, Semih; Kellington, Jeffrey W; Qasem, Apan (August 2019). "Accelerating HotSpots in Deep Neural Networks on a CAPI-Based FPGA". 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). pp. 248–256. doi:10.1109/HPCC/SmartCity/DSS.2019.00048. ISBN 978-1-7281-2058-4. S2CID 203656070.

  46. Gschwind, M.; Salapura, V.; Maischberger, O. (February 1995). "Space Efficient Neural Net Implementation". Retrieved December 26, 2023. https://www.researchgate.net/publication/2318589

  47. Gschwind, M.; Salapura, V.; Maischberger, O. (1996). "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning". 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. ISCAS 96. pp. 49–52. doi:10.1109/ISCAS.1996.598474. ISBN 0-7803-3073-0. S2CID 17630664.

  48. "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016. http://www.nextplatform.com/2016/08/23/fpga-based-deep-learning-accelerators-take-asics/

  49. "Microsoft unveils Project Brainwave for real-time AI". Microsoft. August 22, 2017. https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/

  50. "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016. https://techreport.com/news/30155/google-boosts-machine-learning-with-its-tensor-processing-unit/

  51. "Chip could bring deep learning to mobile devices". www.sciencedaily.com. February 3, 2016. Retrieved September 13, 2016. https://www.sciencedaily.com/releases/2016/02/160203134840.htm

  52. "Google Cloud announces the 5th generation of its custom TPUs". August 29, 2023. https://techcrunch.com/2023/08/29/google-cloud-announces-the-5th-generation-of-its-custom-tpus/

  53. "Deep Learning with Limited Numerical Precision" (PDF). http://proceedings.mlr.press/v37/gupta15.pdf

  54. Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].

  55. Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that's being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019. https://www.tomshardware.com/news/intel-neural-network-processor-lake-crest,37105.html

  56. Joshua V. Dillon; Ian Langmore; Dustin Tran; Eugene Brevdo; Srinivas Vasudevan; Dave Moore; Brian Patton; Alex Alemi; Matt Hoffman; Rif A. Saurous (November 28, 2017). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Accessed May 23, 2018. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts.

  57. Woodie, Alex (November 1, 2021). "Cerebras Hits the Accelerator for Deep Learning Workloads". Datanami. Retrieved August 3, 2022. https://www.datanami.com/2021/11/01/cerebras-hits-the-accelerator-for-deep-learning-workloads/

  58. "Cerebras launches new AI supercomputing processor with 2.6 trillion transistors". VentureBeat. April 20, 2021. Retrieved August 3, 2022. https://venturebeat.com/2021/04/20/cerebras-systems-launches-new-ai-supercomputing-processor-with-2-6-trillion-transistors/

  59. "AWS NeuronCore Architecture". readthedocs-hosted. December 27, 2024. Retrieved December 27, 2024. https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/neuroncores-arch.html

  60. Abu Sebastian; Tomas Tuma; Nikolaos Papandreou; Manuel Le Gallo; Lukas Kull; Thomas Parnell; Evangelos Eleftheriou (2017). "Temporal correlation detection using computational phase-change memory". Nature Communications. 8 (1): 1115. arXiv:1706.00511. Bibcode:2017NatCo...8.1115S. doi:10.1038/s41467-017-01481-9. PMC 5653661. PMID 29062022. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5653661

  61. "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018. https://phys.org/news/2018-10-brain-inspired-architecture-advance-ai.html

  62. Carlos Ríos; Nathan Youngblood; Zengguang Cheng; Manuel Le Gallo; Wolfram H.P. Pernice; C. David Wright; Abu Sebastian; Harish Bhaskaran (2018). "In-memory computing on a photonic platform". Science Advances. 5 (2): eaau5759. arXiv:1801.06228. Bibcode:2019SciA....5.5759R. doi:10.1126/sciadv.aau5759. PMC 6377270. PMID 30793028. S2CID 7637801. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6377270

  63. Zhong Sun; Giacomo Pedretti; Elia Ambrosi; Alessandro Bricalli; Wei Wang; Daniele Ielmini (2019). "Solving matrix equations in one step with cross-point resistive arrays". Proceedings of the National Academy of Sciences. 116 (10): 4123–4128. Bibcode:2019PNAS..116.4123S. doi:10.1073/pnas.1815682116. PMC 6410822. PMID 30782810. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6410822

  64. Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripati, Mukesh; Radenovic, Aleksandra; Kis, Anras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. Bibcode:2020Natur.587...72M. doi:10.1038/s41586-020-2861-0. PMC 7116757. PMID 33149289. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116757

  65. Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripati, Mukesh; Radenovic, Aleksandra; Kis, Anras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. Bibcode:2020Natur.587...72M. doi:10.1038/s41586-020-2861-0. PMC 7116757. PMID 33149289. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116757

  66. Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of Annual Conference of the Japan Society of Applied Physics.

  67. Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.

  68. Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.

  69. Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.

  70. Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.

  71. "Photonic Chips Curb AI Training's Energy Appetite - IEEE Spectrum". https://spectrum.ieee.org/backpropagation-optical-ai

  72. "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256". Archived from the original on February 27, 2016. https://web.archive.org/web/20160227145622/http://www.nvidia.com/object/IO_20020111_5424.html

  73. "Intel to Bring a 'VPU' Processor Unit to 14th Gen Meteor Lake Chips". PCMAG. August 2022. https://www.pcmag.com/news/intel-to-bring-a-vpu-processor-unit-to-14th-gen-meteor-lake-chips

  74. Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.

  75. Han, Song; Liu, Xingyu; Mao, Huizi; Pu, Jing; Pedram, Ardavan; Horowitz, Mark A.; Dally, William J. (February 3, 2016). EIE: Efficient Inference Engine on Compressed Deep Neural Network. OCLC 1106232247.

  76. Reagen, Brandon; Whatmough, Paul; Adolf, Robert; Rama, Saketh; Lee, Hyunkwang; Lee, Sae Kyu; Hernandez-Lobato, Jose Miguel; Wei, Gu-Yeon; Brooks, David (June 2016). "Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). Seoul: IEEE. pp. 267–278. doi:10.1109/ISCA.2016.32. ISBN 978-1-4673-8947-1.

  77. Judd, Patrick; Albericio, Jorge; Moshovos, Andreas (January 1, 2017). "Stripes: Bit-Serial Deep Neural Network Computing". IEEE Computer Architecture Letters. 16 (1): 80–83. doi:10.1109/lca.2016.2597140. ISSN 1556-6056. S2CID 3784424.

  78. Jouppi, N.; Young, C.; Patil, N.; Patterson, D. (June 24, 2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. Association for Computing Machinery. pp. 1–12. doi:10.1145/3079856.3080246. ISBN 9781450348928. S2CID 4202768.

  79. "MLU 100 intelligence accelerator card" (in Japanese). Cambricon. 2024. Retrieved January 8, 2024. https://www.cambricon.com/index.php?m=content&c=index&a=lists&catid=21

  80. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (April 5, 2014). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964. https://doi.org/10.1145%2F2654822.2541967

  81. Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE. pp. 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.

  82. Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (January 4, 2016). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.

  83. Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Teman, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (May 29, 2015). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.

  84. Han, Song; Liu, Xingyu; Mao, Huizi; Pu, Jing; Pedram, Ardavan; Horowitz, Mark A.; Dally, William J. (February 3, 2016). EIE: Efficient Inference Engine on Compressed Deep Neural Network. OCLC 1106232247.

  85. Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.

  86. Chi, Ping; Li, Shuangchen; Xu, Cong; Zhang, Tao; Zhao, Jishen; Liu, Yongpan; Wang, Yu; Xie, Yuan (June 2016). "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 27–39. doi:10.1109/isca.2016.13. ISBN 978-1-4673-8947-1.

  87. Desoli, Giuseppe; Chawla, Nitin; Boesch, Thomas; Singh, Surinder-pal; Guidetti, Elio; De Ambroggi, Fabio; Majo, Tommaso; Zambotti, Paolo; Ayodhyawasi, Manuj; Singh, Harvinder; Aggarwal, Nalin (February 5, 2017). "14.1 a 2.9TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems". 2017 IEEE International Solid-State Circuits Conference (ISSCC). IEEE. pp. 238–239. doi:10.1109/ISSCC.2017.7870349. ISBN 978-1-5090-3758-2 – via IEEEXplore.

  88. Jouppi, N.; Young, C.; Patil, N.; Patterson, D. (June 24, 2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. Association for Computing Machinery. pp. 1–12. doi:10.1145/3079856.3080246. ISBN 9781450348928. S2CID 4202768.

  89. Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE. pp. 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.

  90. Shin, Dongjoo; Lee, Jinmook; Lee, Jinsu; Yoo, Hoi-Jun (2017). "14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks". 2017 IEEE International Solid-State Circuits Conference (ISSCC). pp. 240–241. doi:10.1109/ISSCC.2017.7870350. ISBN 978-1-5090-3758-2. S2CID 206998709. Retrieved August 24, 2023.

  91. Lee, Jinmook; Kim, Changhyeon; Kang, Sanghoon; Shin, Dongjoo; Kim, Sangyeob; Yoo, Hoi-Jun (2018). "UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision". 2018 IEEE International Solid - State Circuits Conference - (ISSCC). pp. 218–220. doi:10.1109/ISSCC.2018.8310262. ISBN 978-1-5090-4940-0. S2CID 3861747. Retrieved November 30, 2023.

  92. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (April 5, 2014). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964. https://doi.org/10.1145%2F2654822.2541967

  93. Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE. pp. 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.

  94. Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Teman, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (May 29, 2015). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.

  95. Jouppi, N.; Young, C.; Patil, N.; Patterson, D. (June 24, 2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. Association for Computing Machinery. pp. 1–12. doi:10.1145/3079856.3080246. ISBN 9781450348928. S2CID 4202768.

  96. Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (January 4, 2016). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.

  97. Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.

  98. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (April 5, 2014). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964. https://doi.org/10.1145%2F2654822.2541967

  99. Liu, Shaoli; Du, Zidong; Tao, Jinhua; Han, Dong; Luo, Tao; Xie, Yuan; Chen, Yunji; Chen, Tianshi (June 2016). "Cambricon: An Instruction Set Architecture for Neural Networks". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 393–405. doi:10.1109/isca.2016.42. ISBN 978-1-4673-8947-1.

  100. Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE. pp. 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.

  101. Ambrogio, Stefano; Narayanan, Pritish; Tsai, Hsinyu; Shelby, Robert M.; Boybat, Irem; di Nolfo, Carmelo; Sidler, Severin; Giordano, Massimo; Bodini, Martina; Farinha, Nathan C. P.; Killeen, Benjamin (June 2018). "Equivalent-accuracy accelerated neural-network training using analogue memory". Nature. 558 (7708): 60–67. Bibcode:2018Natur.558...60A. doi:10.1038/s41586-018-0180-5. ISSN 0028-0836. PMID 29875487. S2CID 46956938.

  102. Chen, Wei-Hao; Lin, Wen-Jang; Lai, Li-Ya; Li, Shuangchen; Hsu, Chien-Hua; Lin, Huan-Ting; Lee, Heng-Yuan; Su, Jian-Wei; Xie, Yuan; Sheu, Shyh-Shyuan; Chang, Meng-Fan (December 2017). "A 16Mb dual-mode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme". 2017 IEEE International Electron Devices Meeting (IEDM). IEEE. pp. 28.2.1–28.2.4. doi:10.1109/iedm.2017.8268468. ISBN 978-1-5386-3559-9. S2CID 19556846.

  103. Yang, J. Joshua; Strukov, Dmitri B.; Stewart, Duncan R. (January 2013). "Memristive devices for computing". Nature Nanotechnology. 8 (1): 13–24. Bibcode:2013NatNa...8...13Y. doi:10.1038/nnano.2012.240. ISSN 1748-3395. PMID 23269430. https://www.nature.com/articles/nnano.2012.240

  104. Chi, Ping; Li, Shuangchen; Xu, Cong; Zhang, Tao; Zhao, Jishen; Liu, Yongpan; Wang, Yu; Xie, Yuan (June 2016). "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 27–39. doi:10.1109/isca.2016.13. ISBN 978-1-4673-8947-1.

  105. Shafiee, Ali; Nag, Anirban; Muralimanohar, Naveen; Balasubramonian, Rajeev; Strachan, John Paul; Hu, Miao; Williams, R. Stanley; Srikumar, Vivek (October 12, 2016). "ISAAC". ACM SIGARCH Computer Architecture News. 44 (3): 14–26. doi:10.1145/3007787.3001139. ISSN 0163-5964. S2CID 6329628.

  106. Ji, Yu; Zhang, Youyang; Xie, Xinfeng; Li, Shuangchen; Wang, Peiqi; Hu, Xing; Zhang, Youhui; Xie, Yuan (January 27, 2019). FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture. OCLC 1106329050.

  107. Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE. pp. 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.

  108. Ambrogio, Stefano; Narayanan, Pritish; Tsai, Hsinyu; Shelby, Robert M.; Boybat, Irem; di Nolfo, Carmelo; Sidler, Severin; Giordano, Massimo; Bodini, Martina; Farinha, Nathan C. P.; Killeen, Benjamin (June 2018). "Equivalent-accuracy accelerated neural-network training using analogue memory". Nature. 558 (7708): 60–67. Bibcode:2018Natur.558...60A. doi:10.1038/s41586-018-0180-5. ISSN 0028-0836. PMID 29875487. S2CID 46956938.

  109. Nandakumar, S. R.; Boybat, Irem; Joshi, Vinay; Piveteau, Christophe; Le Gallo, Manuel; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (November 2019). "Phase-Change Memory Models for Deep Learning Training and Inference". 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE. pp. 727–730. doi:10.1109/icecs46596.2019.8964852. ISBN 978-1-7281-0996-1. S2CID 210930121.

  110. Joshi, Vinay; Le Gallo, Manuel; Haefeli, Simon; Boybat, Irem; Nandakumar, S. R.; Piveteau, Christophe; Dazzi, Martino; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (May 18, 2020). "Accurate deep neural network inference using computational phase-change memory". Nature Communications. 11 (1): 2473. arXiv:1906.03138. Bibcode:2020NatCo..11.2473J. doi:10.1038/s41467-020-16108-9. ISSN 2041-1723. PMC 7235046. PMID 32424184. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7235046

  111. "Nvidia claims 'record performance' for Hopper MLPerf debut". https://www.theregister.com/2022/09/09/nvidia_hopper_mlperf/

  112. "Development of a machine vision system for weed control using precision chemical application" (PDF). University of Florida. CiteSeerX 10.1.1.7.342. Archived from the original (PDF) on June 23, 2010. https://web.archive.org/web/20100623062608/http://www.abe.ufl.edu/wlee/Publications/ICAME96.pdf

  113. "Self-Driving Cars Technology & Solutions from NVIDIA Automotive". NVIDIA. https://www.nvidia.com/en-us/self-driving-cars/

  114. "movidius powers worlds most intelligent drone". March 16, 2016. https://www.siliconrepublic.com/machines/movidius-dji-drone

  115. "Qualcomm Research brings server class machine learning to everyday devices–making them smarter [VIDEO]". October 2015. https://www.qualcomm.com/news/onq/2015/10/01/qualcomm-research-brings-server-class-machine-learning-everyday-devices-making