![A 161.6 TOPS/W Mixed-mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks (Prof. Hoi-Jun Yoo's Lab) - KAIST School of Electrical Engineering](https://ee.kaist.ac.kr/wp-content/uploads/2022/08/%EC%9C%A0%ED%9A%8C%EC%A4%8010.png)
A 161.6 TOPS/W Mixed-mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks (Prof. Hoi-Jun Yoo's Lab) - KAIST School of Electrical Engineering
![A 0.32–128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm](https://research.nvidia.com/sites/default/files/styles/wide/public/publications/RC18_photo_0_0.jpg?itok=ubFiJEaV)
A 0.32–128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm
![BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W](https://i1.rgstatic.net/publication/321930684_BRein_Memory_A_Single-Chip_BinaryTernary_Reconfigurable_in-Memory_Deep_Neural_Network_Accelerator_Achieving_14_TOPS_at_06_W/links/5b4f2d930f7e9b240fe98667/largepreview.png)
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W
![Accelerating Neural Network Inference on FPGA-Based Platforms—A Survey](https://www.mdpi.com/electronics/electronics-10-01025/article_deploy/html/images/electronics-10-01025-g001.png)
Accelerating Neural Network Inference on FPGA-Based Platforms—A Survey
![Figure 5 from Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers](https://d3i71xaburhd42.cloudfront.net/e66b097ed6305f4b9f93141759c2aa7a20e4dcdf/2-Figure5-1.png)
Figure 5 from Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers
![Figure 1 from A 3.43TOPS/W 48.9pJ/pixel 50.1nJ/classification 512 analog neuron sparse coding neural network with on-chip learning and classification in 40nm CMOS](https://d3i71xaburhd42.cloudfront.net/a2e283532b71e9b6af7addb3b3f4f4a1af6e0fb4/2-Figure1-1.png)
Figure 1 from A 3.43TOPS/W 48.9pJ/pixel 50.1nJ/classification 512 analog neuron sparse coding neural network with on-chip learning and classification in 40nm CMOS
![A 17–95.6 TOPS/W Deep Learning Inference Accelerator with Per-Vector Scaled 4-bit Quantization for Transformers in 5nm](https://research.nvidia.com/sites/default/files/styles/wide/public/publications/Figure5.png?itok=egEetJGg)