AdAM: Adaptive Approximate Multiplier for Fault Tolerance in DNN Accelerators
Abstract:
Deep Neural Network (DNN) hardware accelerators are essential in a spectrum of safety-critical edge-AI applications with stringent reliability, energy efficiency, and latency requirements. Multiplication is the most resource-hungry operation in a neural network's processing elements. This paper proposes a scalable adaptive fault-tolerant approximate multiplier (AdAM) tailored for ASIC-based DNN accelerators at both the algorithm and circuit levels. AdAM employs an adaptive adder that makes unconventional use of the input Leading-One Detector (LOD) values for fault detection, exploiting otherwise unutilized adder resources. A gate-level optimized LOD design and a hybrid adder design are also proposed as part of the adaptive multiplier to improve hardware performance. The proposed architecture uses a lightweight fault mitigation technique that sets detected faulty bits to zero. Hardware resource utilization and the DNN accelerator's reliability metrics are used to compare the proposed solution against Triple Modular Redundancy (TMR) for multiplication, unprotected exact multiplication, and unprotected approximate multiplication. It is demonstrated that the proposed architecture enables multiplication with a reliability level close to that of TMR-protected exact multipliers while occupying 20.7% less area and consuming more than 40% less energy than an exact multiplier. Moreover, its area, delay, and power consumption are comparable to those of state-of-the-art approximate multipliers of similar accuracy, while additionally providing fault detection and mitigation capability.
Index Terms — Deep neural networks, approximate computing, circuit design, reliability, DNN accelerator.
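The AdAM design itself is a gate-level ASIC circuit; purely as an illustrative software model of the two ideas the abstract names, the sketch below shows (a) LOD-based dynamic segmentation, in which each operand is truncated to a short segment anchored at its leading one before multiplication, and (b) mitigation by zeroing detected faulty bits. The segment width k, the function names, and the fault_mask input are hypothetical and are not taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

/* Index of the leading one (MSB set bit); -1 for a zero input. */
static int lod(uint32_t x) {
    return x ? 31 - __builtin_clz(x) : -1;
}

/* Approximate multiply via dynamic segmentation: keep a k-bit segment
 * of each operand starting at its leading one, multiply the short
 * segments, and shift the product back into place. */
static uint64_t approx_mul(uint32_t a, uint32_t b, int k) {
    int la = lod(a), lb = lod(b);
    if (la < 0 || lb < 0) return 0;            /* a zero operand */
    int sa = la >= k ? la - (k - 1) : 0;       /* shift isolating a's segment */
    int sb = lb >= k ? lb - (k - 1) : 0;       /* shift isolating b's segment */
    uint32_t seg_a = a >> sa;                  /* at most k-bit segments */
    uint32_t seg_b = b >> sb;
    return (uint64_t)(seg_a * seg_b) << (sa + sb);
}

/* Lightweight mitigation in the spirit the abstract describes: force
 * bits flagged as faulty to zero (fault_mask stands in for whatever
 * the detection logic would report). */
static uint64_t mitigate(uint64_t product, uint64_t fault_mask) {
    return product & ~fault_mask;
}

int main(void) {
    uint32_t a = 1000, b = 3000;
    uint64_t p = approx_mul(a, b, 8);
    printf("exact           : %llu\n", (unsigned long long)((uint64_t)a * b));
    printf("approx (k = 8)  : %llu\n", (unsigned long long)p);
    /* Pretend bits 0-1 of the product were flagged as faulty. */
    printf("fault-mitigated : %llu\n", (unsigned long long)mitigate(p, 0x3ULL));
    return 0;
}
```

For a = 1000 and b = 3000 the exact product is 3,000,000 and the k = 8 segmented product is 2,992,000, illustrating the small, magnitude-scaled error typical of LOD-based truncation.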