A Survey on Different Strategies for Hardware Implementation of Activation Functions
Activation functions (AFs) are fundamental components of neural networks (NNs), introducing the nonlinearity that enables models to learn complex representations and decision boundaries. With the growing adoption of NNs in real-time and embedded systems, there is an increasing demand for efficient hardware implementations. Implementing AFs in hardware is challenging because of their nonlinear and transcendental operations, especially the exponential computations required by commonly used AFs such as the sigmoid, hyperbolic tangent, and softmax. Efficient approximation and computation techniques for exponential functions (EFs) are therefore essential for practical, high-performance hardware realizations of AFs. This paper presents a comprehensive survey of techniques for implementing AFs on digital hardware. The contributions include a concise review of widely used and emerging AFs; a survey of major EF computation methods, including lookup tables, polynomial and piecewise approximations, COordinate Rotation DIgital Computer (CORDIC) based approaches, and iterative algorithms; and a detailed examination of hardware-oriented implementation strategies for AFs. The surveyed techniques are analyzed in terms of accuracy, resource utilization, latency, and power efficiency. This survey aims to provide valuable insights and design guidelines for researchers and practitioners developing efficient NN accelerators on field-programmable gate arrays (FPGAs) and other digital hardware platforms.