In deep learning models, the inputs to the network are passed through activation functions to produce the corresponding outputs. Deep learning models are particularly valuable for analyzing big data with numerous parameters and for forecasting, and are widely used in image processing, natural language processing, object recognition, and financial forecasting. Activation functions for deep learning algorithms have been developed with goals such as keeping the learning process stable, preventing overfitting, improving accuracy, and reducing computational cost. In this study, we present an overview of common and current activation functions used in deep learning algorithms, grouped into fixed and trainable activation functions. As fixed activation functions, sigmoid, hyperbolic tangent, ReLU, softplus, and swish are introduced; as trainable activation functions, LReLU, ELU, SELU, and RSigELU are introduced.
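For reference, the sketch below implements the standard textbook definitions of the activation functions named in the abstract (the code, the NumPy dependency, and the default parameter values are illustrative assumptions, not taken from the paper; RSigELU is omitted, as its definition is given in the authors' cited work):

```python
import numpy as np

# Fixed activation functions, using their standard definitions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # 1 / (1 + e^{-x})

def tanh(x):
    return np.tanh(x)                        # hyperbolic tangent

def relu(x):
    return np.maximum(0.0, x)                # max(0, x)

def softplus(x):
    return np.log1p(np.exp(x))               # ln(1 + e^{x})

def swish(x):
    return x * sigmoid(x)                    # x * sigmoid(x)

# Variants the paper groups under trainable activation functions; alpha is
# shown here as a fixed hyperparameter for illustration (default values are
# common conventions, not taken from the paper).

def lrelu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)     # leaky ReLU

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, lam=1.0507, alpha=1.6733):
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Example: compare the outputs on a small input range.
xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(xs), elu(xs), swish(xs), sep="\n")
```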
Primary Language | English |
---|---|
Subjects | Engineering |
Section | Articles |
Authors | |
Publication Date | 31 December 2021 |
Published Issue | Year 2021 |