Deep learning models usually require large amounts of memory and computational power when trained on vast datasets. This costly overhead creates a bottleneck in the research and development of neural network models, so efficient architectures that balance computational cost against performance are needed. Given this scenario of diminishing performance gains against exponentially increasing computational cost, we implement two shallow neural network architectures based on ResNet. The first is a naive version of the ResNet model that keeps its main design features and configuration. The second combines residual and non-residual blocks, primarily for effective feature learning and computational efficiency. We compare and evaluate the performance of these architectures in a deliberately simple setting. We find that the plain ResNet, even with fewer layers, is computationally expensive for small datasets, whereas our custom ResNet architecture improves performance. This approach can be extended to larger-scale neural networks to achieve computational efficiency.
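The abstract's second architecture mixes residual and non-residual blocks. The paper does not give its exact layout, so the following is only a minimal NumPy sketch of the general idea: a residual block adds a skip connection before the activation, a plain block omits it, and a shallow stack interleaves the two block types. All function names, dimensions, and the choice of which layers get skip connections are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def plain_block(x, W):
    # Non-residual block: a plain linear transform followed by ReLU.
    return relu(W @ x)

def residual_block(x, W):
    # Residual block: the skip connection adds the input back
    # before the activation, which eases gradient flow in ResNets.
    return relu(W @ x + x)

def mixed_stack(x, weights, residual_flags):
    # Hypothetical shallow stack mixing both block types; which
    # positions are residual is an illustrative choice, not the
    # paper's configuration.
    for W, use_residual in zip(weights, residual_flags):
        x = residual_block(x, W) if use_residual else plain_block(x, W)
    return x

dim = 8
weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(4)]
x = rng.standard_normal(dim)
out = mixed_stack(x, weights, [True, True, False, False])
print(out.shape)  # (8,)
```

Skipping the residual addition in some blocks saves one element-wise add per block; the real savings in such designs typically come from the cheaper block structure that the skip-free layers allow.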
|Title of host publication
|ICTC 2021 - 12th International Conference on ICT Convergence
|Subtitle of host publication
|"Beyond the Pandemic Era with ICT Convergence Innovation"
|IEEE Computer Society
|Published - 7 Dec 2021
|12th International Conference on Information and Communication Technology Convergence - Jeju Island, Korea, Republic of
Duration: 20 Oct 2021 → 22 Oct 2021
Conference number: 12