13th International Conference on Computer and Knowledge Engineering
Enhancing Lighter Neural Network Performance with Layer-wise Knowledge Distillation and Selective Pixel Attention
Authors:
Siavash Zaravashan (1), Sajjad Torabi (2), Hesam Zaravashan (3)
1- Iran University of Science and Technology
2- Part AI Research Center
3- Iran University of Science and Technology
Keywords:
Knowledge distillation, Image classification, Deep learning, Model compression, Computer vision
Abstract:
Deep neural networks have revolutionized many areas of artificial intelligence, such as image classification and object detection. However, most of these models are heavy and slow at inference time. Using them in real-time applications requires making them lighter, which can degrade their performance. One approach to enhancing the capability of smaller models is knowledge distillation, in which a smaller model, referred to as the student, learns to emulate a larger teacher model in order to improve its own performance. This technique has proven effective for compressing models while increasing their accuracy. Previous methods mostly focus on proposing feature transformations and loss functions between same-level features of the two networks. In this paper, we investigate the effect of connections between different layers of the teacher and student networks and show that they are highly important. We employ three loss functions (attention-driven distillation, hierarchical context loss, and non-local loss) to enhance the classifier's overall performance. Extensive experiments on the CIFAR-100 dataset show that the proposed method yields a substantial performance improvement. For the benefit of the research community, we will make the code for this study available on GitHub.
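To make the distillation idea in the abstract concrete, here is a minimal sketch of the classic logit-based knowledge-distillation loss (a temperature-softened KL term between teacher and student plus a hard-label cross-entropy term). This is the standard Hinton-style formulation, not the paper's layer-wise attention, hierarchical context, or non-local losses; the function name, temperature `T`, and mixing weight `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge-distillation objective (illustrative sketch):
    alpha * T^2 * KL(teacher || student) on softened logits
    + (1 - alpha) * cross-entropy on the ground-truth labels.
    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p_t = softmax(teacher_logits, T)   # softened teacher distribution
    p_s = softmax(student_logits, T)   # softened student distribution
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Hard-label cross-entropy on the unsoftened student predictions.
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

In a training loop, this scalar would be minimized with respect to the student's parameters while the teacher stays frozen; the layer-wise losses described in the paper would be added as extra terms over intermediate feature maps.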
Papers List
List of archived papers
A New Time Series Approach in Churn Prediction with Discriminatory Intervals
Hedieh Ahmadi - Seyed Mohammad Hossein Hasheminejad
Delay Optimization of a Federated Learning-based UAV-aided IoT network
Hossein Mohammadi Firouzjaei - Javad Zeraatkar Moghaddam - Mehrdad Ardebilipour
Developing Convolutional Neural Networks using a Novel Lamarckian Co-Evolutionary Algorithm
Zaniar Sharifi - Khabat Soltanian - Ali Amiri
Intelligent Rule Extraction in Complex Event Processing Platform for Health Monitoring Systems
Mohammad Mehdi Naseri - Shima Tabibian - Elaheh Homayounvala
Cloud Service Composition Using Genetic Algorithm and Particle Swarm Optimization
Javad Dogani - Farshad Khunjush
Classification of COVID-19 and Nodule in CT Images using Deep Convolutional Neural Network
Amirhossein Ghaemi - Seyyed Amir Mousavi mobarakeh - Habibollah Danyali - Kamran Kazemi
Deep Learning-Based Malaysian Sign Language (MSL) Recognition: Exploring the Impact of Color Spaces
Ervin Gubin Moung - Precilla Fiona Suwek - Maisarah Mohd Sufian - Valentino Liaw - Ali Farzamnia - Wei Leong Khong
Lightweight Local Transformer for COVID-19 Detection Using Chest CT Scans
Hojat Asgarian Dehkordi - Hossein Kashiani - Amir Abbas Hamidi Imani - Shahriar Baradaran Shokouhi
Enhancing Persian Word Sense Disambiguation with Large Language Models: Techniques and Applications
Fatemeh Zahra Arshia - Saeedeh Sadat Sadidpour
Fatty Liver Level Recognition Using Particle Swarm Optimization (PSO) Image Segmentation and Analysis
Seyed Muhammad Hossein Mousavi - Vyacheslav Lyashenko - Atiye Ilanloo - S. Younes Mirinezhad