11th International Conference on Computer and Knowledge Engineering
Lightweight Local Transformer for COVID-19 Detection Using Chest CT Scans
Authors:
Hojat Asgarian Dehkordi, Hossein Kashiani, Amir Abbas Hamidi Imani, Shahriar Baradaran Shokouhi
School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
Keywords:
COVID-19 Diagnosis, Vision Transformer, Limited Dataset, Locality, Long-range Dependencies
Abstract:
As COVID-19 spreads around the globe, many studies have leveraged Convolutional Neural Networks (CNNs) for automated diagnosis of COVID-19 from CT images. However, CNNs largely fail to explicitly model long-range dependencies because of their intrinsic locality. To address this issue, Transformers, which exploit long-range dependencies among input data, have drawn increasing interest. In this study, we aim to enjoy the merits of both local and global feature extraction in CNN and Transformer architectures. To this end, we go beyond conventional Transformer frameworks and introduce a highly efficient Transformer architecture for early diagnosis and treatment of COVID-19 patients using CT images. Unlike conventional data-hungry Transformers, our model relaxes the large-scale training data requirement of vision Transformers and attains on-par or even better performance than state-of-the-art studies. This flexibility allows our Transformer architecture to be exploited in data-scarce domains such as medical image analysis. Moreover, we tailor our Transformer architecture in two ways to embody the principle of locality, which once belonged to CNNs. First, we minimally inject convolutional inductive bias into the early blocks of our Transformer architecture and eliminate the standard image patching of vanilla Transformers. Second, unlike the typical patch integration in standard Transformers, we employ a deformable convolution to adaptively attend to a small set of key features corresponding to nearby patches. Extensive experimental evaluations verify that our Transformer architecture surpasses its counterparts, advances COVID-19 diagnosis by modeling the intrinsic locality of CNNs, alleviates the computational complexity of Transformer architectures, and copes with the lack of a large-scale training dataset for COVID-19 diagnosis.
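No code accompanies this abstract, so the following is only a minimal PyTorch sketch of the two design ideas the abstract names: a small convolutional stem that replaces vanilla ViT image patching, and a deformable convolution that integrates features from a few nearby patch locations before the Transformer encoder. All module names, layer sizes, and the use of torchvision's DeformConv2d are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumption, not the authors' code) of a "local" vision
# Transformer: convolutional stem + deformable patch integration + encoder.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ConvStem(nn.Module):
    """Small convolutional stem: injects locality instead of fixed-grid patching."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim // 2, 3, stride=2, padding=1),
            nn.BatchNorm2d(dim // 2), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 2, dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )

    def forward(self, x):            # (B, 1, H, W) CT slice
        return self.stem(x)          # (B, dim, H/4, W/4) local feature map

class DeformablePatchIntegration(nn.Module):
    """Deformable conv that samples a small set of key nearby patch features."""
    def __init__(self, dim=64, k=3):
        super().__init__()
        self.offset = nn.Conv2d(dim, 2 * k * k, k, padding=k // 2)  # predicted sampling offsets
        self.deform = DeformConv2d(dim, dim, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class LightweightLocalTransformer(nn.Module):
    def __init__(self, dim=64, depth=4, heads=4, num_classes=2):
        super().__init__()
        self.stem = ConvStem(1, dim)
        self.integrate = DeformablePatchIntegration(dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)          # COVID vs. non-COVID

    def forward(self, x):
        f = self.integrate(self.stem(x))                 # (B, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)            # (B, h*w, dim) token sequence
        return self.head(self.encoder(tokens).mean(1))   # mean-pooled classification

# Usage on a single 224x224 CT slice (hypothetical input size).
logits = LightweightLocalTransformer()(torch.randn(1, 1, 224, 224))

In this sketch, the convolutional stem supplies the locality prior that standard ViT patching discards, while the offset-predicting branch lets the deformable convolution attend to only a handful of informative nearby locations before the tokens reach the self-attention layers.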
Papers List
List of archived papers
A Graph-based Feature Selection using Class-Feature Association Map (CFAM)
Motahare Akhavan - Seyed Mohammad Hossein Hasheminejad
A Simple Low Cost Approach to Detect Hand Gesture Based on Software Event Camera Emulation
Ali Sabet Akbarzadeh - Abedin Vahedian
Towards Efficient Capsule Networks through Approximate Squash Function and Layer-wise Quantization
Mohsen Raji - Kimia Soroush - Amir Ghazizadeh
Impossible Differential and Zero-Correlation Linear Cryptanalysis of Marx, Marx2, Chaskey and Speck32
Mahshid Saberi - Nasour Bagheri - Sadegh Sadeghi
SASIAF: A Scalable Accelerator for Seismic Imaging on Amazon AWS FPGAs
Mostafa Koraei - S. Omid Fatemi
A Formalism for Specifying Capability-based Task Allocation in MAS
Samaneh HoseinDoost - Bahman Zamani - Afsaneh Fatemi
A Cloud Broker with Gap Analysis Perspective for Scheduling Multi-Workflows Across On-Demand and Reserved Resources
Negin Shafinezhad - Hamidreza Abrishami - Saeid Abrishami
Towards Efficient Video Object Detection on Embedded Devices
Mohammad Hajizadeh - Adel Rahmani - Mohammad Sabokrou
Real-Time Gender Recognition with a Deep Neural Network
Samad Azimi Abriz - Majid Meghdadi
GroupRec: Group Recommendation by Numerical Characteristics of Groups in Telegram
Davod Karimpour - Mohammad Ali Zare Chahooki - Ali Hashemi