PATHOLOGICAL OUTCOMES OF COVID-19
FOR LUNGS INFECTIONS BASED ON
TRANSFER LEARNING TECHNOLOGY
Omar Alsaif
Computer Systems Dept., Northern Technical University, Mosul, Iraq
Mohammed L. Muammer*
mohammed.loay@ntu.edu.iq
Reception: 13/11/2022 Acceptance: 07/01/2023 Publication: 25/01/2023
Suggested citation:
A., Omar and L. M., Mohammed. (2023). Pathological outcomes of covid-19
for lungs infections based on transfer learning technology. 3C Tecnología.
Glosas de innovación aplicada a la pyme, 12(1), 282-294. https://doi.org/
10.17993/3ctecno.2023.v12n1e43.282-294
ABSTRACT
In 2019 a new syndrome appeared in large numbers of people, with symptoms such as high temperature, cough, and loss of the senses of smell and taste, forcing many of them into critical care units; the virus that causes this syndrome was later named SARS-CoV-2. The aim of this paper is to recognize whether or not a patient is affected by COVID-19 using X-ray images. Deep learning techniques are utilized to classify these images with a convolutional neural network (CNN). The dataset used in this work consists of 1000 X-ray images collected from the Kaggle website and divided into 80% for training and 20% for validation.
The proposed method uses pre-trained networks (EfficientNet B0 and ResNet50) to minimize training time while keeping high performance; the EfficientNet B0 network gives the highest accuracy, 98.5%. Finally, the model has been implemented successfully on a Raspberry Pi 3 for the classification task.
KEYWORDS
COVID-19; Deep learning; CNN.
PAPER INDEX
ABSTRACT
KEYWORDS
1. INTRODUCTION
2. MATERIALS AND METHOD
2.1. CNN TECHNIQUE
3. TRANSFER LEARNING
3.1. EFFICIENT NET B0 TECHNIQUE
3.2. RESNET50 TECHNIQUE
4. RASPBERRY PI 3 SYSTEM
5. METHODOLOGY
6. RESULTS AND DISCUSSION
7. CONCLUSIONS
8. ACKNOWLEDGMENT
REFERENCES
1. INTRODUCTION
Since late December 2019, a new coronavirus illness (COVID-19; previously
known as 2019-nCoV) epidemic has been detected in Wuhan, China, affecting 26
nations across the world. COVID-19 is a condition that is acutely resolved in most
cases, but it can potentially be fatal, with a case fatality rate of 2%. Massive alveolar
destruction and gradual respiratory failure may end in mortality if the condition is
severe enough[1].
Figure 1. Chest X-ray images of a patient over 50 years old with COVID-19 pneumonia [2]
As shown in Fig. 1, X-rays can reveal the development of the lung disease over seven days: on day 1 the lungs are clear, by day 4 patchy opacities appear in the X-ray, and by day 7 the patient is in the worst condition.
Vruddhi Shah et al. (9 December 2020) diagnosed COVID-19 from CT scan images using deep learning, employing a convolutional neural network (CNN). The dataset contains 738 CT scan images in total, of which 349 belong to COVID-19 cases and 463 to other patients. For COVID-19 diagnosis they built their own model, CTnet-10, which reached an accuracy of 82.1%. Other models used in that study are InceptionV3, ResNet-50, VGG-16, DenseNet-169, and VGG-19; with an accuracy of 94.52%, VGG-19 outperformed all the other deep learning models [3][4]. Nesreen Alsharman and Ibrahim Jawarneh (11 April 2020) detected COVID-19 using a transfer learning method with only the GoogLeNet CNN. The dataset in that investigation comprises 349 images from COVID-19 medical studies, and retraining GoogLeNet reached a validation accuracy of 82.14% [5].
Halgurd S. Maghdid et al. (12 April 2021) used DL to identify COVID-19 pneumonia in the chest from CT and X-ray images. The images were processed with a standard convolutional neural network (CNN) and a specially adapted pre-trained AlexNet model. They used a total of 238 samples (85 X-ray and 153 CT scan images). According to the tests, the models can achieve an accuracy rate above 98% with the pre-trained model and 94.1% with the other CNN [6].
2. MATERIALS AND METHOD
Detecting structural anomalies and disease classification are two common uses of deep learning (DL) in radiology. CNNs in particular have been found to be very effective at detecting anomalies and diseases in chest X-ray imaging [7]. The human nervous system provided the inspiration for deep learning models, and DL has been shown to improve performance in a variety of fields [8].
2.1. CNN TECHNIQUE
CNNs are typically built under a restricted resource budget and are subsequently scaled up for greater accuracy when additional resources become available [9]. Deep CNNs are now among the most popular models, with excellent results on a variety of image classification challenges. By uncovering robust features in images and reducing the vanishing gradient problem, the notion of weight sharing in a deep CNN (DCNN) allows for successful image classification [10].
A CNN is made up of three kinds of layers: convolution, pooling, and fully connected layers. The convolutional layer's primary objective, accomplished through the use of filters, is the extraction of features from the input images. The pooling layer, which comes after the convolutional layer, performs down-sampling and keeps the most important details from the input images; it reduces the model's spatial dimensions and the number of parameters, helps prevent overfitting, and produces a more efficient model. The fully connected layers (the final layers) use a soft-max activation function to extract high-level information from the input images and classify them into the various labelled categories [11].
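As a concrete illustration of these three layer types, the following minimal sketch (our own example in Keras, not the exact network used in this paper) stacks convolutional, pooling, and fully connected layers and ends in a two-class soft-max output:

```python
# Minimal CNN sketch (illustrative only, assuming TensorFlow/Keras is available).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution: feature extraction via filters
    layers.MaxPooling2D((2, 2)),                   # pooling: down-sampling, fewer parameters
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),          # fully connected layer: high-level features
    layers.Dense(2, activation="softmax"),         # soft-max output: two labelled categories
])
model.summary()
```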
3. TRANSFER LEARNING
Transfer learning (TL) has proven to be a very smart strategy, especially in sectors with limited data. By using a feed-forward pass to adapt the parameters of a pre-trained network, the model can detect the specific characteristics of a given class of images, such as eye scans, considerably more quickly and often with far fewer training samples and computing resources. Back-propagation is then used to retrain the weights of the top layers, while the lower layers, which are already tuned to detect the features present in images in general, are reused [12].
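A minimal sketch of this idea, assuming Keras and an ImageNet-pretrained ResNet50 as the reused backbone (the models actually used in this work are described in the following subsections): the lower layers are frozen and only the newly added top layers are retrained by back-propagation.

```python
# Transfer-learning sketch (illustrative only): freeze a pre-trained backbone,
# retrain only a small new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                      # lower layers keep their pre-tuned weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # new top layer, retrained by back-propagation
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```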
3.1. EFFICIENT NET B0 TECHNIQUE
Transfer learning is employed in the EfficientNet architecture to save time and
processing resources. The EfficientNet model comprises eight versions, spanning
from B0 to B7, where each model number corresponds to a version with additional
parameters and higher accuracy[13]. Its accuracy values are greater than those of
other well-known models as a result [11]. The highly effective compound scaling algorithm forms the foundation of the EfficientNet models, as seen in Fig. 2. This technique allows a baseline CNN to be adapted to any resource constraint while maintaining model efficacy, which makes it well suited to transfer learning across
datasets. In terms of accuracy and efficiency, EfficientNet models typically surpass established models trained on ImageNet, such as AlexNet, GoogLeNet, and MobileNetV2 [14].
By scaling up the baseline EfficientNet B0 network with the same compound scaling technique, the authors also produced EfficientNets B1-B7, giving eight CNN architecture versions whose results are reported on the ImageNet dataset. EfficientNet B7 accepts 600x600 input images and has 66 million parameters, whereas EfficientNet B0 accepts 224x224 input images and has 5.3 million parameters.
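The input sizes and parameter counts quoted above can be checked with the snippet below, assuming TensorFlow 2.3 or later, which ships these models in tf.keras.applications:

```python
# Compare EfficientNet B0 and B7 (weights=None avoids downloading pre-trained weights).
import tensorflow as tf

b0 = tf.keras.applications.EfficientNetB0(weights=None, input_shape=(224, 224, 3))
b7 = tf.keras.applications.EfficientNetB7(weights=None, input_shape=(600, 600, 3))
print("EfficientNet B0 parameters:", b0.count_params())  # roughly 5.3 million
print("EfficientNet B7 parameters:", b7.count_params())  # roughly 66 million
```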
Increasing network depth allows a CNN to capture richer and more complex features, but the vanishing gradient problem makes deeper networks harder to train. Increasing the network's width captures more fine-grained features, and wider networks are also easier to train; however, wide but shallow networks struggle to capture higher-level features. Finally, higher-resolution input images enable a CNN to detect finer patterns, at the cost of more memory and computing power [15].
ConvNets are frequently scaled up to improve accuracy, for example by adding more layers to ResNet. Although it is possible to scale ConvNets from ResNet-18 up to ResNet-200, the process has never been fully understood, and there are currently several ways to do it. The most common approach is to make ConvNets deeper or wider [4]. Scaling models according to their input image resolution is a different, less common, but quickly growing approach [9][16].
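For reference, the compound scaling rule from [9] that Fig. 2 illustrates can be written as

\[
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\qquad \text{s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,\; \alpha, \beta, \gamma \ge 1,
\]

where d, w and r scale the network depth, width and input resolution, the constants α, β and γ are found by a small grid search on the baseline network, and the compound coefficient φ controls how many additional resources are spent on scaling.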
Figure 2. Model Scaling.
3.2. RESNET50 TECHNIQUE
ResNet stands for Residual Network, as seen in Fig. 3. Over time, deep convolutional neural networks have improved image classification and recognition in a variety of ways, and using deeper networks to tackle more challenging problems and
improve classification or recognition accuracy has become increasingly popular [17]. However, training deeper neural networks has proven difficult because of the degradation problem and the vanishing gradient problem; the residual approach aims to resolve both.
Figure 3. Residual Neural Network.
Each layer tries to learn low- or high-level properties from the images. In the residual approach, the network learns a residual mapping instead of trying to learn ever more complex features directly [8].
To overcome these difficulties, residual neural networks (ResNet) use a "residual block" containing a skip (shortcut) connection that passes the output of an earlier layer forward to a later layer, as shown in Fig. 4. If the dimensions of x and F(x) are not the same, the input x is multiplied by a corresponding weight matrix W so that the shortcut matches the dimensions of the output layer [18].
Figure 4. Residual learning: a building block.
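A minimal sketch of such a residual block (Keras assumed; an illustration rather than the exact ResNet50 block): the shortcut adds the input x to F(x), and a 1x1 convolution plays the role of the weights W when the dimensions differ.

```python
# Residual block sketch: out = ReLU(F(x) + shortcut(x)).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    f = layers.Conv2D(filters, 3, strides=stride, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same")(f)              # F(x)
    shortcut = x
    if stride != 1 or x.shape[-1] != filters:                     # dimensions of x and F(x) differ
        shortcut = layers.Conv2D(filters, 1, strides=stride)(x)   # projection weights W
    return layers.Activation("relu")(layers.Add()([f, shortcut])) # skip connection: F(x) + x

inputs = layers.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, 128, stride=2)
block = tf.keras.Model(inputs, outputs)
block.summary()
```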
4. RASPBERRY PI 3 SYSTEM
Using the Linux operating system, the Raspberry Pi is a tiny computer board that
may be connected to a display, keyboard, and mouse. The Raspberry Pi may be used
for electronics projects and network programming. It can also be used as a PC, or as a web server by installing the Apache web server and MySQL on the board [19].
As seen in Fig. 5, the Raspberry Pi 3 (RPI3) module is a low-cost, Linux-based small computer. It contains 40 GPIO pins for controlling output components such as LEDs, motors, and relays. The RPI3 hardware specifications are as follows [20]:
SoC: BCM2837
CPU: quad-core 1.2 GHz ARM Cortex-A53
GPU: 400 MHz
RAM: 1 GB LPDDR2-900 SDRAM
Ports: four USB ports and 10/100 Mbps Ethernet
Wireless: 802.11n WLAN and Bluetooth 4.0
Figure 5. Raspberry Pi 3.
5. METHODOLOGY
Fig. 6 below shows a block diagram of the whole system design.
Figure 6. Block Diagram of the whole system.
Image acquisition: the dataset used in this project consists of 1000 chest X-ray images divided into two classes, with 500 X-rays per class.
Image preprocessing: two types of preprocessing have been applied in our model:
1. Image resizing: all X-ray images were resized to a width of 224 and a height of 224 pixels.
2. Data augmentation: data augmentation was used to reduce overfitting during CNN training and to generate additional images from the original ones. The augmentation operations applied to the training set are listed below and illustrated in the sketch that follows.
The rotation range is 10, which rotates training images by up to 10 degrees.
The width shift range is 0.2, which shifts images horizontally by up to 20% of their width.
The height shift range is 0.2, which shifts images vertically by up to 20% of their height.
The zoom range is 0.2, which zooms images in or out by up to 20%.
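The settings above correspond to the following sketch, assuming Keras' ImageDataGenerator (the augmentation library is not named in the paper); the "dataset/" folder name is a placeholder for a directory with one sub-folder per class, and the same generator also resizes the images to 224x224 and performs the 80%/20% split:

```python
# Augmentation sketch matching the parameters listed above (illustrative only).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,        # rotate by up to 10 degrees
    width_shift_range=0.2,    # horizontal shift of up to 20% of the image width
    height_shift_range=0.2,   # vertical shift of up to 20% of the image height
    zoom_range=0.2,           # zoom in or out by up to 20%
    validation_split=0.2,     # 80% training / 20% validation
)

train_data = datagen.flow_from_directory(
    "dataset/",               # placeholder path with covid19/ and normal/ sub-folders
    target_size=(224, 224),   # image resize step of the preprocessing
    batch_size=32,
    class_mode="categorical",
    subset="training",
)
val_data = datagen.flow_from_directory(
    "dataset/", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="validation",
)
```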
The proposed CNN models use pre-trained networks (EfficientNet B0 and ResNet50) with fine-tuning and data augmentation to classify COVID-19 disease. After image preprocessing, the dataset was divided into 80% for training and 20% for validation.
For transfer learning, the convolution and pooling layers of EfficientNet B0 and ResNet50 were frozen, and their original fully connected layers were replaced with two new FC layers of 512 and 256 neurons, respectively. The network was trained for 80 epochs with a batch size of 32 using the Adam optimizer with a learning rate of 1e-4, and categorical cross-entropy was used as the loss function. The final layer is the output layer with a soft-max activation function; it consists of 2 neurons corresponding to the COVID-19 and normal classes.
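A hedged sketch of this training setup, assuming Keras, the EfficientNet B0 backbone, and the train_data/val_data generators from the augmentation sketch above; the saved file name covid_effnetb0.h5 is a placeholder:

```python
# Training-setup sketch: frozen pre-trained base, two new FC layers (512 and 256),
# 2-neuron soft-max output, Adam (1e-4), categorical cross-entropy, 80 epochs.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(weights="imagenet", include_top=False,
                                            input_shape=(224, 224, 3))
base.trainable = False                       # convolution and pooling layers are frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation="relu"),    # first new FC layer
    layers.Dense(256, activation="relu"),    # second new FC layer
    layers.Dense(2, activation="softmax"),   # COVID-19 vs. normal
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, validation_data=val_data, epochs=80)
model.save("covid_effnetb0.h5")              # placeholder file name, later copied to the Pi
```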
Implementation on Raspberry Pi 3: after training of the proposed network was completed, the Raspberry Pi 3 operating system was installed. The saved model was uploaded to the Raspberry Pi, and the Thonny Python IDE was used to write the code that classifies X-ray images as COVID-19 or normal. Fig. 7 below shows the hardware components used to build the classification system.
Figure 7. Hardware components of the classification system.
After the implementation on the Raspberry Pi 3, the whole system is tested as follows (see the sketch after this list):
1. Read two images, one diagnosed as COVID-19 and the other a normal case.
2. Resize each image to 224x224, as expected by the model.
3. Load the model saved on the Raspberry Pi 3.
4. Use the prediction function.
5. Use the smtplib library.
6. Enter the e-mail address and password of the sender.
7. Enter the e-mail address of the doctor who will receive the result and make the decision.
8. Run the system: if the image is diagnosed as COVID-19 the system returns (0); otherwise the result is (1).
At the same time, the doctor receives an e-mail with the result (normal or COVID-19).
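On the Raspberry Pi 3, the test procedure above might look like the following sketch, assuming TensorFlow and Python's standard smtplib; the file names, e-mail addresses, password, and SMTP server are placeholders:

```python
# Inference and e-mail notification sketch for the Raspberry Pi 3 (illustrative only).
import smtplib
from email.message import EmailMessage

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("covid_effnetb0.h5")      # model saved after training

img = tf.keras.preprocessing.image.load_img("test_xray.png", target_size=(224, 224))
x = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0)

pred = int(np.argmax(model.predict(x), axis=1)[0])            # 0 -> COVID-19, 1 -> normal
result = "covid-19" if pred == 0 else "normal"
print(pred, result)

msg = EmailMessage()
msg["Subject"] = "X-ray classification result"
msg["From"] = "sender@example.com"                            # sender address (placeholder)
msg["To"] = "doctor@example.com"                              # doctor's address (placeholder)
msg.set_content(f"The X-ray was classified as: {result}")

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:     # placeholder SMTP server
    server.login("sender@example.com", "password")            # placeholder credentials
    server.send_message(msg)
```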
6. RESULTS AND DISCUSSION
This section presents the results of testing the EfficientNet B0 and ResNet50 networks. Figure 8 shows the confusion matrix of EfficientNet B0 without augmentation; in this case the model predicted 197 of 200 samples correctly. Figure 9 shows the confusion matrix of EfficientNet B0 with augmentation, where only 188 samples were classified correctly.
Figure 8. Confusion matrix of the EfficientNet B0 without Augmentation.
Figure 9. Confusion matrix of the EfficientNet B0 with Augmentation.
Figure 10 shows the confusion matrix of ResNet50 without augmentation; in this case the model predicted 195 of 200 samples correctly. Figure 11 shows the confusion matrix of ResNet50 with augmentation, where only 192 samples were classified correctly.
Figure 10. Confusion matrix of the ResNet50 without Augmentation.
Figure 11. Confusion matrix of the ResNet50 with Augmentation.
After training was complete, the model was implemented on the Raspberry Pi 3. E-mails were received from the Raspberry Pi 3 after testing two images (COVID-19 and normal); Fig. 12 represents the COVID-19 case and Fig. 13 the normal one.
Figure 12. COVID-19 case.
Figure 13. Normal case.
7. CONCLUSIONS
In this work, an automatic system for detecting COVID-19 has been constructed successfully to distinguish COVID-19 cases from normal cases in X-ray images. For classification, we successfully used deep learning methods, specifically CNNs with transfer learning (EfficientNet B0 and ResNet50). The best result, obtained with EfficientNet B0 without augmentation, is a testing accuracy of 98.5%. The model was then implemented successfully on a Raspberry Pi 3, which enables the Raspberry Pi 3 to distinguish the COVID-19 case from the normal case in an X-ray image. Finally, the Raspberry Pi 3 sends an e-mail to the doctor reporting the patient's condition.
8. ACKNOWLEDGMENT
We would like to thank Causal Productions for permission to use and revise the template it provided. The original version of this template was provided courtesy of Causal Productions (www.causalproductions.com).
REFERENCES
(1) Z. Xu et al. (2020). Pathological findings of COVID-19 associated with acute respiratory distress syndrome. Lancet Respir. Med., 8(4), 420-422. https://doi.org/10.1016/S2213-2600(20)30076-X
(2) T. Ozturk, M. Talo, E. A. Yildirim, U. B. Baloglu, O. Yildirim, and U. Rajendra Acharya. (2020). Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med., 121, 103792. https://doi.org/10.1016/j.compbiomed.2020.103792
(3) V. Shah, R. Keniya, A. Shridharani, M. Punjabi, J. Shah, and N. Mehendale.
(2021). Diagnosis of COVID-19 using CT scan images and deep learning
techniques. Emerg. Radiol., 28(3), 497-505. https://doi.org/10.1007/
s10140-020-01886-y
(4) A. H. MARAY, O. I. Alsaif, and K. H. TANOON. (2022). Design and
Implementation of Low-Cost Medical Auditory System of Distortion
Otoacoustic Using Microcontroller. J. Eng. Sci. Technol., 17(2), 1068-1077.
(5) N. Alsharman and I. Jawarneh. (2020). GoogleNet CNN neural network towards chest CT-coronavirus medical image classification. J. Comput. Sci., 16(5), 620-625. https://doi.org/10.3844/JCSSP.2020.620.625
(6) H. Maghdid, A. T. Asaad, K. Z. G. Ghafoor, A. S. Sadiq, S. Mirjalili, and M. K. K.
Khan. (2021). Diagnosing COVID-19 pneumonia from x-ray and CT images
using deep learning and transfer learning algorithms. Proc. SPIE 11734,
Multimodal Image Exploitation and Learning, 117340E, https://doi.org/
10.1117/12.2588672
(7) S. Vaid, R. Kalantar, and M. Bhandari. (2020). Deep learning COVID-19
detection bias: accuracy through artificial intelligence. Int. Orthop., 44(8),
1539-1542. https://doi.org/10.1007/s00264-020-04609-7
(8) A. Sai Bharadwaj Reddy and D. Sujitha Juliet. (2019). Transfer learning with
RESNET-50 for malaria cell-image classification. Proc. 2019 IEEE Int. Conf.
Commun. Signal Process. ICCSP 2019, 945-949. https://doi.org/10.1109/
ICCSP.2019.8697909
(9) M. Tan and Q. V. Le. (2019). EfficientNet: Rethinking model scaling for
convolutional neural networks. 36th Int. Conf. Mach. Learn. ICML 2019,
2019(June), 10691-10700.
(10) I. A. Saleh, O. I. Alsaif, and M. A. Yahya. (2020). Optimal distributed decision
in wireless sensor network using gray wolf optimization. IAES Int. J. Artif.
Intell., 9(4), 646-654. https://doi.org/10.11591/ijai.v9.i4.pp646-654
(11) E. A. Mohammed and H. A. Ahmed. (2022). Raspberry pi Based Osteoarthritis Disease classification. 7(2), 3738-3745.
(12) D. S. Kermany et al. (2018). Identifying Medical Diagnoses and Treatable
Diseases by Image-Based Deep Learning. Cell, 172(5), 1122-1131.E9. https://
doi.org/10.1016/j.cell.2018.02.010
(13) F. Wu, et al. (2020). A new coronavirus associated with human respiratory
disease in China. Nature, 579(7798), 265-269. https://doi.org/10.1038/
s41586-020-2008-3
(14) G. Marques, D. Agarwal, and I. de la Torre Díez. (2020). Automated medical
diagnosis of COVID-19 through EfficientNet convolutional neural network.
Appl. Soft Comput. J., 96, 106691. https://doi.org/10.1016/j.asoc.2020.106691
(15) K. Ali, Z. A. Shaikh, A. A. Khan, and A. A. Laghari. (2022). Multiclass skin
cancer classification using EfficientNets - a first step towards preventing
skin cancer. Neurosci. Informatics, 2(4), 100034. https://doi.org/10.1016/
j.neuri.2021.100034
(16) S. Q. Alhashmi, K. H. Thanoon, and O. I. Alsaif. (2020). A Proposed Face
Recognition based on Hybrid Algorithm for Features Extraction. Proc. 6th
Int. Eng. Conf. Sustainable Technol. Dev. IEC, 232-236. https://doi.org/10.1109/
IEC49899.2020.9122911
(17) M. A. Yahya et al., Inventions Transmit Diversity Technique
(18) Q. A. Al-Haija and A. Adebanjo. (2020). Breast cancer diagnosis in
histopathological images using ResNet-50 convolutional neural network.
IEMTRONICS 2020 - Int. IOT, Electron. Mechatronics Conf. Proc., 50. https://
doi.org/10.1109/IEMTRONICS51293.2020.9216455
(19) N. A. Hussein. (2017). International Conference on Computer and
Applications, ICCA 2017. 2017 Int. Conf. Comput. Appl. ICCA 2017, 395-399.
(20) M. H. Gauswami and K. R. Trivedi. (2018). Implementation of machine
learning for gender detection using CNN on raspberry Pi platform. Proc.
2nd Int. Conf. Inven. Syst. Control. ICISC, 608-613. https://doi.org/10.1109/
ICISC.2018.8398872