
Single-Label COVID-19 Lung Disease Classification
Olga Petan †, Aleksandar Karadimce ‡, Dexter Hadley ⋆
Emails: olga.petan@gmail.com, aleksandar.karadimce@uist.edu.mk, dexter.hadley@ucf.edu
†, ‡ University of Information Science and Technology "St. Paul the Apostle", Ohrid, 6000, North Macedonia
⋆ College of Medicine, University of Central Florida, Orlando, Florida, United States
I. Abstract
Chest imaging (X-ray, CT, MRI, etc.) is often used to diagnose COVID-19 infections, and the clinical interpretation of these images may influence patient treatment plans, including quarantine, ICU admission, and ventilator support. Convolutional neural networks (CNNs) represent state-of-the-art artificial intelligence (AI) for image classification and require large and diverse training image datasets to achieve clinical-grade performance. The AI community has been focusing on building CNNs to help physicians interpret images of COVID-19 patients, but the open community attempts have been plagued by a few problems that are characteristic of training AI for clinical applications.
Our work suggests that the incorporation of finer clinical features, such as multi-class labeling of outcomes, may improve clinical performance. Much more diverse and properly annotated data, as well as a proper data architecture, are needed to build clinically useful AI applications. Leveraging computers and state-of-the-art artificial intelligence could play a crucial role in helping physicians manage the spread of COVID-19, as well as provide better and more timely care for patients infected with COVID-19.
The experiments described in this paper investigate whether using computer vision to detect patterns in pathological lung tissue can alleviate the pressure on healthcare systems and facilitate the management of future pandemics. Because our dataset is relatively small, we use the VGG-16 network to do transfer learning on our images. Our results show that artificial intelligence can be used to find pathology in patients and to determine whether pneumonia is viral or bacterial. However, AI struggles when it comes to identifying specific viral or bacterial pneumonias, which is in part due to the lack of available data.
Keywords: single-label classification, convolutional neural networks, COVID-19, pneumonia, lung disease, artificial intelligence, Microsoft Azure
II. Introduction
It has been a couple of years since humanity found itself in the midst of a global pandemic. What started in late December 2019 in Wuhan as an isolated incident has now resulted in almost 700 million infections and over 7.95 million deaths1 worldwide at the time of writing. Professionals across the globe have been working on developing effective treatments and distributing vaccines, yet one issue remains: how to stop the spread of a virus and how to manage the effects of a pandemic, this one or future ones, on the population, medical workers, essential workers, and hospitals.
The artificial intelligence field has tried to help in this battle too. Researchers have written hundreds of papers and created hundreds of deep learning models that aim at diagnosing COVID-19 from images, as well as assessing the severity of the disease for prognostic purposes. However, building clinically usable AI solutions has been far from straightforward. First, the lack of data is the most prominent obstacle [1, 2]. Second, when clinical data is made available, the lack of structured metadata or any ontological classification of clinical outcomes severely limits the learning ability of deep learning algorithms and severely biases their performance [3]. Third, a lack of detailed clinical curation, such as the severity of infection, precludes the clinical applicability of deep learning models.
While there have been almost 700 million cases globally, the AI community has access to fewer than 2000 COVID-19-positive images. At the time of writing, there are half a dozen open source image repositories available to AI researchers. For the experiments in this paper, we combined four such repositories and ended up with 4708 X-ray images, 1764 of which are images of COVID-19 patients. We decided to use only X-ray images because CT scans are costly both in terms of money and operating skills, and because poorer regions have a greater number of COVID-positive patients and deaths. Although X-ray images are less sensitive for diagnosis compared to CT scans (69% compared to over 90%, respectively [4]), X-ray machines are more widespread, more radiologists know how to operate them, they require less time for disinfection after use, and they expose patients to lower levels of radiation.
Out of those 4708 X-ray images, only 485 have information about patient location, and almost 87% of those images come from Europe and Asia. The situation is similar for information about the patient's sex: only about 15% of the images record the patient's sex, and there are twice as many images of male patients as of female patients. There is not enough information about patients' age either, as we only have 429 age records; among these, there are no records for patients younger than 19. From this, we can see that most of our images are from European and Asian men over the age of 56, and models created on these poorly annotated images will not generalize well and will not be useful for people in the US. This is troublesome, considering that the US has had over 110 million positive cases to date, and over 1.1 million deaths.
1 https://www.worldometers.info/coronavirus/
Figure 1: Geographical distribution of labels
We created four experiments to demonstrate the need for much more data. All experiments share the same architecture and are built using the same pre-trained model, but they differ in how the datasets are structured. The first experiment is a binary classification deep learning model that checks whether a model can pick up pathology on an X-ray image in the first place. The second experiment is a multiclass classification model differentiating between COVID-19-positive X-ray images, bacterial pneumonia, viral pneumonia, and healthy images. The purpose of this experiment is to see whether a deep learning model built on this data can differentiate COVID-19 from other pneumonias. The third experiment is a multiclass classification model trying to differentiate between different pneumonias: bacterial, viral, lipoid, and fungal. The purpose of this experiment is to see whether a deep learning model can find enough differences between different pneumonias. Finally, the fourth experiment is a multiclass classification model differentiating between different viral pneumonias: influenza, MERS, SARS, and COVID-19. The purpose of this experiment is to see whether we can really trust these types of multiclass classification models, and to discuss where the AI community's effort should be focused. All four experiments combined stress the need for acquiring more data and involving physicians in the annotation and ontological classification processes.
The paper is structured as follows: in Section III, Rationale, we discuss why we need more data and why the currently available data is not sufficient to build deep learning models. In Section IV, Related Work, we discuss current deep learning models differentiating between viral pneumonia, bacterial pneumonia, COVID-19, and healthy images. In Section V, Dataset Construction, we discuss how we constructed the dataset used in the experiments and describe some of its metadata. In Section VI, Research Methodology, we discuss the four experiments we created and the models we built for each experiment. In Section VII, Results and Discussion, we present the results of our experiments, and in Section VIII, Conclusion, we propose steps which should be taken to build artificial intelligence models that help physicians fight future pandemics.
III. Rationale
If our goal is to develop a clinically usable deep learning model to diagnose COVID-positive patients, we need a very large input dataset, because neural networks are data-hungry. Even though
there have been about 110 million COVID-19 cases in the US to date, there are fewer than 2000 available X-ray images for the AI community to build models with. A substantial problem emerges from this lack of data: the model memorizes instead of learning the disease's features, which leads to overfitting and the model not being able to classify new data correctly.
To overcome the problem of the small number of available X-ray images, we use transfer learning, where we repurpose an already trained model and use it to initialize the weights of our COVID-19 classification model. This, in turn, creates a new problem. Models pre-trained on ImageNet are the ones most often used in COVID-19 image classification work; ImageNet contains over 14 million images across 1000 classes. However, these are natural images of things like animals, food, landscapes, and buildings. This domain differs greatly from medical imaging, which is why we need access to much more data than we already have.
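As an illustration of this transfer learning setup, the sketch below loads VGG-16 with ImageNet weights and attaches a new classification head for our labels. This is a minimal sketch assuming TensorFlow/Keras, not our exact training code; the head size and optimizer settings are illustrative.

```python
# Minimal transfer-learning sketch (assumes TensorFlow/Keras; illustrative,
# not our exact training code).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG-16 pre-trained on ImageNet, dropping its 1000-class head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional features

# Attach a small classification head for our lung-disease labels.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # e.g. No Finding vs. Pneumonia
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```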
IV. Related Work
Because chest radiographs are relatively cheap and require less specialized skill to acquire, X-ray imaging is widely used in COVID-19 imaging. Ground glass opacities, bilateral interstitial opacification, and consolidation are the most frequent manifestations of COVID-19 in an X-ray image [5]. However, this imaging modality has certain shortcomings that radiologists and physicians must be aware of: images of one patient may vary as the disease develops, and images can be normal even when a test is positive. In severe COVID-19, there is proportionately greater lung involvement, which tends to be denser peripherally and in the lower zones. The role of CXRs as the initial radiological assessment of patients presenting with respiratory distress and possible COVID-19 is established [6].
Feature extraction is also done on X-ray images. For example, [7] first use DenseNet103 for lung segmentation and then use ResNet-18 to classify diseases from patches of the segmented area. Because of the small number of positive COVID-19 X-ray images, a transfer learning approach is taken, which borrows weights from pre-trained deep networks and uses them to extract features from images in the small dataset. Thus, [8] use several pre-trained models on their limited dataset: VGG16, GoogleNet, ResNet18, InceptionV3, and DenseNet. The features extracted from the lung segmentation are fed into an SVM, whose goal is to further classify lung diseases.
V. Dataset Construction
We collected images from four open source repositories: Covid chestxray dataset [9], Figure 1
COVID chestxray dataset [10], Actualmed COVID chestxray dataset [11], and Kaggle’s COVID-19
Radiography Database [12]. The distribution of images from these sources is shown in Table 1.
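To give a sense of how such a combined dataset can be assembled, the sketch below merges the repositories' metadata tables and tallies images per source; the file paths and column names are hypothetical, since each repository ships its own metadata format.

```python
# Hypothetical sketch of merging the four repositories' metadata (paths and
# column names are placeholders; each repository has its own format).
import pandas as pd

sources = {
    "cohen":     "covid-chestxray-dataset/metadata.csv",
    "figure1":   "Figure1-COVID-chestxray-dataset/metadata.csv",
    "actualmed": "Actualmed-COVID-chestxray-dataset/metadata.csv",
    "kaggle":    "covid19-radiography-database/metadata.csv",
}
frames = []
for name, path in sources.items():
    df = pd.read_csv(path)
    df["source"] = name          # record provenance for per-source counts
    frames.append(df)
combined = pd.concat(frames, ignore_index=True)
print(combined["source"].value_counts())  # image counts per source (Table 1)
```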
Table 1: Number of images per image source
From these repositories, we collected 4708 images spanning 13 lung disease labels and 5 views. The dataset also includes a column indicating whether the patient has had a PCR test and, if so, whether it was positive or unclear. All images are from adult patients. Figure 2 shows the breakdown of the diseases and how many images were used in the experiments.
Figure 2: Diseases and number of images per disease
The dataset contains X-ray images from 5 views: PA, AP, AP Supine, Lateral, and AP Erect. A short explanation of the different X-ray views is given below, and Table 2 shows the number of images per view:
PA (Posteroanterior) view: The patient faces the X-ray detector, and the X-ray beam passes through the posterior aspect of the body to the anterior side.
AP (Anteroposterior) view: The patient faces the X-ray source, and the X-ray beam passes from the anterior to the posterior side of the body. Used in situations where the patient cannot be positioned for a PA view, such as trauma cases.
AP Supine view: Similar to the AP view, but the patient is lying down (supine) rather than standing. Often used when a patient cannot stand and a horizontal position is necessary.
Lateral view: The X-ray beam passes from one side of the body to the other, with the patient positioned sideways. These views provide additional information and a different perspective on lung involvement and can help in assessing the extent and distribution of pulmonary abnormalities.
AP Erect view: Similar to the AP view, but the patient is in an upright or standing position.
Table 2: Number of images per view
Figure 3 shows some examples of images for each of the views.
a) PA, b) AP, c) AP Supine, d) Lateral, e) AP Erect
Figure 3: Examples of images per view
We have identified four issues with data imbalances in the available image sources: view imbalances, where not all diseases have enough views for the model to be trained on; sex imbalances, where the metadata for the images is not tagged with the patient's sex or one of the sexes has many more images; geographical imbalances, with Europe having the most images and Asia trailing second, but not many images for the rest of the continents; and age imbalances, where the average age per continent is over 50. Below we show the distributions of the metadata for images and patients.
The view imbalance becomes apparent when we look at the image views we have for each disease. For example, we have images for most views for bacterial and viral pneumonia, but we only have images for 2 views for COVID-19. Table 3 shows the number of views we have for each disease.
Table 3: Number of images per view and disease
The sex imbalance is shown in Table 4. We can see that we do not know the sex of the patients for most of the images. Where the sex is known, the only labels it covers are bacterial and viral pneumonia, and there is a bias towards male patients.
Table 4: Number of images per sex and disease
The geographical imbalance is shown in Table 5. We can see that very few images are tagged with sex or location, and that we mostly have images of men from Europe and Asia.
Table 5: Number of images per sex and continent
From Table 6, we can further see that we mostly have age information for patients from Europe and Asia, and that Europeans are on average slightly older.
Table 6: Number of images per continent and information about patients' age per continent
From Table 7, we can see that most of the metadata we have on age and location is for viral pneumonia, and the healthy and COVID-19 images are not tagged.
Table 7: Number of images per continent and disease
VI. Research Methodology
In computer vision, some of the most common parameters used when training a model to predict outcomes are:
iteration - a complete training run in which the entire dataset is processed by the model;
epoch - a single pass through the dataset during training. During each epoch, the weights of the model are adjusted, which improves the model's accuracy and performance;
batch size - the number of images that the model sees in one weight update. Since this number is usually small, the model's weights can be updated faster and more frequently;
learning rate - a parameter that controls how strongly the model's weights are adjusted during optimization.
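To make these definitions concrete, the sketch below shows where the parameters enter a typical Keras training run, using the values reported in the next paragraph; `model` and the data arrays are assumed to come from a setup like the one sketched in Section III.

```python
# Where the parameters above appear in a typical Keras training run
# (a generic sketch; `model`, `train_x`, `train_y`, `val_x`, `val_y` are
# assumed to exist already).
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.001),  # learning rate
              loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_x, train_y,
                    validation_data=(val_x, val_y),
                    epochs=100,      # one epoch = one full pass over the data
                    batch_size=16)   # images seen per weight update
```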
All models in our experiments are built on the VGG-16 model pre-trained on the ImageNet database. The architecture of VGG-16 is shown in Figure 4.
Figure 4: The layers in the VGG-16 architecture
An example of an image passing through the VGG-16 architecture is shown in Figure 5. The models were trained for 2 iterations, each consisting of 100 epochs, with a batch size of 16 and a learning rate of 0.001. The dataset is split into training, testing, and validation sets consisting of 70%, 20%, and 10% of the images, respectively. The models do not see the testing and validation sets during training.
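The 70/20/10 split can be reproduced with two stratified splits, as in the sketch below; `image_paths` and `labels` are hypothetical parallel lists, and this is one way to implement the split rather than our exact code.

```python
# A 70/20/10 train/test/validation split via two stratified splits
# (sketch; `image_paths` and `labels` are hypothetical parallel lists).
from sklearn.model_selection import train_test_split

# Carve out the 70% training portion first...
train_x, rest_x, train_y, rest_y = train_test_split(
    image_paths, labels, train_size=0.70, stratify=labels, random_state=42)
# ...then split the remaining 30% into 20% test and 10% validation.
test_x, val_x, test_y, val_y = train_test_split(
    rest_x, rest_y, train_size=2/3, stratify=rest_y, random_state=42)
```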
Figure 5 shows the basic layers of the VGG-16 architecture: the convolution, pooling, and dense layers. The convolutional layers extract features by employing filters that scan an image; by doing this, the layers learn where the patterns and edges are. The pooling layers reduce the dimensions of the feature maps that the convolutional layers produce. Because most of the resulting matrix is sparse, max pooling is used to keep only the information that is important for the classification. The dense layer is then used to further reduce this information to the outputs that we want to classify.
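The sketch below wires these three layer types together at toy scale (far smaller than VGG-16) purely to illustrate the convolution-pooling-dense pattern described above.

```python
# Toy-scale illustration of the convolution -> pooling -> dense pattern
# (much smaller than VGG-16; for exposition only).
from tensorflow.keras import layers, models

toy = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(224, 224, 3)),   # extract edges and textures
    layers.MaxPooling2D((2, 2)),                # keep only strong activations
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),      # e.g. four disease outcomes
])
```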
Figure 5: An image passed through the VGG-16 architecture
When we input an X-ray image into this network, what the input layer sees is a matrix with a value for each pixel. Since our images are grayscale, the pixel values lie between 0 (black) and 255 (white). The input layer first receives the pixel values and the dimensions of this matrix (for example, 256x256). Then a convolutional layer slides a smaller 3x3 matrix over this initial pixel matrix and finds edges, textures, and early patterns. The first pooling layer reduces the dimensionality of the resulting matrix. This process is repeated three more times, and each time more distinctive patterns are picked up, such as the patterns for a specific disease. Finally, a softmax layer turns the network's output into a probability score that the classifier uses to determine whether there is a disease and which one it may be.
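The preprocessing implied by this description might look like the sketch below: load a grayscale X-ray, resize it, and replicate the single channel three times so it matches VGG-16's RGB input (a common workaround we assume here; the file name is hypothetical).

```python
# Sketch of X-ray preprocessing for a VGG-16-style input (assumptions:
# Pillow and NumPy available; "xray.png" is a placeholder file name).
import numpy as np
from PIL import Image

img = Image.open("xray.png").convert("L").resize((224, 224))
pixels = np.asarray(img, dtype=np.float32)        # grayscale values in [0, 255]
pixels = np.stack([pixels] * 3, axis=-1) / 255.0  # (224, 224, 3), scaled to [0, 1]
batch = pixels[np.newaxis, ...]                   # add the batch dimension
```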
VII. Results and Discussion
The results for all 4 experiments are described below. Figure 6 shows sample X-ray images for a
few labels: healthy (No Finding), bacterial pneumonia, viral pneumonia, and COVID-19.
a) Healthy b) Bacterial Pneumonia c) Viral Pneumonia d) COVID-19
Figure 6: image examples for some of the diseases in the dataset: a) Healthy, b) Bacterial Pneumonia, c) Viral
Pneumonia, d) COVID-19
VII.1 Experiment 1
For the first experiment, because we have many more pathological images than healthy ones, we created two versions of the experiment. We recoded the labels into 2 outcomes: a normal image (No Finding) or an image with pathology (Pneumonia). The model from the first version of the experiment achieved an accuracy of 49%, an F1-score of 0.55, a sensitivity of 0.99, and a specificity of 0.27, as shown in Figure 7 and Table 8. The low accuracy is due to the imbalanced dataset: we have about twice as many pneumonia images as healthy ones.
Figure 7: Binary model: No Finding, Pneumonia Confusion Matrix, Imbalanced Dataset
Table 8: Per class metrics for the binary model
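The per-class metrics reported in Table 8 can be derived from the entries of a binary confusion matrix, as the generic sketch below shows; `y_true` and `y_pred` are hypothetical 0/1 label arrays with Pneumonia as the positive class.

```python
# Deriving accuracy, F1, sensitivity, and specificity from a binary
# confusion matrix (generic sketch; y_true/y_pred are hypothetical
# 0/1 arrays with Pneumonia = 1).
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate
accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
```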
When we remove this imbalance in the second version of the experiment, the accuracy improves to 94.28%, as shown in Figure 8.
Figure 8: Binary model: No Finding, Pneumonia Confusion Matrix, Balanced Dataset
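One simple way to remove such an imbalance is to randomly undersample the majority class until the classes are equal, as in the sketch below; this is a generic recipe rather than necessarily our exact procedure, and assumes a pandas DataFrame `df` with a 'label' column.

```python
# Balancing classes by undersampling the majority class (generic sketch;
# assumes a DataFrame `df` with a 'label' column).
import pandas as pd

n_minority = df["label"].value_counts().min()
balanced = (df.groupby("label", group_keys=False)
              .apply(lambda g: g.sample(n=n_minority, random_state=42)))
```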
VII.2 Experiment 2
The second experiment is a multiclass classification model differentiating between COVID-19-positive images, viral and bacterial pneumonia, and healthy images. This model achieved an accuracy of 84.17%, an F1-score of 0.84 (as shown in Figure 9), a sensitivity of 0.0, and a specificity of 1.0. Because we only have about 72 X-ray images for bacterial pneumonia, the model does not focus on this outcome. This can also be noted in the ROC graph in Figure 10. The experiment shows an ability to classify COVID-19 X-ray images; however, the model cannot easily classify viral pneumonia outcomes, and we can see that it hesitates between viral pneumonia and COVID-19.
Figure 9: Multi-classifier model: Bacterial Pneumonia, COVID-19, No Finding, Viral Pneumonia
Issues with the bacterial pneumonia label can be seen on the ROC curve for this model, shown in Figure 10. The ROC curve plots the true positive rate (the actual label and the predicted label match) against the false positive rate as the classification threshold varies; for example, we could choose the threshold to be 0.5, 0.7, and so on. If we remove the bacterial pneumonia outcome and fix the data imbalance, the model performs better: it achieves an accuracy of 85.76%, a sensitivity of 0.0, and a specificity of 1.0, as shown in Table 9. This version of the experiment shows the ability to differentiate COVID-19 from viral pneumonia, as shown in Figure 11.
Figure 10: ROC Curve shows issues for Bacterial Pneumonia
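A per-class ROC curve like the one in Figure 10 is computed one-vs-rest from the model's predicted probabilities; the sketch below shows the generic scikit-learn recipe, with hypothetical variable names.

```python
# One-vs-rest ROC curve for a single class (generic sketch; variable
# names are hypothetical).
from sklearn.metrics import auc, roc_curve

# y_true_bin: 1 where the true label is Bacterial Pneumonia, else 0
# y_score:    the model's predicted probability for Bacterial Pneumonia
fpr, tpr, thresholds = roc_curve(y_true_bin, y_score)
roc_auc = auc(fpr, tpr)
```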
Figure 11: Multi-classifier model: COVID-19, No Finding, Viral Pneumonia
Table 9: Per class metrics for multi-classifier model
VII.3 Experiment 3
For the third experiment, we recoded the labels again and created 4 outcomes: bacterial, fungal, lipoid, and viral pneumonia. Figure 12 shows the confusion matrix and the accuracies for this experiment. Even though the model showed a very high accuracy of 96.63% and an F1-score of 0.97, Table 10 shows a sensitivity of 1.0 while the specificity is unknown. Together with the confusion matrix, this shows that the model severely overfits to viral pneumonia, due to the dataset being imbalanced and bacterial, fungal, and lipoid pneumonia being underrepresented.
Figure 12: Multi-classifier model: Bacterial, Fungal, Lipoid, and Viral Pneumonia
Table 10: Per class metrics for multi-classifier model
VII.4 Experiment 4
The fourth experiment is similar to the first experiment in terms of data imbalances, so we created two versions of it as well.
For the first version, we recoded the labels and created 6 outcomes: COVID-19, influenza, MERS, SARS, varicella, and viral pneumonia. The confusion matrix is shown in Figure 13 and the accuracies are shown in Table 11. Even though the model showed a very high accuracy of 97.79%, an F1-score of 0.98, a sensitivity of 1.0, and a specificity of 0.0 (Table 11), the dataset is so highly imbalanced that the model effectively only sees 2 outcomes: COVID-19 and viral pneumonia. Nevertheless, the model is able to differentiate between COVID-19 and viral pneumonia.
Figure 13: Multi-classifier model: COVID-19, Influenza, MERS, SARS, Varicella, Viral Pneumonia
Table 11: Per class metrics for multi-classifier model
To remove this data imbalance, we created a second version of this experiment, recoded the labels, and created 2 outcomes: COVID-19 and other pneumonia. The confusion matrix and the accuracies are shown in Figure 14. This version of the experiment performs about the same and demonstrates that if an outcome is underrepresented in the dataset, the deep learning model will simply ignore it.
Figure 14: Multi-classifier model: COVID-19, Other Pneumonia
VIII. Conclusion
The results of the four experiments we carried out demonstrate three serious issues: the first is the restricted access to correctly labeled X-ray images; the second is the highly imbalanced distribution of the patients whose images we have access to; and the third is the set of challenges that AI runs into when it is applied to low-quality or imbalanced data. While deep learning can be used to discover pathology on X-ray images, this alone has limited usefulness for the medical field, especially during a pandemic: once the models are asked to differentiate between diseases that may appear similar in an X-ray, they struggle to achieve accurate predictions. Without a sufficiently large dataset, the artificial intelligence solutions we build may not be usable in a clinical setting because of a lack of covariates, such as race, ethnicity, age, socio-economic status, and the presence or absence of symptoms.
The current experimental framework is a solid base for our future research on building a set of hierarchical methods that classify different levels of pathology from X-ray images. In our next work, we will adjust the topology of the labels and investigate these hierarchical methods further, so we can better understand what kinds of obstacles computer vision runs into when it misclassifies lung pathology, remove them, and thereby help alleviate potential health crises in the future.
References
1. Hasan MK, Alam MA, Dahal L, Elahi MTE, Roy S, Wahid SR, et al., 2020. Challenges of Deep Learning Methods for COVID-19 Detection Using Public Datasets. medRxiv.
2. Tartaglione E, Barbano CA, Berzovini C, Calandri M, Grangetto M., 2020. Unveiling COVID-19 from Chest X-Ray with Deep Learning: A Hurdles Race with Small Data. Int J Environ Res Public Health.
3. Burlacu A, Crisan-Dabija R, Popa IV, Artene B, Birzu V, Pricop M, et al., 2020. Curbing the AI-induced enthusiasm in diagnosing COVID-19 on chest X-Rays: the present and the near-future. medRxiv.
4. Sohail S., 2020. Radiology of COVID-19: Imaging the pulmonary damage. J Pak Med Assoc.
5. Wong HYF, Lam HYS, Fong AH, Leung ST, Chin TW, Lo CSY, et al., 2020. Frequency and Distribution of Chest Radiographic Findings in COVID-19 Positive Patients. doi: 10.1148/radiol.2020201160
6. Rubin GD, Ryerson CJ, Haramati LB, Sverzellati N, Kanne JP, Raoof S, et al., 2020. The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society. doi: 10.1016/j.chest.2020.04.003
7. Lai CC, Shih TP, Ko WC, Tang HJ, Hsueh PR., 2020. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): The epidemic and the challenges. doi: 10.1016/j.ijantimicag
8. Sethy PK, Behera SK., 2020. Detection of Coronavirus Disease (COVID-19) Based on Deep Features. doi: 10.20944/preprints202003.0300.v1
9. Cohen JP, Morrison P, Dao L, Roth K, Duong TQ, Ghassemi M., 2020. COVID-19 Image Data Collection: Prospective Predictions Are the Future. arXiv:2006.11988, https://github.com/ieee8023/covid-chestxray-dataset
10. Wang L, Wong A, Qui Lin Zh, McInnis P, Chung A, Gunraj H, Lee J, Ross M, VanBerlo B, Ebadi A, Git KA, AL-Haimi A. Figure 1 COVID-19 Chest X-Ray Dataset Initiative. https://github.com/agchung/Figure1-COVID-chestxray-dataset
11. Wang L, Wong A, Qui Lin Zh, McInnis P, Chung A, Gunraj H, Lee J, Ross M, VanBerlo B, Ebadi A, Git KA, AL-Haimi A. Actualmed COVID-19 Chest X-Ray Dataset Initiative. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
12. Chowdhury MEH, Rahman T, Khandakar A, Mazhar R, Kadir MA, Mahbub ZB, Islam KR, Khan MS, Iqbal A, Al-Emadi N, Reaz MBI, Islam MT., 2020. "Can AI help in screening Viral and COVID-19 pneumonia?" IEEE Access, Vol. 8, pp. 132665-132676. https://www.kaggle.com/tawsifurrahman/covid19-radiography-database
13. Shibly KH, Dey SK, Islam MT, Rahman MM. COVID faster R-CNN: A novel framework to Diagnose Novel Coronavirus Disease (COVID-19) in X-Ray images. doi: 10.1016/j.imu.2020.100405
14. Busby LP, Courtier JL, Glastonbury CM., 2018. Bias in radiology: the how and why of misses and misinterpretations. Radiographics. doi: 10.1148/rg.2018170107
15. Farhat H, Sakr GE, Kilany R., 2020. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. doi: 10.1007/s00138-020-01101-5
16. Mangal A, Kalia S, Rajgopal H, Rangarajan K, Namboodiri V, Banerjee S, et al., 2020. CovidAID: COVID-19 Detection Using Chest X-Ray. doi: 10.48550/arXiv.2004.09803
17. Stogiannos N, Fotopoulos D, Woznitza N, Malamateniou C., 2020. COVID-19 in the radiology department: What radiographers need to know. doi: 10.1016/j.radi.2020.05.012
18. Heinrichs B, Eickhoff SB., 2020. Your evidence? Machine learning algorithms for medical diagnosis and prediction. doi: 10.1002/hbm.24886
19. Mobiny A, Cicalese P, Zare S, Yuan P, Abavisan M, Wu C, et al., 2020. Radiologist-Level COVID-19 Detection Using CT Scans with Detail-Oriented Capsule Networks. doi: 10.48550/arXiv.2004.07407
20. Tahir AM, Qiblawey Y, Khandakar A, Rahman T, Khurshid U, Musharavati F, Islam MT, Kiranyaz S, Al-Maadeed S, Chowdhury MEH, 2021. Deep Learning for Reliable Classification of COVID-19, MERS, and SARS from Chest X-Ray Images. doi: 10.1007/s12559-021-09955-1
21. Pooch E, Ballester P, Barros R., 2021. Can we trust deep learning models' diagnosis? The impact of domain shift in chest radiograph classification. doi: 10.48550/arXiv.1909.01940
