Medical Imaging using Deep Learning Models

— Deep learning plays a growing role in quality healthcare by enabling fast, automated, and accurate medical image analysis. In clinical applications, medical imaging is one of the most important sources of information: with its help, experts can detect, monitor, and diagnose problems in a patient's body. Understanding medical image analysis with deep learning requires familiarity with Artificial Neural Networks and Convolutional Neural Networks. Deep learning approaches are gaining attention in the medical imaging field for evaluating the presence or absence of disease in a patient. Mammography images, digital histopathology images, and computerized tomography are some of the areas on which DL implementations focus. This paper reviews recent developments in the field and offers a critical discussion, describing in detail the modern deep learning models applied to medical image analysis. The future of deep learning models is promising, and according to experts, deep learning techniques have already outperformed medical experts in numerous tasks. However, deep learning also has drawbacks and challenges that must be addressed, such as limited datasets. Researchers are working to mitigate these challenges so that AI can be deployed to enhance healthcare.

I. INTRODUCTION

In recent years, deep learning has become an essential part of the advancement of artificial intelligence. Together with AI, it has wide applications in scientific fields such as brain circuit studies, computer vision, natural language processing, chemical structure analysis, and DNA analysis. Recently, deep learning algorithms (DLAs) have garnered the attention of medical researchers in the medical imaging field [1]. With the help of deep learning, accurate and fast medical image analysis can be performed, so the technique holds great promise in this field.
In the past few years, a rapid increase has been noticed in medical image services in the healthcare system. Services including computed tomography, magnetic resonance angiography, endoscopy, nuclear medicine imaging, pathological tests, and radiography are always in demand [2]. However, these images are often difficult to evaluate, and analysis is time-consuming because of the limited availability of radiologists.
The deep learning model has received wide appreciation from medical researchers for its promising future in the healthcare system. A deep learning model is a massive multilayer network of artificial neurons that can perform disease diagnosis from huge amounts of data and extract crucial information automatically with the help of local interconnections [3]. Deep learning has already revolutionized fields like computer vision, speech recognition, and language processing, and it now shows a promising future in medicine. Deep learning models have many applications in the medical industry as healthcare data becomes digitalized [4]. Moreover, these models have already shown promising improvements in the medical field, giving researchers hope that patients can safely interact with artificial intelligence-based medical systems, which will play a major role in improving patient healthcare [4].

A. Fundamentals of Deep Learning
In the recent past, both neural networks and deep learning have received wide attention in the medical domain because of their ability to learn from context. Both techniques are used widely in applications including smart homes, classification and prediction problems, object recognition, and image recognition, as they can adapt to multiple data types [5]. Deep learning functions in a manner analogous to the human brain: the system filters inputs through a series of layers that support prediction and classification of the data. A neural network's layers act like the layered filters in the brain, with each layer providing feedback to the adjacent layer. The feedback cycle continues until a precise output is achieved. Each layer is assigned weights, and during training these weights are adjusted until the network produces the proper output.
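The weight-adjustment feedback cycle described above can be sketched with a single artificial neuron trained by gradient descent on a toy task (all data and names here are illustrative, not from the surveyed systems):

```python
import numpy as np

# Toy training data: learn y = 2*x with one input, one weight, no bias.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0    # initial weight
lr = 0.01  # learning rate

# Each pass computes an output, measures the error, and feeds it
# back to adjust the weight -- the feedback cycle described above.
for _ in range(500):
    y_pred = w * x                        # forward pass through the "layer"
    grad = np.mean(2 * (y_pred - y) * x)  # gradient of mean squared error
    w -= lr * grad                        # weight update from the feedback

print(round(w, 3))  # converges close to 2.0
```

A full network repeats this update for every weight in every layer via backpropagation; the principle is the same.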
Supervised, unsupervised, and semi-supervised learning are the three categories of DL techniques. In supervised learning, the model is trained with the help of known input-output pairs: each known value comprises an input and a recommended output value, known as the supervisory signal [6]. The method uses the existing labels to predict the labels of the desired outcome. Classification methods use supervised learning, and it can be applied to tasks such as face identification, traffic-sign recognition, and speech-to-text conversion.
Fig. 1. Categories of Deep Learning [5].
Semi-supervised learning is the in-between technique of supervised and unsupervised machine learning. It consists of two types of values: labeled and unlabeled. A small amount of labeled data is used in conjunction with the unlabeled data to help improve learning accuracy. Unsupervised learning, in contrast, focuses on the inter-relations among the elements of a dataset and classifies the data without the help of labels [7]. Neural networks, clustering, and anomaly detection are some of the algorithms that follow these techniques. Clustering, in particular, aims to find similar elements within a dataset, and the technique has wide application in security domains.
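The clustering idea mentioned above can be made concrete with a minimal k-means sketch on one-dimensional data (the data and the crude initialization are illustrative only):

```python
import numpy as np

def kmeans_1d(points, k, iters=20):
    """Minimal k-means sketch: group similar values into k clusters."""
    points = np.asarray(points, dtype=float)
    centers = np.sort(points)[:k].astype(float)  # crude initialization
    for _ in range(iters):
        # Assign each point to its nearest center (no labels needed).
        labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean()
    return centers, labels

centers, labels = kmeans_1d([0.8, 1.0, 1.2, 9.0, 9.5, 10.1], k=2)
print(np.round(np.sort(centers), 2))  # two well-separated cluster centers
```

Note that no labels are supplied anywhere: the grouping emerges from the inter-relations of the data alone, which is what distinguishes unsupervised from supervised learning.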
For feature processing and extraction, deep learning techniques utilize an Artificial Neural Network (ANN) [2]. To form a summary representation, one of the major learning mechanisms is the feedback technique, in which each level updates its input data. Deep learning techniques add the number of layers needed to transform the data. The chain of transformations is called the Credit Assignment Path; in a feed-forward NN, the depth of the credit assignment path is the total number of hidden layers plus the output layer [8]. In a Recurrent Neural Network, one or more signals can traverse a layer many times, so depth cannot be determined in that case. Neural networks are widely used for image processing, and the best technique here is the CNN [9]. A CNN's feature extraction is automatic and is performed during training on the images, making deep learning a much more accurate method in image-processing domains.
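The filters a CNN learns are applied by convolution. A minimal numpy sketch of one 2-D convolution step follows; the filter here is a hand-written vertical-edge detector rather than a learned one, purely for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is a weighted sum of an image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a vertical edge, and a vertical-edge filter.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

response = conv2d(image, kernel)
print(response)  # strongest response along the edge column
```

During CNN training, the kernel values themselves are the weights being adjusted, which is why the feature extraction is "automatic."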
The RNN, by contrast, works similarly to the CNN but is widely used for language computation [1]. RNNs are applied to datasets comprising time series, text, financial data, audio, etc. Another important methodology with a promising future in medicine is the Generative Adversarial Network (GAN). It works on the principle of a generator network paired with a discriminator [3]: the discriminator learns to distinguish the fake data produced by the generator from real data. The two networks train simultaneously to enhance each other, and the resulting GANs are used, for example, to generate images from text. Google's Inception network introduced the inception block, an advanced deep learning construct that computes convolutions and pooling operations in parallel [10], helping to automate responsibilities in image processing.
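The generator/discriminator interplay can be made concrete through the two loss terms of the standard GAN objective. In this hedged numpy sketch, the networks themselves are replaced by fixed discriminator scores for brevity; only the loss bookkeeping is shown:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator probabilities."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Discriminator outputs: probability that a sample is real (made-up values).
d_real = np.array([0.9, 0.8])  # scores on real images
d_fake = np.array([0.2, 0.1])  # scores on generator output

# The discriminator wants real -> 1 and fake -> 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# The generator wants the discriminator to call its fakes real (fake -> 1).
g_loss = bce(d_fake, np.ones_like(d_fake))

print(round(d_loss, 3), round(g_loss, 3))
```

Here the discriminator is currently "winning" (low `d_loss`, high `g_loss`); in real training, both networks would alternately update their weights to drive down their own loss.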

B. Research Objective
This paper surveys the different Deep Learning models and their applications in the medical domain, especially in medical image analysis.

C. Research Motivation
Medical image analysis is one of the important tasks in the medical domain, and recent developments have made MRI, computerized tomography scans, and histopathology images essential for keeping proper patient records [11]. Traditionally, these medical images require a lot of time and manual examination, and the process depends on the professional experience of the examiner. Machine learning researchers have studied solutions for the medical imaging system [12]. However, the performance and accuracy of those algorithms did not show promising results, were limited to specific applications, and still depended on the experience of the healthcare professional extracting the features.

A. Deep Learning in Medical Imaging

1) X-Ray Imaging
Chest radiography plays a crucial role in identifying lung diseases and heart pathologies. Lung diseases such as consolidation, atelectasis, tuberculosis, and pleural effusion can be detected with chest radiography [13]. Compared with other imaging methods, x-ray images are budget-friendly and involve a lower radiation dose; they also play a major role in mass screening. The authors of [14] proposed modality-specific ensemble learning so that abnormalities in chest x-ray scans can be detected with the help of class-selective relevance mapping (CRM). The CovidGAN model, based on the Auxiliary Classifier Generative Adversarial Network (ACGAN), was proposed by [15] to generate the synthetic CXR images required for detecting the COVID-19 virus. To detect COVID-19 in CXR images, [16] proposed a GAN with deep transfer learning.

2) Computerized tomography (CT)
Computerized Tomography uses computers and rotating x-ray devices to make cross-sectional images of the body, showing the soft tissues, blood vessels, and bones in its various parts. Because of its fine detection capacity, it provides a much more detailed assessment of the patient. CT is also used in the detection of pulmonary nodules, which is critical in diagnosing lung cancer at an early stage [17]. The authors of [18] proposed an ensemble FCNet classifier based on GoogLeNet for the classification of liver lesions. In another study, [19] proposed a DLA framework based on supervised MSS U-Net and 3D U-Net so that kidneys can be automatically segmented and kidney tumors detected from CT images.
For lung nodule detection, a multidimensional Region-based Fully Convolutional Network (mRFCN) was proposed by [20]. The framework achieved a classification accuracy of 97.91%, and the approach is highly effective in detecting small nodules without sacrificing sensitivity or accuracy.

3) Mammography (MG)
Breast cancer is the leading cause of cancer death among women worldwide. It can be detected at an early stage with the help of the MG tool, which is very reliable. Breast diseases can be easily visualized with MG, a low-dose x-ray imaging technique. Dense breast tissue, however, can obscure tumors and make them difficult to classify with mammography screening [21]. Detection, segmentation, and classification are the three major steps in analyzing a breast lesion with MG.
The research focuses on automated classification and detection of tumors at an early stage with the MG tool. As the past decades show, issues in the detection and classification of breast cancer can be overcome with DLA. A Fuzzy Fully Connected Layer (FFCL) architecture, based on traditional CNN, that fuses fuzzy rules for semantic BI-RADS scoring was developed by [23]. The FFCL framework produces appropriate results for BI-RADS scoring in both triple-class and multiclass classification.
CNN-based classification of breast composition according to the ACR standard, using CNN feature extraction, has been upheld by [8]. Automatic detection and classification of malignant and benign lesions can be performed by a CAD mechanism based on Faster R-CNN [22]. A deep-CNN-based AI system that detects calcified lesions as well as soft tissue in digital breast tomosynthesis (DBT) images was proposed by [24]. A twelve-layer CNN that detects breast arterial calcification (BAC) in mammogram images and analyzes the risk of coronary artery disease was proposed by [25].

4) Histopathology
Diseases like cancer of the breast, lungs, and kidneys can be detected with histopathology, the branch of science that studies human tissue under a microscope on a glass slide. In histopathology, staining is used to highlight and visualize a prominent part of a specific tissue: Hematoxylin and Eosin (H&E) staining renders the nucleus dark purple and other structures pink. For the past century, cancer pathology diagnosis and grading have been done with H&E stains [26]. Digital pathology is now replacing this conventional imaging approach.
Deep learning methods provide promising results in analyzing histopathology images, covering image classification, cell segmentation, tissue segmentation, and nucleus detection. The whole slide imaging (WSI) method is the latest development in digital pathology for image analysis; it digitizes glass slides with stained tissue sections in high definition. The challenges of analyzing such multi-gigabyte images when developing deep learning models were reviewed by [26].

5) Endoscopy
Endoscopy is a process in which a long, non-surgical, solid tube is inserted directly into the body to assess internal organs or tissue meticulously in visual form. The respiratory tract, gastrointestinal tract, female reproductive tract, and urinary tract can be examined with endoscopy. Deep learning techniques for analyzing gastrointestinal endoscopy images were reviewed by [27]. Wireless Capsule Endoscopy (WCE) has revolutionized examination of the gastrointestinal (GI) tract: it is painless and direct, requires no invasive inspection, and is also used for diagnosing GI diseases such as ulcer bleeding. Positron Emission Tomography (PET), a nuclear imaging tool, injects a specific radioactive tracer that visualizes molecular-level activity inside the tissue. The advantages that deep learning offers over machine learning for PET images were discussed by [28].
A deep learning-based structure was recommended by [29] for detecting hookworms in WCE images, in which two CNN networks, an edge-extraction network and a hookworm-classification network, were combined. The edge-extraction network detects tubular regions, and the resulting tubular structures play a major role in identifying hookworm. A CNN model for detecting and predicting the depth of invasion in early gastric cancer (EGC) was formulated by [30]; the course of treatment depends on the severity of the tumor, so this prediction plays a crucial role. The authors developed a VGG-16 model to segregate endoscopic images into EGC and non-EGC.

This paper evaluates academic papers and publications that were reviewed to study the implementation of deep learning models in medical images. Medical image processing comprises three major tasks: classification, detection, and segmentation [8]. In classification, deep learning models assign images to two or more classes. In detection, deep learning models locate organs and tumors in medical images. In segmentation, deep learning models delineate the region of interest in the image.

C. Classification
Deep learning has upgraded several application areas of image classification. The method addresses image-classification issues and thereby benefits medical image analysis. Researchers have investigated methods for diagnosing and characterizing pulmonary nodules in CT images, which demonstrates the success of CNNs in image classification [17]. Deep learning has also been applied successfully to lung nodule classification. To classify Interstitial Lung Disease (ILD) patterns on CT images, [31] proposed a deep CNN framework; previous work on this task used patch-based algorithms, and holistic image classification obtained an accuracy of 68.8%. The brain can be affected by Alcohol Use Disorder (AUD), or alcoholism, and neuroimaging helps observe the structure of the brain. A 10-layer CNN model for the AUD problem, using batch normalization, dropout, and PReLU techniques, was proposed by [32]. The risk of breast cancer can be gauged with a breast parenchymal density indicator, and DL algorithms used for assessing density can reduce the pressure on radiologists; breast density classification by DL has been implemented successfully. A CNN-based method for predicting the Visual Analogue Score (VAS) for breast density estimation was introduced by [33].
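At the output of such classification networks, raw class scores are typically converted to probabilities with a softmax layer. A minimal sketch follows; the class names and scores are made up for illustration and do not come from any of the cited systems:

```python
import numpy as np

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()

# Hypothetical final-layer scores for three classes of a CT patch.
classes = ["normal", "benign nodule", "malignant nodule"]
scores = np.array([0.5, 2.1, 1.0])

probs = softmax(scores)
print(classes[int(np.argmax(probs))])  # -> "benign nodule"
```

The predicted class is simply the one with the highest probability; reported metrics such as accuracy, sensitivity, and specificity are computed by comparing these predictions against expert labels.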
The detection of Autism Spectrum Disorder (ASD) in functional Magnetic Resonance Imaging (fMRI), based on anatomical regions of the brain, was performed by [34] using deep learning. They formulated a two-stage neural network process. In the first stage, a CNN (2CC3D) with six convolutional layers, four max-pooling layers, and two fully connected layers was trained, using a sigmoid output layer. In the second stage, the researchers used the anatomical structure of brain fMRI to detect biomarkers for ASD, developing a frequency-normalized sampling method for this purpose. Evaluation on multiple databases showed dynamic results for neurological functions.
Inception V3 was employed by [35] to categorize skin cancer types in medical images, achieving performance comparable to human experts. A 10-layer CNN model proposed by the authors achieves an accuracy of 97.71%, a sensitivity of 97.73%, and a specificity of 97.69%. Cerebral microbleeds (CMBs), small chronic brain hemorrhages, can cause long-term disability, cognitive impairment, and neurologic dysfunction, so it is important to identify CMBs at an early stage for effective treatment. CMBs can be detected by a transfer-learning-based DenseNet, as proposed by [36]; the DenseNet-based model obtained an accuracy of 97.71%. Detection of cancerous versus non-cancerous regions in breast mammogram images was formulated by [37] using a CNN. A 13-layer CNN for diabetic retinopathy diagnosis using image classification, trained on roughly ninety thousand fundus images and rendering appropriate classification results, was proposed by [3].

D. Detection

Fig. 10. CNN based architecture for Detection [38].
In medical image analysis, detection and localization of the diseased organ play a crucial role. The detection process identifies a specific region of interest and highlights it by drawing a bounding box around it, for example, around a brain tumor in an MRI scan. The detection task is also known as localization. Computer-Aided Detection (CAD) is a commonly known form of medical image analysis: a CAD system can confirm early signs of disease and is mainly used for breast and lung cancer detection.
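Predicted bounding boxes are commonly scored against ground truth with intersection-over-union (IoU), the standard overlap measure used in detection benchmarks. A minimal sketch, with made-up box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Ground-truth tumour box vs. a predicted box (coordinates are illustrative).
score = iou((10, 10, 50, 50), (30, 30, 70, 70))
print(round(score, 3))  # ~0.143: weak overlap
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a fixed threshold such as 0.5.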
Transfer learning is discussed by [31], who note that features pre-trained on natural images can transfer to medical images, although results vary between the two domains. The study shows that transfer learning with pre-trained CNN architectures provides appropriate and prominent results in diagnosing interstitial lung disease and detecting enlarged thoracoabdominal lymph nodes from CT scans.
CNNs can detect prostate cancer in biopsy specimens and breast cancer metastasis in sentinel lymph nodes, as mentioned by [38]. The CNN uses four convolution layers for feature extraction and three classification layers. In the Digital Mammography DREAM Challenge, the Faster R-CNN model was used to detect mammographic lesions and classify them into malignant and benign tumors [38].
A CNN model for detecting Alzheimer's disease from anatomic regions of the brain, using MRI images of the OASIS dataset, was proposed by [39]. To categorize four classes of AD, the authors developed baseline CNN networks alongside ResNet and Inception-v4. Patients with moderate, mild, very mild, and no dementia are classified into these classes, with reported accuracies of 33%, 62%, 99%, and 75%, respectively. The proposed method provides accurate results on user data and is claimed to generalize effectively to the ADNI dataset.
Fig. 11. An Axial MRI Brain Scan [39].
A deep reinforcement learning approach for identifying anatomical landmarks in CT scans, ultrasound, and cardiac MRI was proposed by [11]. A 3D CNN model was used for detecting cancerous lung nodules in CT scans. A trained 13-layer deep CNN model for detecting mitosis in breast cancer histology images was designed by [21].

E. Segmentation

In medical image analysis, as well as in computer-aided diagnosis and surgery, segmentation of various structures and organs plays an important role. Magnetic Resonance Imaging (MRI), ultrasound, x-ray, Positron Emission Tomography (PET), Optical Coherence Tomography (OCT), and Computed Tomography (CT) images are all segmented with deep learning-based medical image analysis. These imaging modalities help to compare different biological structures, organs, tissue classes, and pathologies. Deep learning is commonly used in segmenting medical images, numerous articles document its progress in this area, and it has been successful in segmenting breast tissue.
For the segmentation of aggressive prostate cancer, a GAN was used by [38]. The characteristics of deep learning models and modern CNNs used for classification in MRI brain images were reviewed by [40]. Neural Ordinary Differential Equations (NODEs) were used in the U-Net framework to obtain better segmentation results, as shown on the colon gland segmentation dataset (GlaS) by [41]. The U-Net framework was developed by [41] for segmentation of neuronal structures in electron microscopic stacks. The authors also adapted the U-Net for the detection of caries in dental x-ray radiographs; this result surpassed all previous results, as the mechanism identifies seven different tooth structures in these radiographs.
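Segmentation quality in studies like these is commonly reported with the Dice coefficient, which measures the overlap between a predicted mask and the ground truth. A minimal numpy sketch with synthetic masks (the masks are illustrative, not from any cited dataset):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1   # ground-truth organ mask (4x4 region)
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 3:7] = 1    # prediction shifted by one pixel

print(dice(pred, truth))  # overlap score between 0 (disjoint) and 1 (perfect)
```

A Dice score of 1.0 indicates a perfect match; the one-pixel shift above already costs a substantial fraction of the score, which is why segmentation metrics are sensitive to boundary accuracy.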
A deep learning architecture for segmenting gastric cancer, which showed the benefits of multi-scale modules and specific convolution operations, was developed by [42]. Deep learning was used for skin lesion segmentation by [43]; the authors used an encoder-decoder network for feature extraction. Integrating the end-point error and negative log-likelihood for segmenting melanoma structures with sharp edges helped to minimize the loss function. The SLSDeep method was evaluated on an ISBI skin lesion dataset and showed promising results [43]. The U-Net was used for morphometry, cell counting, and detection by [44], and several segmentation tasks were accomplished with this framework.

F. Registration

Spatial alignment of images to a common anatomical space, performed through image registration, is very common in medical image analysis. The process transforms a source image to match a target image. Image registration received significant attention even before deep learning, as it is a mainstream task in medical image analysis [2]. With the advancement of deep learning, neural networks have entered medical image registration.
A deep learning technique to mitigate differences in eye position across longitudinal 3D retinal images was used by [45]. In this method, the sequence of tasks includes pre-processing into projection images, identifying vessel shadows, and applying enhancement filters. The SURF algorithm was used for feature extraction, and RANSAC for outlier removal.
Fig. 13. CNN-based Common Registration Pipeline [45].
A U-Net architecture-based deep learning framework for another project was proposed by [46]. The experts used 8 ROIs from the medulla and cortex of the segmented kidney to evaluate the performance of their mechanism. The normalized root-mean-square error (NRMSE) values for the medulla and cortex were low after registration, as presented by the authors for free-breathing measurements.
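The NRMSE metric reported above can be sketched directly; this hedged example uses tiny synthetic arrays (not the kidney data of the cited study) and normalizes by the reference intensity range, one of several normalization conventions in use:

```python
import numpy as np

def nrmse(reference, moved):
    """Root-mean-square error normalized by the reference intensity range."""
    rmse = np.sqrt(np.mean((reference - moved) ** 2))
    return rmse / (reference.max() - reference.min())

reference = np.array([[0.0, 0.5], [0.5, 1.0]])
misaligned = reference + 0.2   # poorly registered image
registered = reference + 0.01  # well registered image

# Successful registration should drive the NRMSE down.
print(nrmse(reference, misaligned) > nrmse(reference, registered))  # True
```

Lower NRMSE after registration, as the authors report, indicates that the transformed image matches the reference more closely.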
Researchers have benefited from deep learning in image registration, in which two images are aligned in the spatial domain; the process is essential for various medical image applications. Deep autoencoders that measure the similarity of MRI and CT images of the head for registration purposes were devised by [47]. A 3D CNN-based framework for registering 2D x-ray images, proposed by [48], showed promising registration results in confirming the location and position of transesophageal probes, hand implants, and the knee during surgery. A 3D FCN-based mechanism for registering CT lung inspiration-expiration image pairs of the chest was proposed by [49]; the DIRLAB and CREATIS datasets were used to benchmark the performance of this method [49]. Deep learning technology for registering 2D and 3D images of the chest region is gaining approval from the medical imaging community.

G. Limitations and Challenges
Several issues limit the effectiveness of deep learning algorithms in medical imaging, including:
• Data inconsistency in contrast, resolution, and signal-to-noise ratio is a major issue encountered while using deep learning models in medical imaging [17]; clinical practice routinely raises these challenges.
• The non-standardized acquisition of medical images complicates their evaluation [50].
• The efficacy of DL techniques in evaluating medical images is hindered by the need for comprehensive medical annotations.
• Data sharing is an intricate process, so limited data is a major issue; medical data privacy must be addressed from both sociological and technological viewpoints.
• Large annotated datasets are necessary for developing DLAs.
Annotating the medical images is another issue [3]. Labeling medical images requires radiologists' domain knowledge and takes a considerable amount of time. Despite its success, DL technology still faces open questions in the medical sector: it has not been confirmed that DL increases medical efficiency, improves patient satisfaction, or reduces medical costs [4]. It is therefore essential to formulate guidelines for deep learning models used in medical imaging and to demonstrate the effectiveness of deep learning methods in clinical trials.
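The data-inconsistency challenge listed above (contrast and intensity differences between scanners) is often partly mitigated by intensity normalization before training. A minimal sketch of z-score normalization on synthetic "scans" (the arrays and the affine contrast difference are illustrative assumptions):

```python
import numpy as np

def zscore_normalize(image):
    """Standardize image intensities to zero mean and unit variance."""
    return (image - image.mean()) / image.std()

# Two scans of the same anatomy acquired with different contrast settings:
# scan_b is scan_a under a linear intensity change (gain and offset).
scan_a = np.array([[100.0, 120.0], [140.0, 160.0]])
scan_b = scan_a * 2.0 + 50.0

norm_a = zscore_normalize(scan_a)
norm_b = zscore_normalize(scan_b)

print(np.allclose(norm_a, norm_b))  # True: the contrast difference is gone
```

Normalization of this kind removes linear acquisition differences, but it cannot fix non-standardized acquisition protocols or missing annotations, which remain open problems.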

III. METHODS AND MATERIALS
This research report includes a detailed review of advanced deep learning methodologies suitable for medical imaging. It primarily focuses on recent studies integrating modern technologies into the medical sector. These techniques are categorized based on their pattern-recognition abilities and the taxonomy of human anatomy. The review clearly shows an acute shortage of appropriately annotated large-scale datasets for medical imaging; this is one of the fundamental challenges limiting the application of deep learning in this field. The paper also draws on the literature of pattern recognition, computer vision, and machine learning, serving as a guideline to overcome the challenges of medical image analysis with deep learning techniques, and it includes a detailed review of public datasets for training deep learning algorithms for medical imaging. Insufficient familiarity with deep learning in the medical community is also a challenge for its application: it is essential to understand the core technical concepts of deep learning for its appropriate application, which will improve the efficiency and effectiveness of this technology.
Research papers on this topic discuss the advanced applications of deep learning across various domains. The list was supplemented with books available in the IEEE Xplore, Elsevier, and Springer databases. The authors created a list of the 200 top search results from Google Scholar using query terms such as "medical image analysis" and "deep learning". All these research papers and books were manually shortlisted to ensure that they provide substantial and relevant information on the subject. These resources were primarily published over the last three years, although a few older research journals are also cited in this report.
The primary purpose of this research paper is to provide a generalized overview of deep learning methodologies and their application in medical imaging. The paper focuses on the aspects that are beneficial for clinical applications. Some of the authors of this report are practicing radiologists and surgeons, and their perspective is expected to facilitate research in this domain. It will help in overcoming the challenges of designing and implementing deep learning techniques for patient care and medical science.

IV. RESULTS AND DISCUSSION
Deep learning has multiple applications across various sectors of industry. Experts have gradually been exploring its potential and suitability for different applications, and a large number of tools are being developed to work in coordination with deep learning algorithms [11]. Over the last decade, experts and investigators have worked continuously to create new designs suitable for machine learning. Some of the popular tools based on machine learning include:

A. Caffe Tool
Caffe is a high-precision tool designed for industrial purposes. It applies computer-vision technology based on deep learning algorithms within an open framework, and it offers fast execution, fine-tuning, and training of models without the need for coding [48]. The tool specializes in image processing and is compatible with a Python API, which are among its advantages. However, there are challenges related to the integration of new layers, scalability, dependence on new network structures, and extension dependence.

B. Torch Tool
The design and functionality of this tool are based on scientific computing, and it is compatible with a series of machine learning algorithms. The developers implemented it in the C programming language with a Lua scripting interface, making it compatible with many types of technologies. Notable advantages include flexibility, ease of coding, and compatibility with different computational setups [17]; its trainable algorithms are one of its major benefits. However, the tool is largely dependent on the layer structure of the network, the learning curve is steep, and it lacks scalability. There are also some issues with the compatibility of its Python API.

C. Theano Tool
The Theano tool was designed and developed at the Université de Montréal back in 2008. It makes use of Python. The tool underpins packages like Pylearn2 and Keras that are compatible with deep learning algorithms. The architecture of this tool is based on a symbolic tensor model [51]. The notable features are recursive network support, flexibility, and compatibility with high-level deep learning. The disadvantages of this tool are long compilation times, very few pre-trained prototypes, and difficulty in code modification.

D. Tensorflow Tool
This is an open-source tool that was designed and developed by Google. The underlying code is based on deep learning neural networks. The architecture of this tool performs numerical computation with dataflow graphs: every node in the dataflow model signifies a mathematical operation, while the edges represent the data arrays (tensors) flowing between them [2]. It is a platform-independent tool that supports high-quality GPUs, distributed training, fast development, and enhanced portability. Major drawbacks of this tool are the lack of sufficient pre-trained prototypes, the need for large memory space, and incompatibility with dynamic convolution input operations [1].
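The dataflow idea described above — nodes as mathematical operations, edges as data arrays — can be illustrated with a minimal sketch in plain Python. This is a toy stand-in, not TensorFlow's actual API; all class and function names here are hypothetical, and plain lists stand in for tensors.

```python
# Toy dataflow graph: nodes are operations, edges carry arrays
# between them, mirroring the model the section describes.

class Node:
    """A graph node: an operation applied to the outputs of its inputs."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def evaluate(self):
        # The "edges": arrays flowing out of each input node.
        args = [n.evaluate() for n in self.inputs]
        return self.op(*args)

class Const(Node):
    """A leaf node holding a fixed array."""
    def __init__(self, value):
        self.value = value
    def evaluate(self):
        return self.value

# Element-wise operations over plain Python lists (stand-ins for tensors).
mul = lambda x, y: [a * b for a, b in zip(x, y)]
add = lambda x, y: [a + b for a, b in zip(x, y)]

a, b, c = Const([1.0, 2.0]), Const([3.0, 4.0]), Const([0.5, 0.5])
graph = Node(add, Node(mul, a, b), c)   # computes (a * b) + c
print(graph.evaluate())                  # [3.5, 8.5]
```

Deferring execution until `evaluate()` is called is what lets such a framework analyze and distribute the whole graph before running it.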

E. Insufficient Annotated Data
It is arguably the most notable feature of deep learning that makes it very different from classical machine learning. Deep learning algorithms are capable of modeling highly complex mathematical functions. Typically, developers employ multiple layers for developing complex models [17]. However, deeper networks require more training parameters, and models with many parameters generalize well only when trained on large data sets; without sufficient data, reliable parameter values cannot be estimated. This problem is also common in classical machine learning models. Complex models trained on small volumes of data typically overfit the data set and show poor performance on unseen samples. This type of modeling is not desirable as it creates a false impression of the model's actual capability [27]. The data distribution learned by the system is also affected, as the model only learns peculiarities of the training data.
Consequently, applying deep learning algorithms to such domains is not straightforward, because there is very limited training data for preparing the algorithms. In medical image analysis, only small volumes of annotated data are available, whereas large annotated data sets are needed for developing powerful deep learning models [3]. Here, the lack of appropriately annotated data is a major challenge that affects the performance of deep learning models. This research report clearly shows that this is a fundamental challenge for the development of medical imaging technology with deep learning, and it is essential to overcome this issue to fully exploit the potential of the technology.
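One common mitigation for scarce annotated data, widely used in medical imaging though not prescribed by this paper, is to enlarge the training set with label-preserving transformations such as flips. A minimal sketch, using a toy 2×2 "image" as nested lists (all function names here are hypothetical):

```python
# Toy illustration of label-preserving data augmentation: each
# transform yields a new training sample that keeps the original label.

def horizontal_flip(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def vertical_flip(image):
    """Mirror the rows top-to-bottom."""
    return image[::-1]

def augment(dataset):
    """Expand (image, label) pairs with flipped variants."""
    out = []
    for image, label in dataset:
        out.append((image, label))
        out.append((horizontal_flip(image), label))
        out.append((vertical_flip(image), label))
    return out

dataset = [([[1, 2], [3, 4]], "lesion")]
print(len(augment(dataset)))  # 3 samples from 1 annotated image
```

Real pipelines apply rotations, crops, and intensity shifts as well, but the principle is the same: multiply the effective size of a small annotated data set without collecting new labels.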

F. Imbalanced Data
Another major limitation in medical imaging processes is the imbalance of data sets in comparison to computer vision tasks. The data set for training a model for the detection of a specific disease often contains only a restricted number of positive samples. Due to this, training deep networks with imbalanced data becomes extremely challenging [51]. The models also become biased toward the majority class due to insufficient positive data. The low frequency of occurrence of positive samples affects the accuracy of medical imaging processes. It is important to balance the original data when developing a large-scale data set. Therefore, additional care must be taken when deploying a deep learning model for medical imaging purposes [47].
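A standard remedy for this imbalance, not specific to any tool above, is to weight each class inversely to its frequency so that the few positive samples contribute as much to the training loss as the many negatives. A minimal sketch of that weighting scheme (the function name is hypothetical; the formula mirrors the common "balanced" heuristic):

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency:
    weight(c) = total / (num_classes * count(c))."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Typical medical-imaging imbalance: few positives, many negatives.
labels = ["positive"] * 10 + ["negative"] * 90
weights = class_weights(labels)
print(weights)  # 'positive' is weighted 9x 'negative' (the 90/10 ratio)
```

Such per-class weights are then multiplied into the loss term of each training sample, so the optimizer is penalized equally for errors on rare and common classes.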

G. Missing Confidence Interval
Literature describing the effectiveness of deep learning models rarely reports prediction confidence. The output signal of a neuron is typically squashed into a single probability value, and such an insufficient account of predictive uncertainty is not acceptable for medical imaging tasks [27]. It is observed that a large number of deep learning methodologies in medical imaging try to train deep learning models for end-to-end functioning. The end-to-end learning approach is the basis of deep learning, but it is not always the right way to explore the capability of this technology in the medical field. This approach also limits the large-scale application and use of deep learning techniques [11].
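The gap between a single probability value and a confidence estimate can be sketched concretely. Below, a softmax turns raw network outputs (logits) into one point-estimate probability; averaging several hypothetical networks' outputs then yields a mean and a spread — a crude stand-in for the missing confidence interval (the logit values are invented for illustration):

```python
import math
import statistics

def softmax(logits):
    """Squash raw logits into probabilities summing to 1."""
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# A single network yields one probability per class: a point
# estimate with no interval attached.
point = softmax([2.0, 0.5])

# Averaging several (hypothetical) networks' outputs gives both a
# mean prediction and a spread, approximating a confidence interval.
ensemble = [softmax(l) for l in ([2.0, 0.5], [1.5, 0.8], [2.2, 0.3])]
disease_probs = [p[0] for p in ensemble]
mean = statistics.mean(disease_probs)
spread = statistics.stdev(disease_probs)
```

Ensembling is only one of several uncertainty-estimation techniques (Monte Carlo dropout and Bayesian networks are others); the point is that a spread, not a bare probability, is what clinical decision-making needs.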
Another non-technical challenge for the deployment of deep learning is public acceptance. People are not comfortable with the fact that results are produced by a non-human element, and the scenario is further complicated by the slow adoption of artificial intelligence [4]. The performance of machine learning algorithms in image recognition is often higher than that of humans, so the accuracy of results is likely to be much higher with the application of machine learning in medical image analysis. Some research reports and reviews have also described machine learning matching dermatologists in diagnostic tasks [3]. However, there are still legal and moral factors that affect the application of machine learning in the medical sector: if patients suffer morbidity or are misdiagnosed due to AI-assisted medical devices, questions of responsibility arise. The issue is further accentuated by the inability to clearly explain the black box of machine learning technology [12]. Even so, it is likely that dependence on technology will increase, and people will start accepting AI-based technologies in various walks of life.

H. Future Research Directions
The adoption of deep learning algorithms in the medical sector will facilitate substantial improvement in different clinical processes. Deep learning can automate different operations and workflows to provide fast treatment to patients. It will also assist physicians in the automatic identification and classification of diseases to offer an accurate diagnosis, and it will reduce human error in the process of diagnosis and treatment [1]. Medical image analysis will also improve in terms of efficiency and accuracy, and diagnostic results obtained from deep learning-based devices will have large-scale applications over the next few years. Scientists and physicians are looking for innovative ways to provide the best possible treatment to patients with deep learning. Future research in medical image analysis will further help in the development of deep neural architectures, and improving the network structure will make a substantial impact on medical image analysis [27]. Designing deep learning models manually requires substantial expert knowledge; only with such knowledge, or with automated design methods, can manual design in deep learning be replaced. One of the notable features of deep learning research is the development of activation functions. The application of various imaging techniques plays a crucial role in the diagnosis and treatment of a patient, and this will also be beneficial for clinical diagnosis, treatment selection, and drug development for critical patients [47]. Solving issues like the lack of annotated medical data, poor monitoring, and immature research directions in deep learning is necessary for the development of medical image analysis. It is one of the fastest-growing technologies and will help in overcoming the existing challenges, opening up new opportunities for the application of deep learning in the medical sector.

V. CONCLUSION
As modern medical data takes digital form, there is huge potential for the application of deep learning across different sectors of the medical field. Deep learning has already revolutionized technologies like speech recognition, language processing, and computer vision, and it will have a significant impact on reforming the medical field. The adoption of modern technologies will facilitate more precise and quicker image processing. Medical imaging is one such sector where detailed information is necessary for taking clinical decisions. The large-scale application of deep learning models for image processing is important for classification, retrieval, segmentation, registration, and detection purposes. This research report also discusses the strategies and algorithms of deep learning. Besides giving a brief idea of deep learning algorithms in the medical field, it addresses two key objectives. First, it introduces deep learning along with the necessary theory. Next, it discusses medical image analysis with deep learning, involving multiple optimization techniques for obtaining appropriate results. After presenting the deep learning techniques, the report also reviews methods like classification, detection, and segmentation in medical image applications.
As per recent studies, researchers found that deep learning can outperform medical specialists in certain tasks. Deep learning models that employ convolution techniques can achieve high performance across different domains. However, there are challenges in deploying deep learning techniques, such as incorrect data, limited training samples, lack of standardization, imbalanced data, and poor data quality. There are also ethical considerations that affect the deployment of machine learning in medical image analysis.