Journal of Nuclear Medicine Technology
Research Article • Brief Communication

Intelligent Imaging: Artificial Intelligence Augmented Nuclear Medicine

Geoffrey M. Currie
Journal of Nuclear Medicine Technology September 2019, 47 (3) 217-222; DOI: https://doi.org/10.2967/jnmt.119.232462
School of Dentistry and Health Sciences, Charles Sturt University, Wagga Wagga, Australia

Abstract

Artificial intelligence (AI) in nuclear medicine and radiology represents a significant disruptive technology. Although there has been much debate about the impact of AI on the careers of radiologists, in nuclear medicine the opportunities enhance the capability of the physician while also reshaping the responsibilities of physicists and technologists. This transformative technology requires insight into its principles and opportunities so that it can be assimilated seamlessly into practice without the associated displacement of human resources. This article introduces the current clinical applications of machine learning and deep learning.

  • nuclear medicine
  • artificial neural network
  • deep learning
  • convolutional neural network
  • artificial intelligence

The use of artificial intelligence (AI) in nuclear medicine and radiology has emerged over the last 50 years (e.g., auto-contouring). Typically, AI has been involved in problem solving associated with logic and reasoning. The more recent developments in deep learning (DL) have been the subject of increased research and publications in radiology and nuclear medicine journals because of new capabilities in AI-driven image segmentation and interpretation. As early as 1976, commentators and experts on AI predicted that it would bring careers in medicine to an end (1). Although Geoffrey Hinton has been widely quoted as predicting AI would put radiologists out of a job (2), his more conservative perspective predicted significant changes to health care delivery and the way medicine is practiced (3). Even though the doomsday predictions may be exaggerated, there is no denying that AI, neural networks, and DL represent the greatest disruptive technology to confront radiology and nuclear medicine since the early days of Roentgen, Becquerel, and Curie. AI is both the vehicle for transport into the next century of sustainable medical imaging and, if ignored, a potential extinction-level competitor. The key to sustainable coexistence lies in understanding and exploiting the capabilities of AI in nuclear medicine while mastering those capabilities unique to the human health professional.

AI

Precision nuclear medicine heralds an exciting era with the reengineering of clinical and research capabilities. The term AI was first used in 1955 to broadly describe the use of computer algorithms (Fig. 1) to perform tasks that are generally associated with human intelligence (e.g., learning or problem solving) (4,5). A significant factor driving the emergence of AI in radiology has been that, since 2015, visual recognition using AI has had, for the first time, a lower error rate than the human error rate (5,6). An interesting application given the heightened capabilities of AI in visual recognition is in incidental findings. The classic “gorillas in our midst” experiment on inattentional blindness (7) highlighted that humans focusing on a specific task (counting the number of times a ball was passed) in a complex scene could render the observer blind to a person in a gorilla suit walking through the middle of the scene. This was later examined in chest CT interpretation with an artifactual gorilla inserted into multiple CT slices (8). The artifact was overlooked by 83% of expert radiologists, and 60% of those were shown—using eye tracking—to have looked directly at the artifact. Although incidental findings in general nuclear medicine studies are readily identifiable, inattentional blindness may decrease detection in more complex datasets associated with SPECT, PET, and coregistered images—a role, perhaps, for AI.

FIGURE 1. Hierarchy of AI.

Machine learning (ML) is a subtype of AI (Fig. 1) that uses algorithms that learn from data without being explicitly programmed (4,9). ML tends to be associated with solving problems of logic after learning from human-defined teaching cases. ML has been more widely used recently because of the emergence of improved hardware, the availability of big data or at least large datasets for training, and the fact that ML is a valuable tool for analysis of extracted features in radiomics (10). Radiomics interprets an image as data and extracts and analyzes features and groups of features to predict outcomes. Some features may be apparent to visual interpretation (semantic) whereas others may be revealed only through computational extraction. Radiomics has traditionally been associated with radiologic imaging (texture and shape, among many other features) but includes molecular imaging (the various SUVs, ejection fraction, and many more). The importance of radiomic feature extraction is in identifying those image features that, either individually or in combination with other -omic features, predict an outcome. This includes identifying redundancy in the data; that is, features that have a high correlation with one another. Indeed, ML can aid in determining which of many extracted radiomic features should be used alone or in combination (Fig. 2). Specific capabilities of ML include (2,5,11,12) disease or lesion detection and classification; automated image segmentation, preanalysis, and quantitation; extraction of radiomic features from image data; image reconstruction; case triage and reporting prioritization; research and data mining; and natural language processing.
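The redundancy screen described above can be sketched as a simple correlation check of the kind an ML pipeline might run before model training. The feature names and toy values below are invented for illustration only:

```python
import numpy as np

def redundant_feature_pairs(features, names, threshold=0.9):
    """Flag feature pairs whose absolute Pearson correlation exceeds
    the threshold -- candidates for removal as redundant."""
    corr = np.corrcoef(features, rowvar=False)  # features are columns
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(corr[i, j], 3)))
    return pairs

# Hypothetical toy data: 50 "patients", 3 radiomic features.
# suv_peak is constructed to track suv_max; sphericity is independent.
rng = np.random.default_rng(0)
suv_max = rng.normal(8.0, 2.0, 50)
suv_peak = 0.9 * suv_max + rng.normal(0.0, 0.1, 50)
sphericity = rng.normal(0.7, 0.1, 50)
X = np.column_stack([suv_max, suv_peak, sphericity])
print(redundant_feature_pairs(X, ["suv_max", "suv_peak", "sphericity"]))
```

Only the near-collinear pair is flagged; the independent feature survives as a candidate for combined modeling.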

FIGURE 2. Validation phase of ANN demonstrates basic structure of ML-based ANN.

Representation learning (RL) is a subtype of ML (Fig. 1) in which the algorithm does not learn from human-interpreted images (4). RL requires larger sets of training data to learn the features required to then accurately classify the images and extracted features. In many cases, if adequate training data are available, RL will perform better than ML (4). DL is, then, a subtype of RL (Fig. 3) that adds several processing layers (depth) to detect complex features in an image (4). The vehicle typically used by ML, RL, and DL is the artificial neural network (ANN). A convolutional neural network (CNN) is a type of ANN used for DL that applies a convolutional process to extract features from the image itself (Fig. 3), whereas an ANN typically has feature data as the input (Fig. 2).
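To make the ANN-versus-CNN distinction concrete, the minimal sketch below (weights and feature values are invented) shows the ANN case of Figure 2: a vector of already-extracted feature data is the input, rather than the image itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Single-hidden-layer ANN forward pass: extracted feature
    values in, class probability out (cf. Fig. 2)."""
    h = sigmoid(x @ w_hidden + b_hidden)  # hidden layer activations
    return sigmoid(h @ w_out + b_out)     # output node probability

# Hypothetical network: 4 input features, 3 hidden nodes, 1 output.
rng = np.random.default_rng(1)
w_h, b_h = rng.normal(size=(4, 3)), np.zeros(3)
w_o, b_o = rng.normal(size=3), 0.0
features = np.array([2.1, 0.4, 7.8, 0.9])  # invented feature values
p = ann_forward(features, w_h, b_h, w_o, b_o)
print(p)  # a value in (0, 1)
```

In training, the weights would be adjusted against labeled cases; a CNN instead learns its feature extractors from the image pixels themselves.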

FIGURE 3. Basic structure of CNN, in which network extracts radiomic features, produces convolution function, pools data through kernel, and flattens pooled feature map for input into fully connected hidden layers of neural network. ReLU = rectified linear unit.
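The convolution, ReLU, pooling, and flattening stages of Figure 3 can be sketched in a few lines of NumPy; the toy image and edge-detecting kernel are invented for illustration:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation) of an image with a kernel."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zero out negative responses."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum in each size x size block."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 "image": dark left half, bright right half.
img = np.zeros((6, 6)); img[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # vertical-edge detector
feature_map = relu(conv2d(img, kernel))  # convolution + ReLU
pooled = max_pool(feature_map)           # pooling through kernel
flat = pooled.flatten()                  # input to fully connected layers
print(flat)  # → [0. 2. 0. 2.]
```

The flattened vector would then feed the fully connected hidden layers, exactly where the ANN of Figure 2 takes over.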

APPLICATION OF AI IN NUCLEAR MEDICINE

The emphasis on precision nuclear medicine, the emergence of radiomics, and the establishment of large patient databases (big data) demand implementation of DL processes to optimize outcomes (Fig. 4). Largely, these applications depend on a CNN; however, an ANN has numerous important applications that do not need convolution. For some data, an ANN is an excellent adjunct to traditional statistical analysis in research or clinical practice. An ANN can also be used to build theranostic decision trees, perform business analysis, and ensure quality. Although a CNN is required for automated segmentation and extraction of data from images in radiation dosimetry, an ANN may be useful in modeling radiation dosimetry in patients undergoing therapy.

FIGURE 4. Schematic representation of semantic evaluation of imaging data, addition of radiomic feature extraction, and ANN analysis to produce small data and potential to integrate with big data to enhance outcomes and drive precision nuclear medicine.

The use of ANNs in nuclear medicine is not new. In 1993, a single hidden layer of 15 nodes was used with 28 input features trained on 100 ventilation–perfusion lung scans and validated against 28 new cases, with the ANN proving to be superior to experienced physicians (P = 0.039) (13). More recently, an ANN was trained on 5,685 regions, with ground truth provided by 6 expert nuclear cardiologists, and was shown to be superior to 17-segment defect scoring in myocardial perfusion scans (14). In all cases (stress, rest, and defect regions), the ANN had a better area under the curve on receiver-operating-characteristic analysis than did the 17-segment defect score. A multicenter trial (15) recently reported the use of a deep CNN trained on 1,160 patients across 4 centers and reported a marginal, but statistically significant, improvement for DL over total perfusion defect scores, with the area under the receiver-operating-characteristic curve being superior in all 4 sites for both per-patient and per-vessel data and cumulatively for per-patient data (3.8%, P < 0.03) and per-vessel data (4.5%, P < 0.02). The highlight of the report was the seamless integration of CNN outcomes into a radiomic polar map display typical of standard practice, signposting the future software integration of AI (Fig. 5). In an earlier report, Betancur et al. (16) evaluated ML in predicting major adverse cardiac events (MACE) in 2,619 myocardial perfusion SPECT patients, with ML being better in predicting MACE than expert readers and automated quantitative software but less reliable in providing a timeline to MACE.
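The area under the receiver-operating-characteristic curve used throughout these comparisons reduces to a simple rank statistic. A minimal sketch, with invented labels and network scores:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen
    negative case (ties counted as half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Invented example: 1 = disease present, scores = network output.
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.5, 0.6, 0.2, 0.1, 0.95, 0.3])
print(roc_auc(labels, scores))  # → 0.9375
```

One positive case (score 0.5) is outscored by one negative case (0.6), so 15 of the 16 positive-negative pairings are correctly ordered.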

FIGURE 5. Prediction of obstructive coronary artery disease with integration of DL outputs into polar maps. Image provides example of how outputs of AI might be integrated into traditional image display, in this case in form of polar maps with AI predictive data displayed in same mode. CAD = coronary artery disease; LAD = left anterior descending coronary artery; TPD = total perfusion defect. (Reprinted from (15).)

Choi et al. (17) reported the use of unsupervised DL for 18F-FDG PET to identify Alzheimer disease, with an area under the receiver-operating-characteristic curve of 0.9 for differentiating Alzheimer disease, and identification of abnormal patterns in 60% of studies classified as normal by expert visualization. DL has also been used to identify high-risk patients most likely to benefit from induction chemotherapy in nasopharyngeal carcinoma, using 18 radiomic features extracted from PET and CT, although 5-y disease-free survival rate (50.1% for high risk and 87.6% for low risk, P < 0.001) is not a measure of CNN accuracy (18). Quantitative SPECT/CT has also been combined with DL for automated volume-of-interest segmentation on CT and application to SPECT data for calculation of glomerular filtration rate (19). The manual regions differed from the automated regions by 2.8%, with a correlation of 0.96, highlighting the value of AI in automating otherwise time-consuming and potentially prohibitive manual functions (i.e., allowing SPECT to be used over planar imaging). CNN-based automatic renal segmentation on unenhanced CT was applied after 177Lu-prostate-specific membrane antigen SPECT to estimate radiation dosimetry (20). Trained against 89 manually drawn regions, the CNN was demonstrated to be fast, with comparable accuracy to humans (mean dice scores of 0.91 for right and 0.86 for left), although the CNN had some difficulties with cystic kidneys.
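The Dice score used above to compare CNN and manual renal contours is straightforward to compute. A sketch with invented 4 × 4 binary masks standing in for a single CT slice:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical manual vs. CNN kidney masks on one slice.
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
auto = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(dice(manual, auto))  # 2*5 / (6+6) ≈ 0.833
```

In practice the score is computed per organ over the full 3-dimensional volume and averaged across patients, which is how the 0.91 and 0.86 figures above arise.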

An important area of development in AI is pseudo-CT attenuation maps (Fig. 6). The premise is that CT-based attenuation maps in SPECT and PET are associated with not only increased patient radiation dose but also position mismatch between the emission and transmission scans (21). MRI has significant limitations in estimating an attenuation map for SPECT/MRI or PET/MRI hybrid systems (21). The method for maximum-likelihood reconstruction of activity and attenuation has been previously published but suffers from issues associated with crosstalk and noise (21). A combination of advances in time-of-flight technique and DL has seen several investigators explore the use of CNNs to overcome the limitations of maximum-likelihood reconstruction of activity and attenuation and provide accurate attenuation maps without transmission studies. Hwang et al. (21) evaluated 3 architectures of deep CNNs that combined the maximum-likelihood reconstruction of activity and attenuation–produced attenuation map with emission data and the CNN to produce an attenuation map that more closely modeled the CT-based ground truth (lower error). The results reported reduced noise, less crosstalk, and elimination of artifacts but relied on some trial and error. Later work (22) confirmed these observations in PET/MRI using a deep neural network in 100 cancer patients. In PET/MRI, Torrado-Carvajal et al. (23) integrated the Dixon method with a CNN to generate pseudo-CT for pelvic PET scans and reported less than a 2% variation from the CT-based attenuation map and nearly 7 times better error than the standard Dixon method. Similarly, Leynes et al. (24) used a deep CNN combined with zero-echo-time Dixon pseudo-CT to produce more accurate attenuation maps than traditional MRI pseudo-CT methods.
Both the Dixon method and the zero-echo-time method for pseudo-CT have several limitations (25) that have been overcome with the application of deep CNN MRI–based pseudo-CT generation, which is rapid and has a reconstruction error of less than 1% (25). More recently, DL approaches have been reported to produce pseudo-CT attenuation maps from the 18F-FDG brain PET sinogram with a mean error of less than 1% against CT-corrected PET (26).
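The percentage errors cited in these reports are typically mean relative errors of the pseudo-CT map against the CT-derived reference. A hedged sketch of that metric, with invented attenuation coefficients for a handful of voxels:

```python
import numpy as np

def mean_relative_error(pseudo_mu, ct_mu, eps=1e-6):
    """Mean absolute relative error (%) of a pseudo-CT attenuation
    map against the CT-derived reference, over nonzero voxels."""
    ref = np.abs(ct_mu) > eps  # ignore air/background voxels
    return 100.0 * np.mean(np.abs(pseudo_mu[ref] - ct_mu[ref]) / ct_mu[ref])

# Hypothetical 511-keV attenuation coefficients (cm^-1).
ct_mu = np.array([0.096, 0.100, 0.151, 0.172])      # soft tissue / bone
pseudo_mu = np.array([0.095, 0.101, 0.150, 0.170])  # network estimate
print(mean_relative_error(pseudo_mu, ct_mu))  # just under 1%
```

A published evaluation would apply this voxelwise over whole volumes (or to the reconstructed PET values), but the arithmetic is the same.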

FIGURE 6. Model for potentially using CNN for improved pseudo-CT attenuation correction in PET/MRI (25) or for attenuation correction of PET without CT (or MRI) (26).

DISCUSSION

ANNs are effective in evaluating the potentially large number of extracted radiomic features and identifying those that should be used alone or in combination in decision making (2). ANNs have the capability of demonstrating relationships among features and outcomes that may not be apparent in the standard combination of semantic reporting (2). Although ANNs are unlikely to make physicians and radiologists redundant, there is an opportunity to enhance patient outcome, reporting accuracy, and efficiency using ANNs (Fig. 7).

FIGURE 7. Several models for integration of AI into radiology have been proposed (4), but in nuclear medicine, perhaps the most appropriate model captures the best of each domain.

There has been significant angst among radiologists concerning the prospect that AI might encroach on their work function (1–3), a fire fueled by social media, blogs, and other forms of discussion proposing that the end of the radiologist is near. Any serious endeavor to integrate AI into radiology or nuclear medicine must maintain human authority, and the proposed “radiologist-in-the-loop” model provides some reassurance (2). For nuclear medicine, physician expertise relates to tasks that cannot be readily automated, whereas lower-order tasks that are easily automated not only free up valuable time for higher-order tasks but also increase the value of the physician. The same argument could be made for other nuclear medicine professionals. AI stands to create efficiencies and increase the perceived value of human resources. In consideration of the tasks that are more suitable to AI automation, the bulk of discussion centers on the impact that AI will have on radiologists. It is important in nuclear medicine to look more broadly at the influence of transformative technology on the roles and responsibilities of the medical physicist and nuclear medicine technologist. The consequent understanding of the principles and applications of AI will equip nuclear medicine professionals with the capacity to assimilate AI technologies into the workplace, in a similar manner to the many advances in technology that have reshaped roles and responsibilities.

CONCLUSION

AI has penetrated the daily practice of nuclear medicine over recent decades with little disruption. The emergence of ANNs and CNN applications has seen a significant shift in the landscape, with opportunities outweighing the threat. Nonetheless, understanding of the potential applications and the principles of AI, ANNs, and DL will equip nuclear medicine professionals for ready assimilation, averting the doomsday fears permeating radiology.

DISCLOSURE

No potential conflict of interest relevant to this article was reported.

Footnotes

  • Published online Aug. 10, 2019.

REFERENCES

1. Maxmen JS. The Post-Physician Era: Medicine in the 21st Century. New York, NY: Wiley; 1976.
2. Liew C. The future of radiology augmented with artificial intelligence: a strategy for success. Eur J Radiol. 2018;102:152–156.
3. Hinton G. Deep learning: a technology with the potential to transform health care. JAMA. 2018;320:1101–1102.
4. Tang A, Tam R, Cadrin-Chenevert A, et al. Canadian Association of Radiologists white paper on artificial intelligence in radiology. Can Assoc Radiol J. 2018;69:120–135.
5. McBee MP, Awan OA, Colucci AT, et al. Deep learning in radiology. Acad Radiol. 2018;25:1472–1480.
6. Langlotz C, Allen B, Erickson B, et al. A roadmap for foundational research on artificial intelligence in medical imaging: from the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. 2019;291:781–791.
7. Simons DJ, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28:1059–1074.
8. Drew T, Vo M, Wolfe J. The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychol Sci. 2013;24:1848–1853.
9. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys. 2019;29:102–127.
10. Uribe C, Mathotaarachchi S, Gaudet V, et al. Machine learning in nuclear medicine: part 1—introduction. J Nucl Med. 2019;60:451–456.
11. Tajmir SH, Alkasab TK. Toward augmented radiologists: changes in radiology education in the era of machine learning and artificial intelligence. Acad Radiol. 2018;25:747–750.
12. Thrall JH, Li X, Li Q, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15:504–508.
13. Scott JA, Palmer E. Neural network analysis of ventilation-perfusion lung scans. Radiology. 1993;186:661–664.
14. Nakajima K, Kudo T, Nakata T, et al. Diagnostic accuracy of an artificial neural network compared with statistical quantitation of myocardial perfusion images: a Japanese multicenter study. Eur J Nucl Med Mol Imaging. 2017;44:2280–2289.
15. Betancur J, Hu LH, Commandeur F, et al. Deep learning analysis of upright-supine high-efficiency SPECT myocardial perfusion imaging for prediction of obstructive coronary artery disease: a multicenter trial. J Nucl Med. 2019;60:664–670.
16. Betancur J, Otaki Y, Motwani M, et al. Prognostic value of combined clinical and myocardial perfusion imaging data using machine learning. JACC Cardiovasc Imaging. 2018;11:1000–1009.
17. Choi H, Ha S, Kang H, Lee H, Lee DS; Alzheimer's Disease Neuroimaging Initiative. Deep learning only by normal brain PET identify unheralded brain anomalies. EBioMedicine. 2019;43:447–453.
18. Peng H, Dong D, Fang MJ, et al. Prognostic value of deep learning PET/CT-based radiomics: potential role for future individual induction chemotherapy in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2019;25:4271–4279.
19. Park J, Bae S, Seo S, et al. Measurement of glomerular filtration rate using quantitative SPECT/CT and deep-learning-based kidney segmentation. Sci Rep. 2019;9:4223.
20. Jackson P, Hardcastle N, Dawe N, Kron T, Hofman MS, Hicks RJ. Deep learning renal segmentation for fully automated radiation dose estimation in unsealed source therapy. Front Oncol. 2018;8:215.
21. Hwang D, Kim KY, Kang SK, et al. Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning. J Nucl Med. 2018;59:1624–1629.
22. Hwang D, Kang SK, Kim KY, et al. Generation of PET attenuation map for whole-body time-of-flight 18F-FDG PET/MRI using a deep neural network trained with simultaneously reconstructed activity and attenuation maps. J Nucl Med. January 25, 2019 [Epub ahead of print].
23. Torrado-Carvajal A, Vera-Olmos J, Izquierdo-Garcia D, et al. Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction. J Nucl Med. 2019;60:429–435.
24. Leynes A, Yang J, Wiesinger F, et al. Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI. J Nucl Med. 2018;59:852–858.
25. Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep learning MR imaging-based attenuation correction for PET/MR imaging. Radiology. 2017;286:676–684.
26. Liu F, Jang H, Kijowski R, Zhao G, Bradshaw T, McMillan AB. A deep learning approach for 18F-FDG PET attenuation correction. EJNMMI Phys. 2018;5:24.
  • Received for publication June 13, 2019.
  • Accepted for publication July 16, 2019.