Abstract
Peer review is routine among physicians, nurses, and pharmacy staff yet is uncommon in the field of nuclear medicine technology. Although not a requirement of regulatory agencies, nuclear medicine technical peer review can greatly enhance the quality of patient care in both hospital and outpatient settings. To date, detailed methods for accomplishing this task have not been published. Methods: In total, 19,688 nuclear medicine studies performed at a single institution over a 5-y period were critically reviewed. Major findings (errors with the potential to change physician interpretation of the study or resulting in prescription error) and minor findings (errors without an adverse effect on study outcome or interpretation) were identified and tabulated monthly according to finding type, study type, and individual staff member. Results: The technical peer review method used at our institution provided a comprehensive means to measure the rate and types of errors. Over time, this system tracked the performance of nuclear medicine staff and students, providing feedback that led to a measurable reduction in errors. Conclusion: We present a technical peer review system, based on our own experience, that can be adapted by other nuclear medicine facilities to fit their needs.
Medical peer review has been a longstanding process for practitioners, dating back to the 19th century (1). Peer review is also common in other clinical disciplines, such as nursing and pharmacy. The U.S. Congress enacted the Medicare Improvements for Patients and Providers Act of 2008 (2), which sets requirements for providers of advanced diagnostic imaging. These include a mandate for accreditation, effective January 1, 2012, which carries implications for reimbursement. Many regulatory agencies base their assessments of medical staff in part on ongoing performance-based evaluations that include peer review (3). Currently, such agencies as the Joint Commission, American College of Radiology, Accreditation Council for Graduate Medical Education, and Intersocietal Accreditation Commission have set standards for purposes of accreditation, certification, licensing, credentialing, or privileging of medical and technical staff. Furthermore, the Society of Nuclear Medicine and Molecular Imaging Technologist Section has published the “Nuclear Medicine Technologist Scope of Practice and Performance Standards” (4), and the Intersocietal Accreditation Commission has published standards for technical quality review (5). However, performance evaluation for nuclear medicine (NM) technology through a formal peer review process has yet to be addressed.
The American College of Radiology has developed a peer review scoring system for radiologists, entitled RADPEER, in which a qualified radiologist scores the original interpretation using a scale from 1 to 4: 1 denotes “concur with interpretation”; 2, “difficult diagnosis, not ordinarily expected to be made”; 3, “diagnosis should be made most of the time”; and 4, “diagnosis should be made almost every time, misinterpretation of findings” (6). Presently, no such scoring system for comprehensive NM technical peer review has been reported.
We present here our methods and outcomes using a simpler grading scale: minor and major findings, acceptable and unacceptable studies. Results were reviewed to identify trends, to monitor the performance improvement of student technologists and newly hired employees, and to provide ongoing, constructive feedback to all technical staff members. We performed an extensive, meticulous review of all NM studies performed, in part because our institution serves as a phase II site for the Nuclear Medicine Technologist Training Program, Medical Education and Training Campus, Fort Sam Houston, Texas.
MATERIALS AND METHODS
A retrospective review of quality assessment data collected as part of an ongoing NM technical peer review process over the 5-y period January 1, 2012, through December 31, 2016, was performed, and the results were tabulated. In total, 19,688 NM studies were included in this review. Each study was critically appraised for errors and deficiencies in specific categories by a senior NM technologist assigned to this purpose. Findings were grouped into the general categories of patient information (patient identification, study orders, other administrative errors), radiopharmaceutical (prescription error, misadministration), and imaging (subcategorized here into planar/SPECT and PET/CT), as shown in Tables 1–4, and were classified as major or minor. Major findings included errors that had the potential to change physician interpretation of the study or that resulted in prescription error, whereas minor findings were errors without an adverse effect on study outcome or interpretation.
TABLE 1. Patient Order, Information, and Administrative Errors
TABLE 2. Radiopharmacy and Prescription Errors
TABLE 3. Image Errors on Planar and SPECT Studies
TABLE 4. Image Errors on PET/CT Studies
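To make the classification concrete, the categories and severities above map onto a small data model. The following Python sketch is purely illustrative; the enum and field names are our assumptions, not the institution's actual coding scheme.

```python
# Illustrative data model for the finding taxonomy described in the Methods.
# Category and severity names are assumptions, not the institution's scheme.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    PATIENT_INFO = "patient information"         # ID, study orders, administrative
    RADIOPHARMACEUTICAL = "radiopharmaceutical"  # prescription error, misadministration
    PLANAR_SPECT = "planar/SPECT imaging"
    PET_CT = "PET/CT imaging"

class Severity(Enum):
    MINOR = "minor"  # no adverse effect on study outcome or interpretation
    MAJOR = "major"  # could change interpretation or caused a prescription error

@dataclass
class Finding:
    category: Category
    severity: Severity
    description: str
```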
The technical peer review report was compiled at the end of each month from the daily data collected by the NM technologist assigned to perform the quality control review. The peer review format examined every study and determined whether that study met the criteria for acceptability, based on the number of major and minor findings. A study with no major findings and fewer than 4 minor findings was technically acceptable; a study with a major finding or with 4 or more minor findings (4 minor findings equaling a major finding) was technically unacceptable. Findings were further tabulated for each study type and for each individual NM staff member using an anonymous code number known only to that individual and to the supervisory technologist. The frequency of findings per month or per year, expressed as a percentage, was calculated by dividing the total number of findings associated with each NM staff member by the total number of studies in which that staff member participated; because some findings may be attributed to more than one staff member, this calculation can increase the error rate per individual.
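A minimal sketch of this acceptability rule and rate calculation follows; the function and parameter names are illustrative assumptions, not part of our review software.

```python
# Minimal sketch of the acceptability rule and error-rate calculation described
# above; function and parameter names are illustrative assumptions.
def is_acceptable(major: int, minor: int) -> bool:
    """Acceptable: no major findings and fewer than 4 minor findings
    (4 minor findings count as the equivalent of a major finding)."""
    return major == 0 and minor < 4

def finding_rate_pct(total_findings: int, studies_participated: int) -> float:
    """Findings attributed to a staff member divided by the number of studies
    in which that member participated, expressed as a percentage."""
    return 100.0 * total_findings / studies_participated

assert is_acceptable(major=0, minor=3)      # acceptable
assert not is_acceptable(major=1, minor=0)  # unacceptable: one major finding
assert not is_acceptable(major=0, minor=4)  # unacceptable: 4 minors = 1 major
```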
RESULTS
The 19,688 NM studies performed over the 5-y period were reviewed and are summarized in Table 5. The findings were further tabulated monthly according to finding type, study type, and individual staff member. The goal for the number of unacceptable cases was set at 5% or less of cases reviewed per month and per year. For the 12 mo of the 2016 review process, 3,710 studies were reviewed; 92.5% of the studies were judged acceptable and 7.5% unacceptable, not meeting the goal of 5% or less (Table 6).
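In absolute terms (our arithmetic from the reported percentages):

$$0.075 \times 3{,}710 \approx 278 \text{ unacceptable studies}, \qquad 0.925 \times 3{,}710 \approx 3{,}432 \text{ acceptable studies}.$$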
TABLE 5. Total Findings
TABLE 6. 2016 Results
The number and types of individual findings were tabulated for each study to identify the most common errors. Those occurring in numbers large enough to be considered a trend were errors of omission or inattention to detail due to a lack of appropriate documentation on images or forms. These errors were not study-specific; the same types of error appeared regardless of study type. Examples include incorrect study labels, patient information, acquisitions, processing, or formatting of screen saves, as well as missing images.
On the other hand, several frequent findings were mainly study-specific. These include, in PET/CT scans, not performing the acquisition at 60 ± 10 min after injection or entering the injection time or dose incorrectly into the SUV program; in bone scans, starting the blood-flow study too early or too late, failing to acquire one or more required images, or acquiring with an incorrect time per frame, total time, or total counts; and, in lung scans, omitting the “right” and “left” labels on the images.
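Checks such as the PET/CT uptake-time window lend themselves to simple automation. The sketch below is a hypothetical illustration of one such check; the function name and default values are our assumptions, not part of our review process.

```python
# Hypothetical check for one study-specific finding named above: verifying
# that a PET/CT acquisition began 60 +/- 10 min after injection.
from datetime import datetime

def pet_uptake_ok(injection: datetime, acquisition: datetime,
                  target_min: float = 60.0, tolerance_min: float = 10.0) -> bool:
    """Return True if the uptake time falls within the target window."""
    uptake_min = (acquisition - injection).total_seconds() / 60.0
    return abs(uptake_min - target_min) <= tolerance_min

# Example: a 72-min uptake time exceeds the 70-min upper bound -> flag a finding.
inj = datetime(2016, 3, 1, 9, 0)
acq = datetime(2016, 3, 1, 10, 12)
print(pet_uptake_ok(inj, acq))  # False
```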
Many errors were identified and corrected immediately on discovery, before completion of the study, and had no adverse impact (e.g., pharmacy label corrected, site reimaged, or study reprocessed); these findings were nevertheless recorded for peer review purposes only.
Among the pharmacy group, the most common findings were a missing pharmacy label or a label on which the date of birth or identification number was incorrect or the patient’s name was misspelled. There were no radiopharmaceutical misadministrations, unexpected adverse reactions, or reportable events.
DISCUSSION
Preventable medical errors carry a heavy price in both human lives and dollars (7). The practice of NM technology involves numerous critical steps to achieve optimal results; the potential for error, from inconsequential to life-threatening, therefore exists at any point from when the patient first enters the department until the study is presented for final physician interpretation. Regularly scheduled reviews by a qualified medical physicist are useful for proper license maintenance, and they provide feedback and guidance to medical and technical staff, but they focus mostly on regulatory compliance, documentation, and equipment performance rather than on the day-to-day actions of individual NM staff members.
As in any profession, error rate measurement alone does not improve performance; feedback and retraining must be ongoing for an improved outcome. This is best illustrated in the aviation industry, where small errors can yield disastrous outcomes, yet such outcomes are extremely rare because of rigorous review and retraining programs (8). In medical imaging, a real-time comment-enhanced program for radiologist peer review has been reported to demonstrate measurable improvement in radiologist compliance (9). Similar results were observed in our experience, tabulated here, in which most NM staff members showed noticeable improvement (Table 7). For the 13 staff members with at least 2 y of data, all had a decrease in error rate, from a mean of 21.9% (SD, 12.1%) in their first year to 14.8% (SD, 9.0%) in their second year (P = 0.001, paired t test). NM staff were further categorized by number of years active at this institution as new (<5 y) or senior (≥5 y). NM staff members 5 and 6, both hired in 2013, showed a large decline in percentage of findings, from greater than 30% during the first year to less than 10% after 2 y. Review of the 8 senior NM staff (NM staff members 9–16) also showed a significant change in error rate over time (P = 0.014), from a mean of 19.8% (SD, 13.9%) in 2012 to 13.1% (SD, 10.8%) in 2013, 10.7% (SD, 6.2%) in 2014, 11.0% (SD, 5.8%) in 2015, and 11.4% (SD, 8.2%) in 2016.
TABLE 7. NM Staff Members, 2012–2016
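The paired first-year versus second-year comparison above can be reproduced with standard statistical software. The sketch below uses SciPy on placeholder rates; the values are hypothetical stand-ins chosen only to match the reported means, not our actual per-member data.

```python
# Sketch of the paired t test reported above (first vs. second year of data
# for 13 staff members). The rates below are hypothetical placeholders.
from scipy import stats

year1 = [35.0, 31.0, 22.0, 18.0, 27.0, 12.0, 9.0, 20.0, 14.0, 30.0, 16.0, 25.0, 26.0]
year2 = [ 9.0,  8.0, 15.0, 14.0, 20.0, 10.0, 7.0, 16.0, 12.0, 22.0, 13.0, 24.0, 22.0]

t, p = stats.ttest_rel(year1, year2)  # paired t test across the 13 staff members
print(f"t = {t:.2f}, p = {p:.4f}")
```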
Prevention of errors is essential to performance improvement in any endeavor. Recently reported results from the Australian Radiation Incident Register demonstrate that in 85.6% of NM incidents, the primary cause was failure to comply with time-out protocols, with incorrect radiopharmaceutical being the most common error (10). In our institution, technical peer review has led to implementation of a prestudy checklist unique to each examination type, and mandatory time-out protocols are in place for all therapeutic and quality-management-program procedures.
Peer review findings should be discussed in a group setting so that lessons can be shared and specific elements of study performance can be presented as teaching points, as well as to provide an ongoing learning experience for staff. In our institution, errors are reviewed in detail with the technical staff at regularly scheduled meetings, taking care not to disclose individual staff member identities. Assessment of findings by study type allows NM staff as a group to recognize pitfalls that are study-specific, and applicable training sessions can be held with the goal of reducing those errors. Additionally, review of findings by each individual NM student and staff member can be used to privately counsel the individual and guide remedial actions, when needed, to reduce error. This can be a tool to show NM staff members exactly what types of errors have been made over the past year so they can concentrate on improving those areas in the future.
CONCLUSION
The peer review system presented here is intended as an example that can be adapted by other NM facilities. Such a system can be used to track the progress of NM students and newly employed NM staff and to provide a mechanism for quality improvement among all NM staff. Technical peer review can be time-consuming; it is best performed daily or weekly, if possible, to avoid a burdensome backlog and should be performed by a designated, experienced NM technologist. The use of a checklist of indicators and a simple scoring system, as shown here, can standardize and streamline the technical peer review process, making it more efficient and cost-effective. Individual institutions are encouraged to learn from our experience and to develop their own technical peer review process using those elements best suited to their needs, with the goal of reducing error.
DISCLOSURE
The views expressed in this article are those of the authors and do not reflect the official policy of the Department of the Army, Navy, or Air Force; the Department of Defense; or the U.S. Government. No potential conflict of interest relevant to this article was reported.
Footnotes
- Published online Aug. 10, 2017.
- Received for publication July 5, 2017.
- Accepted for publication August 7, 2017.