Abstract
The accuracy of clinical SPECT is highly dependent on the acquisition and processing parameters, which are selected according to the clinical task. These parameters are usually set within protocols in order to allow for standardization from one study to another and to speed up the clinical routine. Methods: In the first part of this work, tomographic images of a Jaszczak phantom were obtained on 5 different SPECT systems using 2 common clinical protocols and each system's default acquisition and processing parameters. In the second part, tomographic images of the Jaszczak phantom were obtained using identical acquisition and processing parameters on all systems. Projection data were then transferred to the other systems' software for reconstruction. Results: In the first part of the work, quantitative variation in image quality was observed among the systems, even when clinical protocols with the same aim were used. The accuracy of similar reconstruction algorithms and of the data transfer was determined and summarized. In the second part of this study, the performance of the SPECT systems using identical acquisition protocols and reconstruction software was determined and summarized. Conclusion: The default clinical protocols offered by manufacturers for similar studies may differ from one another. Users should refine these protocols using phantom studies and standardize same-purpose protocols among different software programs.
The techniques of the National Electrical Manufacturers Association (NEMA) are generally used for acceptance testing of SPECT systems. The information provided by these techniques is limited to the performance of the detectors, and it is not easy to predict from them how a system will perform for a particular clinical task. There is no standard protocol for checking the clinical performance of SPECT systems, and several interrelated variables are left open to the user. Selection of the acquisition and processing parameters strongly affects the accuracy and quality of SPECT procedures. To prevent user errors, manufacturers offer clinical software protocols for certain examinations. In general, these protocols are used in routine work without being changed by the user and provide good standardization for patient studies.
These protocols may differ from one vendor to another and may create inconsistency among the results for different systems in a department. Moreover, the software packages are not always optimally prepared or free of errors.
Many nuclear medicine departments have more than one SPECT system, and the systems may be from different manufacturers. Users may transfer images from one system to another through common formats such as DICOM (Digital Imaging and Communications in Medicine). One challenge is to find an algorithm included in the software of one system that may be used for images acquired by other systems, so that the physician can review all patient studies on a single system.
Using Jaszczak phantoms, we performed a series of image quality comparisons on 5 different SPECT systems: system 1 was the Millennium (GE Healthcare); system 2, the VariCam (Elscint); system 3, the e.cam (Siemens); system 4, the Spirit (Mediso); and system 5, the Forte (Philips). First, we compared 2 routine clinical protocols performed using the individual default acquisition and processing parameters of each system. Second, we ran a standard protocol on all 5 systems and reconstructed the images on each system using an identical algorithm and filter parameters; these results reflected the physical performance of the detectors and any differences in software. Projection images obtained on each system were then transferred to the computers of the other systems for reconstruction with their software using the same algorithms and processing parameters; these results revealed any problems in the transfer procedure and differences in software. Image contrast and noise, in terms of root mean square (RMS) deviation measured from transaxial images, were used for all numeric comparisons.
MATERIALS AND METHODS
The characteristics of the 5 SPECT systems are indicated in Table 1. Planar and tomographic spatial resolutions in terms of full width at half maximum (FWHM) were measured at a distance of 24 cm from the collimator face according to NEMA procedures (1). The measured pixel sizes were quite similar. Uniformity and center-of-rotation corrections were implemented before the phantom acquisitions. High-resolution collimators were used for each detector, and a similar radius of rotation (24 cm) was used for each 360° acquisition. However, because of a mechanical problem in the gantry of system 5, the radius of rotation could be adjusted only to 34 cm. A Jaszczak phantom (deluxe model; Data Spectrum Corp.) filled with 370 MBq of 99mTc was used for all contrast and noise measurements.
Systems Used in the Study
The first part of the work compared some clinical tomographic protocols. The software package of each vendor was carefully evaluated, and the most frequently used clinical protocols for each system were determined. A protocol suitable for general SPECT applications and a brain protocol were selected. Tomographic acquisitions of a Jaszczak phantom were performed using the parameters listed in Table 2. Transaxial slices were obtained using the default filters. Attenuation was corrected for each system using the Chang method with the default values for the linear attenuation coefficient. The accuracy of these algorithms was also investigated by drawing profiles through the center of a uniform slice and comparing them with the ideal (flat) profiles. All transaxial images were transferred through DICOM to the freely available image-processing program ImageJ (http://rsb.info.nih.gov/ij/index.html) for quantitative assessment of image quality. Contrast and noise were the parameters used for this aim.
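The Chang method mentioned above divides each reconstructed pixel value by the mean attenuation factor over all projection directions. A minimal sketch of the first-order correction factor for a single pixel inside a uniform circular phantom follows; the function name, the uniform-μ assumption, and the restriction to first order are illustrative, since the vendors' implementations differ in detail.

```python
import math

def chang_correction_factor(x, y, radius, mu, n_angles=128):
    """First-order Chang correction factor for a pixel at (x, y) inside
    a uniform circular object of the given radius (cm) with linear
    attenuation coefficient mu (cm^-1): the reciprocal of the mean
    attenuation factor over n_angles equally spaced directions."""
    total = 0.0
    for m in range(n_angles):
        theta = 2.0 * math.pi * m / n_angles
        ux, uy = math.cos(theta), math.sin(theta)
        # Path length from (x, y) to the circle edge along (ux, uy):
        # the positive root t of |(x, y) + t*(ux, uy)| = radius.
        b = x * ux + y * uy
        t = -b + math.sqrt(radius * radius - (x * x + y * y) + b * b)
        total += math.exp(-mu * t)
    return n_angles / total

# At the center, every path length equals the radius, so the factor
# reduces to exp(mu * radius), e.g., exp(0.12 * 10) for a 10-cm phantom.
print(chang_correction_factor(0.0, 0.0, 10.0, 0.12))
```

Because the factor grows toward the center, an overestimated μ inflates central counts, which is exactly the overcorrection pattern reported for systems 3 and 4 in the Results.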
Clinical Protocol Parameters for Each System
Contrast was measured using slices that included the spheric cold inserts, and regions of interest were drawn over these and a uniform part of the slice. Contrast was calculated from the count content of these regions of interest as follows:
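One standard cold-sphere definition, consistent with the regions of interest described above (the exact convention used in this study is an assumption here), is

\[ \text{Contrast} = \frac{\bar{C}_{\mathrm{bkg}} - \bar{C}_{\mathrm{sphere}}}{\bar{C}_{\mathrm{bkg}}} \]

where \(\bar{C}_{\mathrm{bkg}}\) and \(\bar{C}_{\mathrm{sphere}}\) are the mean counts per pixel in the uniform background and sphere regions of interest, respectively.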
Noise was calculated in terms of RMS:
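A common definition, assumed here, normalizes the root-mean-square deviation of pixel counts in a uniform region of interest to the mean count in that region:

\[ \mathrm{RMS} = \frac{1}{\bar{C}} \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(C_i - \bar{C}\right)^2} \]

where \(C_i\) are the \(n\) pixel counts in the uniform region of interest and \(\bar{C}\) is their mean.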
The second part of this work was a cross-comparison of the system software programs. Tomograms of the Jaszczak phantom were acquired on each system using a standard protocol: 128 × 128 projection and reconstruction matrices and 750 kilocounts for each 360° acquisition, with a total of 128 views. Planar Jaszczak images were first reconstructed on the system on which they were acquired and subsequently were transferred to the other systems for reconstruction with their software. A total of 5 sets of planar Jaszczak data were thus acquired, each being separately reconstructed on every system. Attenuation was corrected on each system using a μ-value of 0.12 cm−1. Finally, all contrast and RMS calculations were done on the processed transaxial images using the ImageJ package.
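ImageJ reports per-ROI pixel statistics; the contrast and RMS arithmetic applied to those statistics can be sketched in Python. The function names are illustrative, and the contrast convention (relative count deficit of the cold sphere) is the assumed one noted earlier.

```python
import math
from statistics import fmean

def contrast(bkg_roi, sphere_roi):
    """Cold-sphere contrast: relative count deficit of the sphere ROI
    with respect to the uniform background ROI (assumed convention)."""
    b = fmean(bkg_roi)
    return (b - fmean(sphere_roi)) / b

def rms_noise(uniform_roi):
    """RMS noise: root-mean-square deviation of pixel counts in a
    uniform ROI, normalized to the ROI mean."""
    m = fmean(uniform_roi)
    return math.sqrt(fmean([(c - m) ** 2 for c in uniform_roi])) / m

# Synthetic ROI pixel counts, for illustration only
bkg = [100, 102, 98, 101, 99]
sphere = [52, 48, 50, 51, 49]
print(contrast(bkg, sphere))  # 0.5
print(rms_noise(bkg))
```

Both measures are dimensionless, so values computed from images reconstructed with different software programs can be compared directly, as in Tables 3-6.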
The software programs differed mainly in how they defined filter parameters (e.g., cycles/mm or percentage of the Nyquist frequency), in the value assigned to the maximum frequency (1, 2, or 100), and in the value of the cutoff frequency corresponding to 0.7 Nyquist (1.4 or 0.7). So that the same cutoff frequency could be used for all reconstructions, the Nyquist frequencies of all systems were calculated and the same fc (=0.7 fn, in cycles/mm) was set in the software of each system. The transaxial images processed with the Hanning filter were transferred to ImageJ, their Fourier transforms were computed, and the accuracy of the cutoff frequencies was verified by checking whether the edge of the amplitude images corresponded to those cutoff frequencies. It was possible to set fc exactly to 0.7 fn for all software programs except that of system 3, for which a slightly lower value had to be used.
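Because the Nyquist frequency depends only on the pixel size, converting a cutoff given as a fraction of Nyquist into absolute units is straightforward. A small sketch, assuming square pixels (the 4.42-mm pixel size is illustrative, not a value from this study):

```python
def nyquist_freq(pixel_size_mm):
    """Nyquist frequency (cycles/mm) for a given pixel size (mm)."""
    return 0.5 / pixel_size_mm

def cutoff_cycles_per_mm(fraction_of_nyquist, pixel_size_mm):
    """Absolute cutoff frequency for a cutoff expressed as a fraction
    of the Nyquist frequency (e.g., 0.7)."""
    return fraction_of_nyquist * nyquist_freq(pixel_size_mm)

# Illustrative 4.42-mm pixel (a plausible 128 x 128 SPECT matrix size)
fn = nyquist_freq(4.42)
fc = cutoff_cycles_per_mm(0.7, 4.42)
print(fn, fc)
```

Software that labels the maximum frequency as 1, 2, or 100 merely rescales this axis, so the same physical fc in cycles/mm reads as 0.7, 1.4, or 70 on the respective scales.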
RESULTS
Table 2 gives the acquisition and processing parameters for both protocols. For the general SPECT protocol, similar acquisition parameters were used for systems 1, 3, 4, and 5, with the exception of slightly fewer projection counts on system 3. System 2 differed from the others because of its smaller matrices and fewer views. Filters and their parameters were found to differ among the system software programs, with the exception of systems 1 and 2, which used the same software package. Planar and tomographic FWHM values are shown in Table 1. Ratios of tomographic FWHM to planar FWHM remained below 1.1, as suggested in the literature (1).
It was interesting that the attenuation correction algorithms of systems 3 and 4 used default linear attenuation coefficients of 0.15 and 0.16 cm−1, respectively. That these values are too high was confirmed by the overcorrection of the transaxial uniformity images: pixel counts increased from the periphery to the center of the images.
Tables 3 and 4 give the contrast and noise results for the general and brain SPECT protocols for attenuation-uncorrected and corrected images, respectively.
Contrast and RMS Results for Images Without Attenuation Correction
Contrast and RMS Results for Images With Attenuation Correction
The results for the second part of this work are summarized in Tables 5 and 6. Each row shows the contrast and RMS values measured with different system software programs. For example, the first row indicates tomographic acquisitions performed with system 1 and reconstructions obtained with the software of this system and of other systems.
Cross-Comparison Between Systems for Images Without Attenuation Correction
Cross-Comparison Between Systems for Images With Attenuation Correction
DISCUSSION
The results for the first part of this work indicate variations of 1.78 times in contrast and 3.28 times in RMS among systems for images acquired using the general SPECT protocol. These variations became 1.48 and 3.49 times when the attenuation-corrected images were evaluated. The maximum contrast, found for system 4, could be attributed to the fact that system 4 had the lowest FWHM and full width at tenth maximum and a high cutoff frequency, but at the cost of a high RMS. Although systems 1 and 2 used the same software, and the spatial resolution of system 2 was better than that of system 1, the lower contrast of system 2 was due to its smaller projection and reconstruction matrices (64 × 64) and fewer views (120 vs. 128). The slightly higher contrast of system 1 relative to systems 3 and 5 might have been due to the selected filters and their parameters. Contrast for systems 1, 3, and 4 was improved 1.16, 1.61, and 1.18 times, respectively, through the use of a noncircular orbit rather than a circular orbit.
Variations in contrast and RMS among systems for the brain SPECT protocol differed from those found for the general SPECT protocol: variations of 1.91 and 1.74 times in contrast and of 2.38 and 3.77 times in RMS were obtained for the uncorrected and corrected images, respectively. Although smaller matrices and fewer views were selected for the brain protocol of system 1, the use of a 1.46 zoom factor improved contrast; this improvement was also found for system 3. The larger matrices and zoom factor selected for system 2 in the brain protocol increased contrast, but at the cost of higher noise. System 4 used more or less the same parameters for both protocols, and its small differences between protocols may be attributable to the selection of fewer views for the brain SPECT protocol. The greatest differences between protocols occurred for system 5, because a mechanical problem prevented its radius of rotation from being reduced below 34 cm.
Attenuation correction improved contrast by factors of 1.22–1.68 for general SPECT and 1.09–1.35 for brain SPECT. Count additions to each pixel during the correction procedure reduced the RMS values considerably. The use of higher μ-values on systems 3 and 4 (0.15 and 0.16 cm−1, respectively) caused an excessive addition of counts, which may lead to quantitative errors.
Each row in Tables 5 and 6 indicates the contrast and RMS results for transaxial images acquired by one system but processed with the software of each system. Variations in contrast and RMS between system software programs were in the range of 1.12–1.24 and 1.08–1.29, respectively, for uncorrected images. For attenuation-corrected images, these variations were 1.12–1.19 for contrast and 1.06–1.36 for RMS. Ideally, there should be no differences among systems, since all images were reconstructed using filtered backprojection, the same filter (Hanning with fc of 0.7 times fn), and a constant attenuation coefficient of 0.12 cm−1. The only exception was the lower contrast of system 3, which was due to its slightly lower cutoff frequency. Although a high count statistic was used for the acquisition of the standard protocol, the remaining variations can be attributed to fluctuations in region-of-interest readings and to some differences among the reconstruction algorithms.
Each column of Tables 5 and 6 compares images acquired by different systems but processed using a single software program. In effect, these results show the performance of the systems as judged by each of the 5 software programs. As expected, the performance of system 3 was the best, since it was manufactured with the latest technology. Although the manufacturer of systems 1 and 2 is the same, the older technology of system 1 gave poorer performance than system 2. The results for systems 2 and 4 were similar because of the similar performance of their detectors and their similar year of manufacture. The poorest results were obtained for system 5, for the reasons stated earlier.

Clinical SPECT image quality was most affected by the selected acquisition and processing parameters. It was interesting to see the considerable variation in clinical protocols set up for the same clinical aims on different systems. Careless selection of some parameters immediately degraded image quality regardless of the technologic advances of the systems. These variations persisted when a standard protocol was used instead of a clinical protocol, although the differences were smaller and mainly due to the physical performance of the detectors. We found even less variation in the software comparisons and data transfer; however, differences of up to 1.24 times were noticed for some systems.
Few studies in the literature have compared SPECT systems. In some studies, Jaszczak or clinical phantoms were used for performance comparisons (2,3). In addition, some studies have made software comparisons with mathematic phantoms (4–7). One of the most important comparison studies, performed by a task group of the American Association of Physicists in Medicine, reported a range of contrast values of 0.43–0.78 among 51 SPECT systems, a range comparable to our findings.
CONCLUSION
The performance check of SPECT systems using NEMA techniques gave valuable information about system designs and calibrations. Users should test the clinical protocols supplied by manufacturers and modify them according to their needs. The possible reasons for performance variations among the same-purpose protocols of different vendors should be investigated in detail, and differences should be reduced to those attributable to the physical performance of the detectors. Software packages should also be compared, with emphasis on filter definitions in terms of units and cutoff frequencies. The technologic advantages of a new system can be negated by the selection of incorrect acquisition and processing parameters.
Acknowledgments
No potential conflict of interest relevant to this article was reported.
Footnotes
Published online Sep. 28, 2012.
REFERENCES
- Received for publication May 11, 2012.
- Accepted for publication June 28, 2012.