Abstract
Objective: Nuclear medicine technologists work under significant radiation protection constraints. These constraints are based on the linear no-threshold (LNT) radiation paradigm, which was developed in the 1960s and was based largely on the deleterious effects of radiation as they were understood at the time. More recently, the theory of radiation hormesis, or a beneficial effect of low-level exposure to radiation, has gained recognition. This article reviews the history of attitudes toward radiation, describes the radiation hormesis hypothesis, examines some of the evidence that supports it, and suggests ways that radiation protection regulations might change if the hypothesis were to become accepted.
Nuclear medicine is governed by regulations that are based on an understanding of the harmful effects of radiation that was current in the early 1960s. However, the question of whether low levels of radiation can cause harm remains controversial. Several recent studies suggest that radiation exposure, under some circumstances, can be beneficial. This new theory of radiation hormesis is especially pertinent to nuclear medicine technologists, who are exposed in the low-dose, low dose-rate fashion that seems to carry beneficial effects.
This article will introduce radiation hormesis to the nuclear medicine technologist community by first reviewing the history of our perceptions of radiation and then contrasting the current radiation paradigm with a radiation hormesis paradigm. Epidemiological and experimental evidence supporting the radiation hormesis paradigm will be reviewed. Finally, several ways will be suggested in which the practice of nuclear medicine and the utilization of radioactive materials in general might be made easier by the acceptance of the radiation hormesis hypothesis. Two specific points will emerge from the information presented here: that public perception drives the enactment of regulation as much as does science and that, although the science of radiation biology has advanced, regulations affecting occupational radiation exposure have not.
HISTORY OF PERCEPTIONS ABOUT RADIATION
Radiation has always been present in our environment. However, mankind was not directly aware of its existence until the end of the 19th century, when a flurry of scientific discoveries was made. In 1895, Wilhelm Roentgen discovered x-rays. In 1896, Henri Becquerel discovered the spontaneous emission of radiation from uranium, a phenomenon he called “radioactivity.” And, in 1898, Marie Curie discovered radium, which is luminescent as well as radioactive (1).
Beyond the revolution they caused in basic physics, these discoveries were put to immediate practical use. The first diagnostic x-ray was produced in January 1896, only a few months after Roentgen made his discovery. More than 1,000 articles on x-rays were published that year. The field of nuclear medicine had its origins in 1923, when Georg de Hevesy proposed using radioactive tracers in biomedical research. Today, the field of diagnostic radiology with its various modalities affects the great majority of people in the developed world.
But the new rays were put to use in more mundane tools as well (2). Thomas Edison introduced a home fluoroscopy unit in 1896. In the 1920s, x-ray units were used in beauty parlors to remove unwanted facial and body hair. Even into the 1950s, fluoroscopes were used to measure the fit of children’s shoes.
The ability of radiation to treat disease was also explored soon after its discovery. The first recorded radiation treatment was performed in 1896. Radiation’s benefits for the treatment of malignancy are well documented, and radiation therapy is widely used. More recently, the use of radionuclides to treat certain illnesses, both malignant and nonmalignant, has become established. Radiation also has been used at various times to treat several nonmalignant conditions, including ankylosing spondylitis, postpartum breast tenderness, and scalp ringworm (3). It was even used for a brief period in efforts to increase fertility (3). These uses have been discontinued.
Various forms of radium were put to more esoteric uses (2). The Radium Eclipse Sprayer purported to be a combination insecticide and furniture polish. Radithor tonic contained 0.074 MBq (2 μCi) radium per bottle and was touted for its health benefits. Radium-containing beverages were called “liquid sunshine cocktails.” One could even play “radium roulette” (the wheel, balls, and chips were painted with radium) or purchase a glowing radium-painted crucifix.
Radium also produced some of the early harmful consequences of radiation. In the 1920s, women who painted watch dials with radium were diagnosed with osteosarcomas and osteonecrosis. Even before that, radiation injuries had been reported. Becquerel left a vial of radioactive material in his coat breast pocket, resulting in erythema and skin ulceration, the first recorded radiation injury. The first known casualty resulting from radiation exposure was Clarence Dally, Thomas Edison’s assistant. Dally was involved in Edison’s early work with x-rays, all of which was performed without benefit of shielding. Dally’s death was horrific: his flesh became ulcerated, his hair fell out in clumps, and he underwent multiple amputations of both arms in an effort to stem the progression of radiation damage.
The event that truly caught the public’s attention, however, was the dropping of 2 atomic bombs in World War II. The combination of immediate devastation, acute radiation injuries, and increased leukemia incidence among survivors cemented in the general consciousness the idea that radiation is harmful. Further experiments, such as the Megamouse genetic work at Oak Ridge National Laboratory in the 1950s, added to our understanding of radiation’s risks. More recent incidents, such as the Chernobyl explosion, have only reinforced these perceptions.
In the same time frame, scientific experiments using tissue culture illuminated the effects of radiation on a cellular level. Irradiation of cells was found to cause DNA damage, division delay (longer than normal time to the next division cycle), reproductive failure (inability to maintain cell division over a long period of time), and interphase death (cell death before the next cell cycle). These experiments solidified the understanding (widely held at that time) that radiation is harmful.
TWO RADIATION PARADIGMS
The bureaucratic response to this knowledge of radiation’s dangers was embodied in the linear no-threshold (LNT) hypothesis. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) in 1958 proposed 3 relationships of risk-versus-radiation dose (Fig. 1). Of the 3 relationships, the linear model is the most conservative, because it predicts the greatest amount of harm at low levels of radiation. In 1960, the International Commission on Radiological Protection (ICRP) and its U.S. counterpart, the National Council on Radiation Protection and Measurements (NCRP), made an operational decision to accept this model as the basis for recommendations for radiation exposure limits. Because it was presumed to afford the greatest protection, the LNT hypothesis became the basis of the accepted model, or paradigm, for radiation’s ability to cause harm.
A paradigm is an interlocking set of assumptions about the operation of a complex system (4). It is a model of how the system works. Once accepted by the scientific community, a paradigm tends to channel attention and research funding into “acceptable” directions. Observations that fail to fit the paradigm may be ignored or suppressed. This is not a conspiracy but, instead, a reflection of human nature. When we believe something to be true, we discount alternative statements that contradict the “truth” as we perceive it. In general, a paradigm must be conclusively disproved before a new paradigm can be accepted.
The LNT hypothesis assumes that the passage of a single charged particle through a single cell could cause damage to DNA that could lead to a genetic defect or a cancer. Thus the radiation paradigm based on the LNT hypothesis has the following tenets:
Radiation exposure is harmful;
Radiation exposure is harmful at all exposure levels;
Each increment of exposure adds to the overall risk; and
The rate of accumulation of radiation exposure has no bearing on risk.
These tenets reflect the mechanistic assumption that each particle of ionizing radiation (α, β, or γ) can cause a DNA mutation, which, in turn, can potentially lead to a cancer.
Further, the assumed chain of proportionality from radiation exposure to DNA damage to fatal illness allows the same principles to be applied on a population basis. The risk to a population in a radiation incident is directly proportional to the aggregate radiation exposure of that population. This implies that individually trivial doses can add up to detectable population health effects and that such a risk estimate is valid even if actual exposures vary widely from individual to individual. For example, the risk to a population of 1 million people, all exposed to 0.01 mSv (1 mrem) (a trivial dose according to all regulatory bodies) is equivalent to the combined-population risk of 10 people exposed to 1 Sv (100 rem) (a dose capable of causing significant hematologic effects) and the other 999,990 receiving no exposure.
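As a purely numerical illustration of this population-averaging logic, the sketch below works through the collective-dose arithmetic of the example above; the helper function and its name are introduced only for illustration and are not part of any regulatory formula.

```python
# Minimal sketch of the collective-dose arithmetic used in the example above.
# The function and its name are illustrative; the numbers are those quoted in the text.

def collective_dose_person_sv(n_people, dose_sv_each):
    """Collective dose (person-Sv) for a group uniformly exposed to dose_sv_each."""
    return n_people * dose_sv_each

# 1,000,000 people each receiving 0.01 mSv (1 mrem)
scenario_a = collective_dose_person_sv(1_000_000, 0.01e-3)

# 10 people each receiving 1 Sv (100 rem); the other 999,990 receive nothing
scenario_b = collective_dose_person_sv(10, 1.0)

print(scenario_a, scenario_b)  # both evaluate to 10.0 person-Sv
# Under LNT, equal collective doses imply equal predicted harm for the
# population, however unevenly the dose is distributed among individuals.
```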
In the last decade, an alternative hypothesis called radiation hormesis has gained adherents. The term “hormesis” describes any physiologic effect that occurs at low doses of a substance and cannot be anticipated by extrapolation from the substance’s toxic effects at high doses. Some everyday examples of hormesis include the effects of vitamins, trace elements, and hormones (5). In each instance, a small amount of the substance is beneficial but a large amount is toxic. Similarly, radiation hormesis proposes that low levels of radiation exposure produce health benefits. The radiation hormesis hypothesis can be shown graphically as a decrease in the radiation risk at low levels (Fig. 2).
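To make the contrast with the LNT line concrete, the following sketch evaluates one possible hormetic dose-response curve against a linear no-threshold curve. The functional form and every parameter value are assumptions chosen solely to reproduce the qualitative shapes of Figures 1 and 2; they are not fitted to any data cited in this article.

```python
# Purely illustrative comparison of the two dose-response shapes discussed here.
# All parameter values below are assumptions chosen only to produce the
# qualitative shapes of Figures 1 and 2.
import math

def lnt_excess_risk(dose_msv, slope=1.0):
    """LNT: excess risk rises in direct proportion to dose, with no threshold."""
    return slope * dose_msv

def hormetic_excess_risk(dose_msv, slope=1.0, benefit=200.0, scale=100.0):
    """Hormesis: an assumed low-dose protective term pulls risk below the
    baseline before the linear harm term dominates at higher doses."""
    return slope * dose_msv - benefit * (1.0 - math.exp(-dose_msv / scale))

for dose in (0, 50, 100, 200, 400):
    print(f"{dose:>4} mSv  LNT: {lnt_excess_risk(dose):7.1f}   "
          f"hormetic: {hormetic_excess_risk(dose):7.1f}")
# Negative values in the hormetic column correspond to the dip below the
# natural incidence shown in Figure 2; at higher doses both curves rise together.
```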
The basis for the new paradigm of radiation hormesis is the fact that the most important possession of a living cell is its DNA, which must be protected above all else. DNA is subject to many stresses, of which radiation is only one. Thus, it makes sense that cells have developed mechanisms to safeguard their DNA and that these mechanisms are effective against radiation damage. A number of protective mechanisms are available to cells and are discussed here.
Free Radical Scavengers.
One of the main stresses put on a cell is the creation of free radicals and reactive oxygen species. These are created by natural metabolic processes and by various chemical agents (6) as well as by radiation interactions. They cause damage by removing electrons or hydrogen atoms from cellular molecules. As a first line of defense, a cell can protect itself from this damage by producing radioprotector molecules, such as glutathione. These molecules, called free radical scavengers, can repair the damage caused by reactive oxygen species by donating a hydrogen atom to the affected molecule.
DNA Repair Enzymes.
If this initial protective action fails, DNA damage may have occurred. Repair must be effected correctly and efficiently for the cell to remain viable. This, too, is a common occurrence. It is estimated that 10,000 genetic modifications occur spontaneously each hour in our bodies (7). Enzymes with names such as phosphatase, polymerase, endonuclease, and glycosylase are able to “inspect” and repair damage to the DNA.
Division Delay and Cell Death.
Finally, a cell can also safeguard its DNA in the long term by not passing on damaged chromosomes to its progeny. Thus, division delay after radiation exposure can be seen as a chance to “scrutinize” the DNA before undergoing a division. Reproductive death and interphase death likewise can be viewed as preventing badly damaged DNA from being passed on to daughter cells. Interphase death, in particular, is an interesting phenomenon that resembles apoptosis, or programmed cell death (as opposed to necrotic cell death). Apoptosis occurs spontaneously in healthy tissues when a cell has reached the end of its life span or in embryos when a selected cell is no longer needed. When apoptosis occurs outside of these specific circumstances, it appears that cells have a way to destroy themselves once their DNA is significantly damaged (8). Apoptosis has been shown to occur in virally infected cells and in cells made ischemic by an acute thrombotic stroke or coronary artery blockage.
The main argument against the current radiation paradigm as it is applied in the regulations is that it does not take these mechanisms into account. In extrapolating from high- to low-dose patterns of exposure, it neglects the fact that organisms have always been exposed to low doses of radiation and have found ways to deal with these doses. The radiation hormesis paradigm, on the other hand, holds that these mechanisms are always at work and that they are effective against low doses of radiation as well as other stresses. Furthermore, these mechanisms are stimulated by ongoing low-level radiation exposure and thus lead to overall improved health. Only when they are overwhelmed by high doses of radiation do they break down, with resulting demonstrable harm to cells and organisms.
In humans, the result of the cellular protective mechanisms against radiation, according to the radiation hormesis hypothesis, is a decrease in the death rate, specifically in death from cancer. Because the protective and reparative mechanisms are working at a higher level than they would in the absence of radiation exposure, mutations (both those caused by radiation and those that occur spontaneously) are found and repaired or destroyed. Thus, the whole-organism response to low-level radiation exposure in the radiation hormesis paradigm is a state of improved health when compared with that in the absence of radiation exposure. This leads to the decrease in the death rate at low radiation levels as shown in Figure 2.
In a sense, the issue is one of interpretation. In the highly charged atmosphere of the 1950s, all effects of radiation were seen as evidence of radiation’s dangers. The radiation hormesis paradigm takes these same effects and puts them in a positive light. In other words, it views division delay, reproductive death, and interphase death not as harmful effects but as evidence of the ability of a cell to deal with radiation.
Our scientific training reminds us that no hypothesis stands without evidence. The radiation hormesis hypothesis is supported by both epidemiological and experimental evidence.
EVIDENCE SUPPORTING RADIATION HORMESIS
Epidemiological Studies
One of the first places that we find supporting evidence for radiation hormesis is in the data on atomic bomb survivors. Figure 3 shows leukemia incidence as a function of radiation dose. Note that the data points actually decrease below the natural incidence of leukemia in the low-dose range. Many scientists have tried to write off this part of the graph as a statistical artifact, but it certainly resembles the proposed hormetic effect of radiation illustrated in Figure 2.
Two studies have looked at death rates in areas of high natural background radiation. One study in China compared an area with average radiation exposure of 2.31 mSv/y (231 mrem/y) with a similar area with only 0.96 mSv/y (96 mrem/y) average exposure (9). The cancer mortality rate was lower in the high-background group, but this difference was statistically significant only in the 40- to 70-y age group (i.e., those who had the greatest lifelong exposure to high background levels of radiation). A study in India showed an inverse correlation between background radiation levels and cancer incidence and mortality (10).
Because there is less attenuation of cosmic radiation at high altitudes, a region at high altitude can be studied as a high-background area. One such study used 2 regions: a low-altitude region (<300 m [1,000 ft]; 825,000 inhabitants) and a high-altitude region (>900 m [3,000 ft]; 350,000 inhabitants) (11). The cancer death rate was lower in the high-altitude group. This study was controlled for industrialization, urbanization, and ethnicity but not for smoking or diet, which may limit its value.
Two studies have considered the effect of occupational radiation exposure among nuclear industry workers. One in Canada found that nuclear industry workers had cancer mortality that was 58% of the national average (12). In the same study, nonnuclear power industry workers had cancer mortality equal to 97% of the national average, thus discrediting in this case the “healthy worker effect” that often is a problem in epidemiology. Matanoski et al. (13) reported on 700,000 U.S. shipyard workers, including 108,000 nuclear shipyard workers. The 29,000 nuclear shipyard workers with the highest cumulative doses (>5 mSv) had a 24% lower death rate (from all causes) than 33,000 nonnuclear workers, a group with a death rate equal to that of the general population.
The study that caught the attention of the health physics community was published by Bernard Cohen of the University of Pittsburgh (14). Cohen studied the relationship between home radon levels and lung cancer rates, with the idea that he could prove or disprove the LNT hypothesis at low radiation doses. Radon is a daughter in the 238U decay chain. Because its natural state is gaseous, it can be inhaled into the lungs. If it decays there, its solid, heavy-metal decay products lodge in the airways and emit α-particles. Thus, increased home radon levels are expected to result in an increased incidence of lung cancer. Cohen deliberately chose this study because it fits the population averaging used by the LNT paradigm: in large populations exposed to low but measurable amounts of radon, an increase in cancer rates should be detectable.
Cohen’s study compared lung cancer death rates with radon levels in 1,600 counties in the United States. He found that the death rate from lung cancer decreased 7% for each additional 0.037 Bq/L (1 pCi/L) of radon in the air. No one believed this at first, including Cohen. He reanalyzed his data to correct for migration patterns, smoking, and 54 other socioeconomic variables. He continued to find a negative relationship between radon levels and lung cancer death rates.
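For a rough sense of the magnitude of this finding, the sketch below applies the reported slope as a simple straight-line trend. The linear form and the sample radon levels are illustrative assumptions; the relationship is only meaningful over the low range of county-average radon concentrations in Cohen's data, not as an extrapolation to high exposures.

```python
# Rough numeric reading of the slope Cohen reported: about a 7% decrease in
# county lung cancer mortality per additional 0.037 Bq/L (1 pCi/L) of indoor
# radon. The straight-line form and the sample radon levels are assumptions
# used only for illustration.

def relative_lung_cancer_mortality(radon_pci_per_l, slope=-0.07):
    """County lung cancer mortality relative to a hypothetical zero-radon baseline."""
    return 1.0 + slope * radon_pci_per_l

for radon in (0.5, 1.0, 2.0, 4.0):
    print(f"{radon:4.1f} pCi/L -> {relative_lung_cancer_mortality(radon):.2f} x baseline")
# On this reading, a county averaging 2 pCi/L would show roughly 14% lower lung
# cancer mortality than the zero-radon extrapolation, which is the opposite of
# the increase the LNT paradigm predicts.
```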
Cohen’s study was highly controversial in the health physics community, because it so completely contradicted the LNT paradigm. A major criticism of the study was that it assumed that average exposure determines average risk, whereas most epidemiological studies relate individual exposures to individual risks. Cohen’s response was that the tenets of the LNT paradigm allow population averaging and, therefore, his study design should be a valid test of the hypothesis. The results of Cohen’s analysis continue to spark discussion and debate.
Experimental Studies
In addition to epidemiological studies, we should also examine experimental data before we endorse the idea of radiation hormesis. Actual experimental evidence of the repair of radiation damage has been available since 1960, when Elkind and Sutton-Gilbert (15) demonstrated the phenomenon of sublethal damage and its repair. A lethal dose of radiation, when split into 2 portions and separated by 2 h or more, produces much less cell killing than the same dose administered at one time.
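One conventional way to express this split-dose effect is the linear-quadratic survival model. The sketch below uses that model with generic, assumed α and β values; it is a textbook-style illustration rather than the analysis reported by Elkind and Sutton-Gilbert.

```python
# Sketch of the split-dose (sublethal damage repair) effect using the
# linear-quadratic survival model, S = exp(-(alpha*D + beta*D**2)). The model
# choice and the alpha/beta values are generic radiobiology assumptions for
# illustration; they are not parameters from the cited experiment.
import math

ALPHA = 0.2   # Gy^-1 (assumed)
BETA = 0.05   # Gy^-2 (assumed)

def surviving_fraction(dose_gy):
    """Fraction of cells surviving a single acute dose, per the LQ model."""
    return math.exp(-(ALPHA * dose_gy + BETA * dose_gy ** 2))

single_course = surviving_fraction(8.0)  # 8 Gy delivered all at once

# 4 Gy + 4 Gy separated by enough time for complete repair of sublethal damage,
# so the quadratic (sublethal) component resets between the two fractions
split_course = surviving_fraction(4.0) * surviving_fraction(4.0)

print(f"single 8 Gy: {single_course:.4f}   split 4 + 4 Gy: {split_course:.4f}")
# The split course leaves roughly five times as many surviving cells in this
# example, which is the repair phenomenon described above.
```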
More recent studies have sought to demonstrate a hormetic effect of radiation. In one experiment, lymphocytes irradiated with 1.5 Gy (150 rad) demonstrated 30%–40% chromosome breakage. When preirradiated with 0.01–0.03 Gy (1–3 rad), followed by 1.5 Gy (150 rad), the chromosome breakage dropped to 15%–20% (16). Another study demonstrated not only a decrease in mutations when cells were preirradiated compared with nonpreirradiated cells, but also showed that the kinds of mutations in preirradiated cells were qualitatively different than in cells not preirradiated (17). A third study showed that a single low dose of radiation (0.001 Gy [0.1 rad]) reduced the probability that a cell would undergo neoplastic transformation (18).
Radiation exposure can be shown to activate cellular protection and repair mechanisms. Feinendegen et al. (19) demonstrated that the glutathione levels in cells increase for about 5 h after a radiation exposure. In the same time period, DNA synthesis is inhibited. Early enzymatic repair of DNA damage is roughly doubled in cells irradiated with 0.25 Gy (25 rad) followed by 2 Gy (200 rad) compared with cells irradiated only with 2 Gy (20).
Low levels of radiation exposure have also been shown to have a stimulatory effect on the immune system. Hashimoto et al. (21) implanted tumors in the leg muscles of rats. The rats were then treated in 3 groups: total-body irradiation, local irradiation of the implant site, and no radiation (control). Those rats receiving 2 Gy (200 rad) total-body irradiation demonstrated fewer metastases, more CD8+ T-lymphocytes in the spleen, and more lymphocytes infiltrating the tumor compared with the local irradiation and control groups. Neither radiation regimen had any effect on the growth rate of the implanted tumor itself.
Several other lines of evidence demonstrate the hormetic effect of radiation. The survivors of the atomic bombs at Hiroshima and Nagasaki, as a group, are living longer than a control group (5). In the 1940s and 1950s, Lorenz and colleagues exposed mice and guinea pigs to 110 mR/d until their natural deaths. The exposed animals had longer life spans by 2%–14% and 50% greater body weight than unexposed controls (22).
CONSEQUENCES OF ACCEPTANCE OF THE RADIATION HORMESIS PARADIGM
Given the significant body of evidence in its favor, it is appropriate to consider the changes that could occur in our ways of dealing with radiation if the radiation hormesis hypothesis were to become generally accepted. One example of the effect of its acceptance in the nuclear medicine department would be in the use of syringe shields. Syringe shields have always been required for injections, except under extenuating circumstances, most often when a vein is difficult to access. However, the time period in which a syringe shield is effective is very short, and injections may be easier and quicker without syringe shields. Most syringe shields are costly and easily broken. Acceptance of radiation hormesis might make them less necessary. It is notable that the latest revision of the Nuclear Regulatory Commission regulations governing medical licensees, which took effect in October 2002, no longer mentions syringe shields (23).
Radiation hormesis could have a larger impact on several environmental issues that relate to nuclear medicine. One example is the disposal of low-level radioactive waste. Much basic science research is performed using 3H and 14C as tracers. These isotopes are also commonly used in the early stages of radiopharmaceutical development. All of this waste must now be disposed of as radioactive, at considerable cost. However, both isotopes emit only low-energy β-particles. Under the radiation hormesis paradigm, these wastes could be disposed of as nonradioactive medical wastes.
A second example is the regulation of nuclear reactors, which are used for radioisotope production as well as power generation. The construction of a new nuclear reactor must meet Environmental Protection Agency limits for exposure to the general public. The current limit, 1 mSv/y (100 mrem/y) (24), is much lower than the limits on occupational exposure and less than the average annual radiation exposure received from natural sources. Acceptance of the radiation hormesis paradigm might allow this level to increase, making it less difficult and costly to build nuclear reactors, which, in turn, could increase the availability of radioisotopes for medical purposes.
U.S. regulations aimed at reducing health risks from radiation have associated costs in the billions of dollars. These costs have essentially no demonstrable benefit and, in fact, may have significant deleterious effects according to proponents of the radiation hormesis hypothesis (25,26).
CONCLUSION
The “great debate” over radiation hormesis continues, and respected scientists take sides in opposition to one another over this issue (5). In addition to contradictory interpretations of the results of experimental and epidemiologic studies, argument continues over the definition of radiation hormesis and the ability of science to prove its effects definitively. Because the effects of low-level radiation are slight in comparison with other risks incurred in modern society, it is doubtful that this debate will ever be resolved conclusively. As a result, it is unlikely that the current radiation protection regulations will be loosened to any significant extent.
The United Nations committee that originally proposed the LNT hypothesis has shown some softening on the issue. The 1994 UNSCEAR report concluded that the phenomenon of radiation hormesis has been proven at the cellular level and recommended further research on radiation-induced adaptive responses (27). The NCRP recently completed an evaluation of the LNT hypothesis and concluded “that there is no conclusive evidence on which to reject the assumption” of an LNT dose-response relationship (28).
This picture looks different in terms of research funding. The U.S. Department of Energy has begun a new program to research the effects of low doses of radiation exposure. Approximately $20 million over 10 y has been appropriated for research projects on low-level radiation effects at the molecular level (29). Many proponents of radiation hormesis view this move as a bureaucratic “first step” toward acceptance of the hypothesis.
Regulations relating to radiation exposure continue to reflect the societal perceptions about its dangers. Just as the regulatory environment became more restrictive in the 1950s and 1960s with an increased understanding of the harmful effects of radiation, so it may in the future become less restrictive if the concepts of radiation hormesis become more accepted. An instructive example of the way in which changes in public attitudes result in regulatory revisions can be seen in the regulation of saccharin (30).
Saccharin is an artificial sweetener that came into widespread use in the 20th century. After studies found that it caused cancer when given to laboratory rats in large quantities, a ban was proposed under the Delaney Clause (passed by Congress in 1958), which stated that “no carcinogen shall be deliberately added to or found as a contaminant in food.” The dosage of saccharin that produces cancer, however, is the human equivalent of 1,600 cans of soft drink per day, compared with an average human consumption of 3 or fewer cans per day. In 1996, this absolute standard was relaxed for many applications to the less restrictive standard of a “reasonable certainty of no harm.” This change reflects our greater understanding that specific risks must be viewed in the context of the large number of other risks we willingly take. Similarly, it makes sense to regulate radiation exposure based on its risk of causing harm within the context of the magnitude of other risks present in our society.
For nuclear medicine technologists, the radiation hormesis hypothesis, even in its current unproven state, holds 2 specific benefits. The first is as a counterweight to the fears that the general public and our patients have about radiation. Take, for example, the questions that patients commonly raise about the harmful effects of a diagnostic nuclear medicine study. A nuclear medicine technologist might indicate that there is a diversity of opinion on this subject and could cite scientific evidence suggesting that radiation has beneficial effects at the level of radiation doses used in diagnostic examinations. This will assist our patients in understanding that radiation, like most things in life, has both risks and benefits and is not to be feared inordinately.
The second benefit is to our peace of mind as occupationally exposed persons. Based on the evidence cited, we can be comfortable with our current levels of radiation exposure, knowing that the radiation hormesis paradigm supports the conclusion that these levels are not harmful. An important caution: Nuclear medicine technologists should continue to practice good radiation protection techniques using time, distance, and shielding and to limit unnecessary exposure, because radiation exposure in large amounts is harmful even under the radiation hormesis hypothesis. But it is worthwhile to consider the radiation hormesis paradigm, which would answer in the affirmative the question: “Could all that radiation be good for us?”
Footnotes
For correspondence or reprints contact: Jennifer L. Prekeges, MS, CNMT, Nuclear Medicine, C5-NM, Virginia Mason Medical Center, P.O. Box 900, Seattle, WA 98111-0900.
E-mail address: radjlp@vmmc.org