Learning Objectives
  • To understand different scenarios for human exposure to ionizing radiation excluding medical treatment procedures such as those for cancer treatment.

  • To understand long-term health effects associated with low-dose radiation exposure.

  • To gain knowledge on the worldwide distribution of indoor radon concentrations.

  • To understand how naturally occurring radon affects human health.

  • To understand the critical need for immediate triage and to learn about current triage tools for radiation exposure categorization of radiation accident victims.

  • To gain a better understanding of internal contamination and decontamination.

  • To gain knowledge on clinical consequences of early and delayed effects of acute exposure to high doses of ionizing radiation.

  • To become familiar with the characteristic features of the three sub-classes of acute radiation syndromes (hematologic, gastrointestinal, and neurovascular), as well as effects on skin and lungs.

  • To explain the basis for and biological meaning of LD50.

8.1 Radiation Exposure Scenarios

8.1.1 Introduction

Individuals can be exposed to ionizing radiation in many accidental or intentional situations, with different ranges of dose, dose rate, and radiation quality. Human exposures can occur directly from external radiation sources or through either internal or external contamination with radioactive materials. In certain circumstances, external radiation exposure may occur concomitantly with external or internal contamination (Fig. 8.1). While the radiation dose from external exposure or internal contamination can be substantial, the health risk from external contamination depends strongly on the penetrating ability of the radionuclide's emissions. Since alpha particles are effectively blocked by a piece of paper or by the upper layer of the skin, the risk from external contamination with alpha emitters is expected to be negligible. Most external contamination can be eliminated by washing and/or removing contaminated clothing.

Fig. 8.1

External exposure and contamination: in external exposure, radiation from an outside source passes through the body and causes damage; in external contamination, radioactive material deposited on the body surface irradiates it; in internal contamination, radioactive material is incorporated into the body

The following sections list the types of exposure scenarios with specific examples.

8.1.2 Medical Radiation Exposures to Patients

Aside from radiation oncology, which is well described in Chap. 6, medical exposures can occur from diagnostic and therapeutic procedures other than cancer treatment. These procedures can result in exposure not only to the patient but also in utero to the embryo or fetus. Worldwide, medical exposures account for almost 20% of the average human exposure from all sources [1].

8.1.2.1 Diagnostic Radiology

Doses from diagnostic radiology procedures range from very low doses in dental radiography to higher doses from computed tomography (CT) or fluoroscopy procedures. In general, radiation doses from diagnostic procedures tend to be low and are therefore unlikely to cause deterministic effects (discussed in Sects. 2.7.2 and 8.2), but repeated fluoroscopy-guided procedures in particular, such as angioplasty, may result in substantial skin doses to patients (see also Sect. 8.1.6). Table 8.1 lists the effective doses (see definition in Sect. 8.7) associated with commonly used diagnostic procedures, although these doses can vary among countries. The total number of these procedures conducted worldwide each year is around 4 billion (2.6 billion radiography, 1.1 billion dental radiography, and 400 million CT examinations), and the number has been steadily increasing over the past 25 years, especially for CT [2].

Table 8.1 Examples of each type of diagnostic procedure and their typical doses (reproduced with permission from [2])

8.1.2.2 Radiation Treatment (Non-cancer)

There have been many instances of therapeutic exposures unrelated to cancer treatment in the past, including patients treated with radiation for ankylosing spondylitis (total body doses of 0.86–4.62 Gy) to relieve pain and children treated for tinea capitis (ringworm of the scalp) with brain doses ranging from 0.75 to 1.7 Gy. There is evidence of increased cancer rates in these populations [4, 5], and alternative treatments that do not involve radiation have since been adopted. Presently, radiotherapy is used for non-malignant conditions such as benign tumors, pain relief in arthritis, and arteriovenous malformations, as shown in Fig. 8.2 [6]. These procedures deliver doses ranging from 5 to 60 Gy, in single or multiple fractions.

Fig. 8.2

Non-malignant conditions most commonly treated with radiation therapy, as a percentage of all international radiotherapy institutes surveyed (n = 508): keloid, 78%; Graves' orbitopathy, 69%; heterotopic bone formation, 61%; desmoid and aggressive fibromatosis, 54%; pterygium and arteriovenous malformation, 41%; histiocytosis, 38%; arthrosis, 37%; nasopharyngeal angiofibroma, 33%; tendinitis, 32%. (Data extracted with permission from [6])

8.1.3 Occupational Exposures

Radiation exposures can occur in many occupational settings, with the highest average effective doses reported in the nuclear sector, although these show a steadily decreasing trend owing to increased knowledge about the effects of radiation and improved radiation protection practices over the past decades. Figure 8.3 shows occupational exposure data over a 27-year period. The average effective dose per worker is highest in the nuclear sector. However, because there are far more monitored workers in the medical field (~7.5 million in 2002) than in the nuclear (~0.66 million in 2002) and industrial (~0.85 million in 2002) sectors, collective exposure is highest in the medical field, followed by nuclear power and industrial uses of radiation.
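These figures can be roughly cross-checked from the definition of collective effective dose, the sum of individual effective doses over the workforce. A sketch with rounded numbers, in which the ~0.5 mSv average individual dose for medical workers is an illustrative assumption rather than a value from the text:

$$ S = N \times \bar{E} \approx (7.5 \times 10^{6}) \times (0.5~\mathrm{mSv}) \approx 3750~\mathrm{man~Sv}, $$

which is consistent with the order of magnitude of the medical collective dose shown in Fig. 8.3.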

Fig. 8.3

Data for estimated occupational exposures from 1975 to 2002. (Reproduced with permission from [1, 7])

8.1.3.1 Exposures to Medical Staff or Personnel

Among occupationally exposed groups, the medical profession makes up the single largest group of workers exposed in the workplace. This group encompasses nurses, doctors, technicians, and other support workers. The procedures mostly comprise diagnostic imaging and radiation therapy, whose use has been increasing yearly as technology develops and the benefits become more widespread. Table 8.2 shows some of the specific medical professions with the highest average exposures in Canada, based on dosimeter readings. These values vary from country to country but are similar in countries with comparable levels of health care.

Table 8.2 Selection of the most highly exposed medical occupations among monitored workers (reproduced with permission from [8])

Occupational doses in diagnostic radiology are quite variable owing to the wide range of technologies in use. For example, most CT technologists receive no measurable dose, whereas individual effective doses from interventional procedures, such as fluoroscopy-guided vascular surgery, are significant; the medical doctors performing these procedures are the most occupationally exposed group in diagnostic radiology. Depending on the procedure, the occupational dose can range from 0.008 to 2 mSv per intervention. Diagnostic radiation is also frequently used in dental clinics, so the number of devices and exposed workers is extremely large. Owing to improved equipment, the average annual effective dose in dental radiology has decreased over the last few decades, from 0.32 mSv in the late 1970s to 0.06 mSv in the early 1990s [1].

Nuclear medicine involves the use of radionuclides, particularly 99mTc, to investigate physiological processes and organ function. Occupational exposures result from personnel having to be in close contact with patients when injecting and positioning them, during which time they can be exposed to the gamma radiation emitted by the radionuclides. Preparation of the radionuclides can also result in high exposures, with annual doses up to 5 mSv and doses to the hands and fingers up to 500 mSv. There are several other nuclear medicine techniques with different exposures, such as positron emission tomography using 18F-labeled fluorodeoxyglucose and thyroid treatment with 131I, to name a few. Worldwide, the annual collective effective dose is on the order of 85 man Sv and had been increasing with the growing number of workers in this field. Since the 1980s, however, the annual collective dose has stopped increasing, as the average annual effective dose was reduced from 1 mSv to about 0.75 mSv [1].

Radiotherapy for treating malignant disease delivers the highest doses to patients; however, occupational doses in this setting remain very low. These procedures are well described in Chap. 5. The collective annual dose among radiotherapy workers has decreased substantially since the 1970s despite the increasing number of workers in this field, owing to a large drop in the annual average effective dose per worker.

8.1.3.2 Nuclear Workers

Workers throughout the nuclear fuel cycle (mining, milling, enrichment, fuel fabrication, reactor operation, and reprocessing) are exposed to ionizing radiation [1]. This group of workers is the most closely monitored for radiation exposure. As mining and reactor operation employ the most workers, the collective effective dose is highest for these practices. The average annual effective dose for nuclear workers has decreased steadily since the mid-1970s, from 4.1 mSv to currently 1.0 mSv, and the collective effective dose has decreased since the 1980s from 2500 to 800 man Sv (Fig. 8.4). These reductions are due to the implementation of ALARA programs, which have improved plant designs, implemented upgrades, and improved operational procedures [1].

Fig. 8.4

Global trends in the number of monitored workers, and in collective effective doses and effective doses to workers for different practices of the nuclear fuel cycle. (Reproduced with permission from [1])

8.1.3.3 Industrial Radiography

Industrial radiography is a non-destructive method used to detect defects in materials such as welded pipelines and castings. It can involve X-rays or gamma-ray sources sealed in capsules (e.g., 60Co and 192Ir). Radiation penetrates the object being examined and exposes a detection system behind it. The devices used are designed to protect the operator, and annual effective doses to workers under normal use are less than 0.5 mSv.

8.1.3.4 Military

Most military exposures result from the fabrication and testing of nuclear weapons, the use of nuclear energy on naval vessels, and the use of ionizing radiation for activities similar to civilian applications (e.g., research, transport, and non-destructive testing). Data from the USA indicate that the average annual effective dose to monitored military individuals from all military activities is on the order of a few tenths of a mSv. There has been a substantial decrease in average doses since the 1970s, when annual effective doses to monitored military workers were as high as 1 mSv [1].

8.1.4 Elevated Exposure to Natural Sources

Enhanced levels of natural radiation are found in several occupational settings. Because the radiation is naturally occurring, workers are not routinely monitored, so exposure levels are not well known. Miners make up a large group of these occupational exposures, with an estimated collective dose of about 30,000 man Sv [9]. Aircrew are another group exposed to naturally occurring radiation and have been identified as one of the most highly exposed professional groups, with exposure levels of 3–8 μSv/h during flight depending on latitude and altitude. Worldwide, the estimated collective effective dose to aircrew is about 900 man Sv. Overall, about 13 million workers worldwide are exposed to natural sources of radiation, with an estimated average effective dose of 2.9 mSv and an estimated collective effective dose of 37,260 man Sv. Unlike doses from man-made sources, this average effective dose is not decreasing appreciably, and because the number of workers is increasing, the collective dose rose between the early 1990s and the early 2000s [9].
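The in-flight dose rates quoted above translate directly into annual doses once flight hours are known. A minimal Python sketch, in which the 700 block hours per year is a hypothetical figure, not a value from the text:

# Annual effective dose for aircrew from the in-flight dose-rate
# range quoted above (3-8 microSv/h, depending on altitude and latitude).
DOSE_RATE_RANGE_USV_PER_H = (3.0, 8.0)

def annual_aircrew_dose_msv(flight_hours_per_year, dose_rate_usv_per_h):
    """Annual effective dose (mSv) = hours flown x in-flight dose rate."""
    return flight_hours_per_year * dose_rate_usv_per_h / 1000.0  # uSv -> mSv

hours = 700  # assumed annual block hours for a long-haul crew member
low, high = DOSE_RATE_RANGE_USV_PER_H
print(f"{annual_aircrew_dose_msv(hours, low):.1f}-"
      f"{annual_aircrew_dose_msv(hours, high):.1f} mSv/year")
# -> 2.1-5.6 mSv/year, consistent with aircrew being among the more highly
#    exposed occupational groups and with the 2.9 mSv average quoted above.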

8.1.5 Miscellaneous

In addition to those mentioned above, there are a number of other professions in which radiation might be involved. These include, but are not limited to, research in academic institutions, management of spent radioactive sources, and transport of radioactive material. Academic institutions account for 92% of the monitored workers in this category and about 87% of the collective dose. Overall, the average annual effective dose for all monitored workers in this category is less than 1 mSv, with doses decreasing from 0.5 to 0.1 mSv between 1975 and 2004 [9].

8.1.6 Accidental Exposures

8.1.6.1 Medical Accidents

Unintended exposures in medicine are defined as exposures that differ significantly from those intended for the given purpose and are considered medical errors. These events include operator errors, equipment failures, and other mishaps, with consequences ranging from minor to severe. They can occur in both diagnostic and therapeutic procedures and may also result in unintended doses to an embryo or fetus. The most serious overexposures can deliver skin doses high enough to cause tissue reactions; these typically arise from CT and interventional fluoroscopy procedures, most notably perfusion studies [10].

8.1.6.2 Nuclear Power Plant Accidents

Despite the adoption of safety measures to reduce the risk of accidents at nuclear power plants (NPPs), there have been several accidents, as well as near misses, with varying degrees of impact and radiation exposure to workers and the general population [11]. These accidents are characterized by the release of large amounts of radionuclides with relatively short half-lives [12]. Three past incidents of high impact are Three Mile Island in 1979 [13], Chernobyl in 1986 [14], and Fukushima in 2011 [15]. Accidents at NPPs can result in high doses to a small population of clean-up workers (e.g., Chernobyl) as well as small doses to a large population living in the vicinity of the NPP (e.g., Fukushima). These and other accidents can be rated according to the International Nuclear Event Scale (INES) based on the severity and impact of the incident (Fig. 8.5). Accidents can also occur during the transportation of nuclear waste by road or rail, with 137Cs, a γ-emitter, being the primary exposure concern for this waste. Although the fuel is well packaged during shipment, the amount of radioactive material may be on the order of PBq per shipment container, so any dispersal would be catastrophic. In general, occupational exposures tend to involve low doses and low dose rates.

Fig. 8.5

The International Nuclear Event Scale (INES) rates events by severity and impact on eight levels, from 0 to 7: below scale, anomaly, incident, serious incident, accident with local consequences, accident with wider consequences, serious accident, and major accident

8.1.6.3 Industrial Radiography

During industrial radiography, accidents can occur in multiple ways: loss of control of the radiation source, damage to the source, direct contact with the source, or improper use of shielding [16]. Even when operating procedures are correctly followed, dose rates close to the source can be very high, causing overexposures in a matter of seconds. Table 8.3 lists a few examples of accidents due to inadequate regulatory control, failure to follow operational procedures, inadequate training, inadequate maintenance, and human error.

Table 8.3 Selected accidents in industrial radiography

8.1.6.4 Other Accidental Exposures

An orphaned source is a self-contained radioactive source that is no longer under proper regulatory control. These sources can come from both therapeutic and industrial radiation machines and can have activities in the TBq range. As long as they remain sealed, they do not cause contamination, but because of their high activity they can, when opened, cause high doses, severe health effects, and even death, as occurred in Thailand in 2000 [22]. Their containment can also become compromised, spreading radioactive material over large areas, as occurred in Goiania, Brazil, in 1987 [23].

8.1.7 Malicious Exposures

The health consequences of a malicious exposure to radiation will depend on the exposure scenario. Although there is a long list of possible attacks involving radiation, the following three are considered the most probable.

8.1.7.1 Improvised Nuclear Devices (INDs)

INDs incorporate nuclear material that can produce a nuclear explosion. When detonated in or close to a major city, this can cause extensive blast (mechanical), thermal, and radiation injuries, with large numbers of fatalities and casualties, and high radiation doses to potentially large numbers of individuals. Radiation injury can result from the prompt radiation delivered within minutes near the epicenter of the explosion, which is predominantly γ-rays and neutrons. Delayed exposures can result from fallout, produced by fission products and neutron-induced radionuclides and dispersed downwind from the epicenter. Finally, ground shine can result from the deposition of radionuclides on the ground in the fallout area, which is highly dependent on wind direction and speed (Fig. 8.6). Use of an IND is considered highly unlikely but possible; hence, it is necessary to be prepared for such events. The result of such an event would be catastrophic: thousands of people could be killed by the blast and heat, hundreds to thousands could be killed or made ill by radiation effects, and thousands could face an increased long-term risk of leukemia or solid cancer. Furthermore, the psychological and infrastructure effects would be enormous [24].

Fig. 8.6

Approximate prompt and delayed (fallout) effects from a 10-kT detonation. (Reproduced with permission from Lawrence Livermore National Laboratory)

The population that has had the greatest impact on risk assessment is the A-bomb survivors. A large Japanese population was exposed in 1945 during the atomic bomb attacks on Hiroshima and Nagasaki. This cohort comprises the Life Span Study (LSS), which includes 94,000 in-city subjects of all ages and both sexes, with dose estimates ranging up to 4 Sv. Long-term follow-up of this cohort has provided high-quality mortality and cancer incidence data [25]. The majority of survivors were exposed to doses below 0.1 Sv and therefore provide excellent data in the dose range of interest for radiation protection. This cohort has also provided data on in utero and early childhood exposures [26].

8.1.7.2 Radiological Dispersal Devices (RDDs)

RDDs use explosives or mechanical means to disperse radiological material, resulting in radioactive contamination. This is considered a more likely scenario than an IND. With an RDD, a relatively small area would be affected, and radiation exposures could take the form of both internal and external contamination; however, exposures are expected to be below medically significant levels. Most likely, a small number of individuals would be contaminated with radioactive material.

8.1.7.3 Radiological Exposure Devices (REDs)

REDs involve hidden sealed sources designed to expose people to significant doses without their knowledge and without causing contamination. They are usually hidden in a busy public location, such as under a seat on a bus or in a sports stadium, and could remain undetected for long periods. Individuals who come close to these sources can receive significant localized doses, but the number of highly exposed individuals is anticipated to be low.

8.2 Long-Term Health Effects of Low-Dose Radiation in Exposed Human Populations

8.2.1 Radiation Effects in the Developing Embryo and Fetus

It is generally accepted that the developing embryo and fetus are more radiosensitive than children or adults. As with other health effects, at low doses (<100 mGy) stochastic risk is the main driver for protecting the fetus (see Chap. 1). Deterministic effects or tissue reactions, mainly central nervous system effects and congenital malformations, are reported at higher doses; however, the evidence is somewhat sparse.

Evidence for fetal radiation effects comes mostly from animal studies performed with high doses of in utero radiation. Evidence from human populations is limited to larger scale exposures such as the A-bomb survivors, small-scale accidents, and medical uses of radiation (e.g., Gilbert [26]). The relevant animal data suggest thresholds for non-cancer effects, including small fetal size, microcephaly, and intellectual disability (see also Sect. 2.7.2). However, due to interspecies differences and different selection pressures, it is impossible to draw conclusions about the levels at which such effects occur in humans. For radiation protection purposes, at least, epidemiological studies are therefore more reliable.

The human data, however, are limited. Only one epidemiological study has been able to provide evidence of brain damage in humans following in utero exposure. Of about 10,000 women who were pregnant at the time of the atomic bombs at Hiroshima and Nagasaki, the children of about 1700 have been followed into adulthood. The study identified 27 children with severe “mental retardation” (now more commonly termed intellectual disability), 30 children with small head size without apparent intellectual effects, 24 children who suffered from seizures with no clinically identifiable precipitating cause, and a larger group of children with reduced intelligence (IQ) scores or lower than expected scholastic achievement, compared with the unexposed population.

While the sample sizes were small and the uncertainties correspondingly large, the key finding, still much quoted today, was that neurocognitive effects were observed only in those exposed to doses >~0.5 Gy, and only during the 8–15 week post-conception period, corresponding to the key period of neurogenesis and neuronal migration [27,28,29].

For earlier stages of embryogenesis, there is some evidence that preimplantation exposure to doses below 100 mGy may lead to miscarriage. During the major period of organogenesis, approximately 2–15 weeks post-conception, exposures on the order of 0.25 Gy may lead to smaller head sizes and the associated reduction in intellect; this period is also particularly sensitive for the induction of cancer. After 15 weeks, the threshold for increased cancer risk appears to be on the order of 100 mGy, while the threshold for severe intellectual disability remains ~500 mGy (Fig. 8.7).

Fig. 8.7

Relationship between ionizing radiation-induced tissue effects and embryo/fetal stage of development. (Reproduced with permission from [30])

In 2001, UNSCEAR concluded that there were no definite adverse pregnancy outcomes (malformations, stillbirths, premature births) related to exposure from the Chernobyl accident. In more recent years, however, evidence has emerged that 131I internalized by pregnant women following Chernobyl crossed the placenta and resulted in thyroid cancer in their children. Children born to Chernobyl 131I-exposed individuals also had dose-dependent longer gestational periods and smaller head and chest sizes, but normal birth weights. While stunted cerebral growth during critical periods of neurogenesis accounts for microcephaly and the related developmental effects, the biological mechanisms behind the effect on gestational period are still largely unknown [7].

Fetal death following exposure in utero appears to occur only after doses >2 Gy; however, most of the evidence for this still comes from animal studies. There is also limited evidence linking fetal radon exposure to increased risk of disease; for example, excess brain cancer has been observed in children born to pregnant women drinking water with high levels of radon [31].

In terms of cancer risk, there is a clear link between doses received in utero and childhood and adult cancers, including childhood leukemia. In the A-bomb survivors exposed in utero, the most recent evidence, using individual estimates of the mother's weighted absorbed uterine dose, supports a continued increased risk of solid cancer mortality in females but not in males. As with the previous data, the effects of radiation on non-cancer disease mortality in this cohort appeared to be mediated through small head size and low birth weight, but also parental survival status. The most recent data suggest that the excess risk of childhood cancer (up to 15 years of age) is on the order of 6% per Gy, with approximately half of the cases being fatal. These data are summarized in Table 8.4.

Table 8.4 Health effects as a function of gestational age for humans (reproduced with permission from [29])
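As a rough worked example of how such a risk coefficient is applied, assuming simple linear scaling (the 50 mGy dose is an illustrative value, not one from the text):

$$ \text{excess childhood cancer risk} \approx 0.06~\mathrm{Gy}^{-1} \times 0.05~\mathrm{Gy} = 0.003 = 0.3\%, $$

i.e., about 3 additional childhood cancer cases per 1000 children exposed in utero to 50 mGy, roughly half of them fatal on the figures quoted above.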

It is worth noting that, on the basis of the current (albeit limited) evidence, for occupational radiation protection purposes in the UK, as in many other countries, the unborn fetus is treated as a member of the public; hence, the effective dose limit is 1 mSv/year.

8.2.2 Radiation-Induced Heritable Diseases

8.2.2.1 Context and Definition

Mutations occur naturally in somatic and germ cells, potentially leading to cancers and heritable genetic diseases, respectively. In 1927, Muller first showed the mutagenic effects of X-rays in Drosophila, and similar findings were rapidly reported for other radiation types and organisms. These experimental animal data established the concept that radiation induces genetic damage. Concerns then arose about genetic effects in large numbers of people, especially after the atomic bomb detonations. The UNSCEAR and BEIR committees decided to follow the potential heritable effects of radiation in the exposed Japanese population, even though other environmental factors can interfere; the goal pursued by both committees is to predict the additional risk of genetic diseases in humans exposed to radiation. However, no association between radiation exposure and the occurrence of heritable effects has been observed in humans to date [7]. Like cancers, genetic diseases such as hemophilia, color-blindness, and congenital abnormalities do not arise specifically from ionizing radiation; they also occur spontaneously or due to other environmental and/or genotoxic factors, with no clinical features specific to radiation.

The concept of “radiation-inducible genetic diseases” relies on different parameters. Every cell contains genetic material in the form of DNA, and mutations in DNA may lead to genetic diseases such as malformations, metabolic disorders, or immune deficiencies. When mutations are induced in the gonads or germ cells (oocytes or sperm or their precursors) of an exposed individual, heritable effects may occur in their offspring. To produce a genetically abnormal offspring, the mutation must successfully pass through many cell divisions to form a viable live-born infant. Further, to be of genetic significance, gonadal exposure must occur before or during the person's reproductive period. This gives rise to the concept of the genetically significant dose. Thus exposure of, for example, a post-menopausal woman, or someone who never intends to have children, carries no associated “heritable” risk [7, 32].

8.2.2.2 Extrapolation from Mice Data and in Humans

It is important to note that ionizing radiation does not produce new types of genetic diseases or unique mutations; it is assumed to increase the incidence of the same mutations that occur spontaneously, thereby increasing the incidence of the known spectrum of diseases in the population. Hence it is important, as far as possible, to have a good understanding of the background risks. There is very little direct human data on radiation-induced genetic disease. Evidence for the heritable genetic effects of radiation comes almost entirely from animal experiments, initially performed at Oak Ridge National Laboratory in the mega-mouse project (7 million mice studied to determine specific-locus mutation rates). These experiments led to the development of relevant concepts, including the doubling dose, i.e., the dose required to double the background frequency of genetic conditions detectable in the newborn population [33]. The project led to five main conclusions: (1) the radiosensitivity of different mutations varies by a factor of about 35; (2) there is a dose-rate effect, with fewer mutations induced by chronic than by acute exposures; (3) oocytes are exquisitely radiosensitive; (4) the genetic effects of a given dose are reduced when there is a time interval between exposure and conception; and (5) there are differences between male and female mice, but with a doubling dose on the order of 1 Gy for protracted exposures. Given that the background frequency, life span, selection pressures, and spectrum of genetic disease in the laboratory mouse are very different from those in humans, caution must be applied in using these data for human radiation protection purposes. Nevertheless, such data are important.

Estimates then need to be made of the mutational component of the different classes of human genetic disease; clearly, this differs considerably between dominant gene disorders and multifactorial conditions. The selection pressure on mutations lost by death during embryonic/fetal development also needs to be assessed. Finally, an assessment of the transmissibility of abnormalities through further generations needs to be made [7].
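These ingredients are combined in the doubling-dose method used in the UNSCEAR risk estimates; schematically (a sketch of the formalism as commonly presented, where PRCF denotes the potential recoverability correction factor):

$$ \text{risk per unit dose} \approx P \times \frac{1}{DD} \times MC \times PRCF, $$

where P is the baseline frequency of the disease class in the population, DD ≈ 1 Gy is the doubling dose for protracted exposure, and MC is the mutational component, the fraction of the disease frequency that responds to a change in mutation rate.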

8.2.2.3 Disease Classes and Influencing Factors

Evolution depends on the existence of mutations, with beneficial mutations conferring an advantage; however, their random nature ensures that the vast majority of mutations are harmful. Alterations can involve single genes (point mutations in the DNA code) or chromosomal aberrations.

8.2.2.3.1 Mendelian Diseases

Diseases caused by mutations in single genes are known as Mendelian diseases. The majority (67%) are caused predominantly by point mutations (base-pair changes in the DNA), followed by both point mutations and DNA deletions within genes (22%), and by intragenic and larger deletions (13%). They are divided into autosomal dominant, autosomal recessive, and X-linked conditions depending on the chromosomal location of the gene and the phenotype resulting from transmission.

  • Autosomal dominant: Dominant conditions are those in which the abnormality is seen even in the heterozygote (a person inheriting one mutant and one normal gene). Their effects in the homozygote (a double dose of the mutant gene) are usually more severe, if not lethal. Dominant mutations are expressed in the first generation after their occurrence. An example of a dominant gene condition is Huntington's chorea (HC), which is characterized by nerve cell damage and changes in physical, emotional, and mental state; HC is caused by a faulty gene on chromosome 4. Other examples include achondroplasia, neurofibromatosis, Marfan syndrome, and myotonic dystrophy.

  • Autosomal recessive: These conditions usually require homozygosity, i.e., two mutant genes at the same locus, to produce the disease trait; the mutant gene must be inherited from each parent. Recessive disorders are usually rare, as the mutation needs to be inherited from both parents. However, some recessive genes, even when present in a single dose (i.e., in a heterozygote, accompanied by a dominant normal gene), still confer slight deleterious effects. An example of a recessive gene disorder is cystic fibrosis, which is caused by mutations in a gene located on chromosome 7. Other examples include phenylketonuria, hemochromatosis, Bloom's syndrome, and ataxia-telangiectasia.

  • X-linked: These disorders involve genes located on the X chromosome, and a large proportion of inherited mutations are related to the X chromosome. Since males have only one X chromosome, mutant genes on it act as dominant in males, who are affected, whereas in females, who have two X chromosomes and act as carriers, they are masked. Mutations in these genes exert their effect in females only in the homozygous state and therefore appear as recessive conditions. Half the male offspring of a carrier mother will be affected, and half her female offspring will be carriers. Examples of sex-linked conditions are color-blindness and hemophilia.

8.2.2.3.2 Chromosome Aberrations

Chromosome aberrations are structural or numerical alterations that are microscopically visible (Fig. 8.8) and are efficiently induced by radiation. Many chromosomal abnormalities are not compatible with life and are lost as spontaneous abortions; they account for 40% of spontaneous abortions and 6% of stillbirths. There are exceptions, however, and the evidence suggests that abnormalities of the sex chromosomes do tend to be transmitted. Examples include Down syndrome, a trisomy of chromosome 21, and Turner's syndrome, a monosomy of the X chromosome; individuals with Turner's syndrome are, however, sterile. It is also interesting to note that while the X chromosome is dominant, (a single gene on) the Y chromosome determines sex.

Fig. 8.8

Example metaphase spreads with (a) dicentrics, tricentrics, and several fragments and (b) a translocation. These aberrations result from the fusion of sections of broken chromosomes

8.2.2.3.3 Multifactorial (Congenital Abnormalities, Chronic Diseases)

Multifactorial diseases are an additional class of effect, combining heritable (genetic) components with influences from environmental factors. Their transmission patterns do not fit Mendelian inheritance, and the interrelated concepts of genetic susceptibility and risk factors are more appropriate for describing these diseases. Chronic conditions that arise later in life, for example type II diabetes, tend to occur after an individual has already had children; in such cases, individuals may inherit a predisposition yet never suffer from the disease. Multifactorial diseases also include congenital abnormalities, which are present at birth. An example is cleft lip and palate, where most sufferers are missing a part of chromosome 22; this abnormality can be inherited, but in most cases the cause of the deletion is unknown.

8.2.2.3.4 Epigenetics and Imprinted Genes

Furthermore, epigenetic changes are now being considered for their involvement in radiation-induced heritable disease. These changes involve molecular modifications, such as DNA methylation or changes in the chromatin packaging of DNA by post-translational histone modifications, that can modulate gene expression without any alteration of the DNA sequence. Exposure to environmental factors at prenatal and early postnatal stages can alter this epigenetic programming, thereby increasing the risk of developing disease later in life.

Additionally, there is growing evidence that the expression of imprinted genes in the current generation depends on the environment experienced by the previous generation. Genomic imprinting, in which only one parental allele is expressed while the other is silenced, is a non-Mendelian, germline-inherited form of gene regulation mediated by heritable DNA methylation and histone modification (Box 8.1).

Box 8.1 Gene Mutations and Heritable Diseases

  • Gene mutations are molecular, sub-microscopic changes affecting the functionality of one or more specific gene loci.

  • There are three classes of Mendelian type gene mutations, where genes are inherited from each parent.

  • Other parameters, such as the environment, may contribute to radiation-induced heritable genetic diseases.

8.2.2.4 UNSCEAR and ICRP

Assessing the radiation risk of heritable effects in progeny is thus a complex task; however, it has been undertaken in UNSCEAR and ICRP reports [7, 32]. It is important to note that the data used for these risk calculations are uncertain and rest on several assumptions, hence the ranges. From these data, the ICRP assumes that exposure of a parent to a single gonadal dose of 1 Gy is responsible for 1 additional severe disease caused by radiation-induced mutations per 500 births, with the genetic risk lasting for up to 2 generations. With chronic gonadal exposure of 1 Gy, this proportion reaches 1 per 100 births, and heritable effects may persist for several generations. In this report, the total estimated risk for genetic diseases was about 3000 to 4700 cases per million first-generation progeny per Gy. The outcomes of the risk calculations, expressed as risks per Gy per million live-born children, are given in Table 8.5.

Table 8.5 Genetic risk from one-generation exposure to low LET low-dose or chronic irradiation with assumed doubling dose of 1 Gy (reproduced with permission from [7])

For risk estimation, the effects of high-dose irradiation have to be investigated in animal experiments, and the effects of low radiation doses on humans, which are difficult to measure unequivocally, have to be inferred from these results. How these data are applied in radiation protection is then the responsibility of the ICRP, which averages and combines the risks to generate a single risk estimate for all genetic effects, for both the reproductive and total populations (Table 8.6).

Table 8.6 Percentage risk per Gy for the reproductive and total population and up to two generations when the population sustains radiation exposure generation after generation (reproduced with permission from [34])

In this case, the ICRP assumes that people on average live to age 75 and cease having children by age 30. The genetically significant dose is therefore 40% (30/75) of the total population dose. For radiation workers, whom the ICRP assumes to begin working at age 18 and to finish having children by age 30, the work-specific heritable risk is further reduced, as illustrated in Table 8.7.

Table 8.7 ICRP recommended genetic risk coefficients for low dose or low-dose-rate low-LET radiation (reproduced with permission from [34])
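The population fractions follow from simple age arithmetic; a sketch in which the 65-year retirement age is an illustrative assumption, since the text does not state one:

$$ f_{\text{public}} = \frac{30}{75} = 0.40, \qquad f_{\text{worker}} \approx \frac{30 - 18}{65 - 18} = \frac{12}{47} \approx 0.26, $$

i.e., only the dose accumulated during the child-bearing years is genetically significant, and for workers the relevant fraction of the occupational dose is smaller still.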

8.2.3 Long-Term Tissue Effects: Cataract and CVD

8.2.3.1 Radiation-Induced Cataract

Cataract is the most common cause of blindness worldwide (World Health Organization, WHO). There are three types of cataract: nuclear cataract, characterized by hardening and opacification of the lens nucleus; cortical opacities, which begin at the lens cortices and form characteristic “spokes” pointing towards the center of the lens; and posterior subcapsular cataract, which develops on the capsule at the posterior pole of the lens. Subcapsular cataracts are the type most readily associated with radiation in the epidemiological literature [32] (Fig. 8.9).

Fig. 8.9

Protein fiber and cellular organization within the lens. (a) The lens is formed from a single cell layer of lens epithelial cells (LECs) that covers the anterior portion of the lens. The cells in the central region are mostly quiescent; meanwhile the proliferating cells are largely confined to the germinative zone (GZ) in the equator of the lens. After division, LECs migrate to the transitional zone (TZ), situated immediately adjacent to the GZ and most distal to the anterior pole. In the TZ, LECs begin differentiation to form lens fiber cells (LFCs) that comprise the bulk of the lens mass. They enter the body of the lens via the meridional rows (MRs), adopting a hexagonal cross-sectional profile, offset from their immediate neighbors by a half cell width to deliver the most efficient cell–cell packing arrangement that is perpetuated into the lens body as LECs continue their differentiation and maturation process into LFCs. (b) The lens sits in the anterior portion of the eye where it focuses light onto the retina to create a sharp image (top). However, when a cataract develops, the transmission of light is either blocked or not focused correctly (bottom), creating a distorted image. (c) Example of lens fiber sutures as viewed from the posterior pole of the lens in the healthy lens compared to a nuclear cataract, similar to that represented in (b). (Reproduced with permission from [35])

Until relatively recently, radiation cataract was thought to be a “deterministic” effect, now more commonly termed a tissue reaction, with a threshold of approximately 2 Gy for acute exposures and a potentially much higher threshold for chronic or protracted exposures. In recent years, however, it has become apparent that the latency period for radiation cataract may be many tens of years, and thus the threshold is likely to be much lower than previously thought; the best current estimate, based on the weight of epidemiological and population-based evidence, places the threshold on the order of 0.5 Gy. There is also emerging evidence suggesting that radiation cataract may be more stochastic in nature [32]. From the public health perspective, the high-dose response is relatively clear from many years of animal studies and the smaller number of epidemiological studies reviewed by ICRP [32], and there are a number of methods for the characterization and detection of cataract.

While mechanistic data on radiation cataract remain relatively sparse compared with, say, cancer, a number of publications have examined the radiobiological basis of cataract. In brief, the structure, function, and physiology of the lens are relatively well understood, as are the processes of lens fiber cell differentiation from the lens epithelial “stem” cell layer to the functional, carefully organized lens fiber cells that allow the passage and alignment of light for effective vision [35] (Fig. 8.10). Radiation is thought to act on several stages of this carefully balanced process: initial oxidative stress leading to genetic (DNA) damage; effects on the transcriptional responses of epithelial cells (interestingly, there is evidence that the genes involved have some connection with tumor-forming processes); and, finally, morphological changes apparent in the misalignment of mature fiber cells, which leads to opacification and functional cataract.

Fig. 8.10

Mechanisms of ionizing radiation response observed in human and animal lens epithelial cells or cell lines. Cx connexin, ECM extracellular matrix, FGF fibroblast growth factor, IR ionizing radiation, LEC lens epithelial cell. (Reproduced with permission from [35])

Recent work using animal models has highlighted the importance of the early phase DNA damage, proliferative, biochemical and proteomic/lipidomic responses, as well as the clear influence of genotype and pathways of response, age at exposure, sex, dose, and dose rate [36].

Further work is still needed, particularly on the mechanisms of cataract induction by higher-RBE or higher-LET radiation. At the time of publication, however, the current understanding is that radiation cataract is still best characterized for radiation protection purposes as a tissue reaction, but that low-dose chronic exposure can contribute to the “cataractogenic load” of combined genetic and environmental factors that ultimately determines whether an individual develops cataract [37] (Fig. 8.11).

Fig. 8.11

The latency of cataract and the lifelong cataractogenic load. (a) Timeline for lens aging. (b) Accumulated cataractogenic load without exposure to ionizing radiation. (c) Accumulated cataractogenic load after exposure to ionizing radiation. (Reproduced with permission from [37])

8.2.3.2 Diseases of the Circulatory System

In addition to its acute effects on the vascular system, ionizing radiation can influence the long-term development of cardiovascular disease (CVD) and metabolic effects that are major risk factors for diseases of the circulatory system. This section therefore considers a number of different diseases, including atherosclerosis, which can cause ischemic heart disease and cerebrovascular disease and can lead to acute myocardial infarction and stroke.

The effects of ionizing radiation on the circulatory system have long been researched, but only within the last 10 years or so has the weight of evidence made it possible to consider taking these effects into account in the system of radiation protection [38]. Currently, circulatory disease is considered a “deterministic effect” or tissue reaction, with a threshold on the order of 0.5 Gy and a long latency period.

Most of the epidemiological evidence comes from medically (therapeutically or diagnostically) exposed individuals, with some data from occupationally or environmentally exposed cohorts (reviewed in [39, 40]). Medical exposure to ionizing radiation during radiotherapy of thoracic tumors, such as breast cancer, Hodgkin's lymphoma, and lung cancer, can involve incidental radiation exposure of the cardiovascular system, resulting in cardiovascular complications. This is especially an issue for women with left-sided breast cancer because of the higher cumulative dose received by the heart, estimated at approximately 6.6 Gy, compared with 2.9 Gy in women with right-sided breast cancer [41]. Cardiovascular disorders due to ionizing radiation are usually not seen until 10–15 years after exposure, although asymptomatic abnormalities may develop much earlier; this long asymptomatic period may be one reason why the radiation sensitivity of the heart was formerly underestimated. Recently, advances in radiotherapy and heart-sparing techniques, including target-specific dose delivery, deep-inspiration breath hold, and prone patient positioning, have decreased the mean heart dose from 4.6 Gy in 2014 to 2.6 Gy in 2017, as reported across 99 worldwide studies [42]. Despite this, mean heart doses remain relatively high, and late cardiovascular complications continue to occur. The late onset of ionizing radiation-induced cardiotoxicity represents a diagnostic challenge for the timely initiation of radioprotective therapy. Efforts are currently underway to identify early biomarkers of radiation-induced cardiotoxicity, which may help to screen patients at risk of developing cardiovascular complications after radiotherapy so that countermeasures and early medical intervention can be applied to prevent further cardiac toxicity [43]. Furthermore, research is exploring radioprotective agents that interfere with one or more of the identified pathophysiological mechanisms of ionizing radiation-induced cardiotoxicity.

Mechanistically, as with cataract, the high-dose effects are relatively clear, based on, for example, oxidative stress, DNA damage, and enhanced adhesion of endothelial cells, whose genomic and proteomic basis is also under investigation. Low-dose studies are much less common; lifestyle factors and genetic susceptibility are undoubtedly confounders of circulatory disease development, and indeed most of the mechanistic work to date has focused on the genetic basis of disease development. Genome-wide and targeted studies have identified, for example, the involvement of a number of genes associated with inflammation, differentiation, proliferation, and apoptosis, among other processes that ionizing radiation is already known to affect. In addition, a number of biological dynamic models for cardiovascular disease exist (the topics in this paragraph are reviewed in Tapio et al. [40]).

The most recent epidemiological evidence demonstrates that the probability of occurrence of these effects increases with dose, with no increase in severity; these are classical characteristics of stochastic radiation effects. However, the mechanisms are still highly unclear, and the low-dose effects, as well as the impact of dose rate, remain less studied [39, 40]. Recently, an adverse outcome pathway, an approach that helps to assemble current knowledge on well-accepted critical events linked to disease progression, has been proposed for radiation-induced cardiotoxicity. This may help to structure and simplify the available mechanistic information and can facilitate predictive interpretations beyond cellular or animal models, at the human population level (Fig. 8.12). The approach also helps to identify critical knowledge gaps for future research on radiation-induced cardiotoxicity, such as the need for experimental models of low-dose exposure and the need to understand radiation-induced epigenetic effects in the cardiovascular system [44].

Fig. 8.12

Proposed cell types in the heart, key events and adverse outcomes that may contribute to cardiovascular disease. Not all potential cell types and key events are listed and some of the key events listed may be common across the different cell types. ECM extracellular matrix, MCP-1 monocyte chemoattractant protein-1, NO nitric oxide, PPAR alpha peroxisome proliferator-activated receptor (PPAR)-alpha, ROS reactive oxygen species. (Reproduced with permission from [44])

8.3 Radon and Health Effects

Radon and thoron are natural radioactive noble gases resulting from the decay of uranium and thorium; they leak from the soil in concentrations that depend on local geological conditions. Radon and thoron are chemically inert and electrically neutral, so at physiological temperatures they undergo no chemical interactions [45].

There are several natural isotopes of radon and thoron, originating from different decay series, as shown in Table 8.8 [46].

Table 8.8 Natural isotopes of radon and thoron (based on [46])

In the open air, the concentration of radon and thoron is normally very low, but being gases, they tend to accumulate in non-ventilated areas (WHO). In buildings constructed on soils rich in elements of the radioactive decay series mentioned above, the released gases accumulate inside houses. The pressure difference between the subsoil and the interior of dwellings, together with diffusion, further favors this accumulation.

Figure 8.13 shows the arithmetic mean of the annual indoor radon concentration per grid cell (10 km × 10 km) in ground floor rooms in some European countries.

Fig. 8.13

European Indoor Radon Map: annual indoor radon concentration expressed as arithmetic means per 10 km × 10 km grid cells in ground-floor rooms. (Data received until December 2021 included; Reproduced with permission from European Commission. Joint Research Centre, EC-JRC, REM 2021)

As all radon and thoron isotopes are radioactive gases, after release into ambient air they accumulate indoors and decay into various unstable daughter nuclides. After decay in air, these radionuclides aggregate with other gases and water vapor, forming aerosols with diameters of 0.5–5 nm, which are easily inhaled, travel through the conducting airways, and reach the alveoli of the lungs [47, 48]. However, as the airways are saturated with water vapor, hydration of the aerosols can increase their diameter by up to about 10 times [49].

Once inhaled, the decay process occurs predominantly in the lungs; the main pathway of biological incorporation is inhalation of radioactive aerosols. Alpha emissions are the biggest contributors to the absorbed dose (about 90%), while beta and gamma emissions together contribute only about 10% [47, 50,51,52]. Aerosols are deposited according to three physical mechanisms: inertial impaction, sedimentation, and diffusion. Depending on the length, diameter, and bifurcation angle of the airways, as well as the aerosol diameter, deposition varies along the respiratory tract. Particles with larger diameters (2–50 μm) are deposited by inertial impaction in the nasopharynx, larynx, trachea, and bronchi down to the third division. For particles with intermediate diameters (100 nm–10 μm), sedimentation is the main deposition mechanism and occurs mainly in the lower respiratory tract, including the bronchioles and even the alveoli. For particles with diameters below 200 nm, Brownian diffusion predominates and occurs in the alveoli, where gas exchange takes place [47,48,49]. Additionally, the multiple divisions of the airways and the consequent turbulence produce a non-homogeneous deposition pattern [47, 53].
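The size regimes described above can be summarized in a simple lookup. A minimal Python sketch with simplified, non-overlapping cutoffs (the quoted ranges overlap, and real models such as the ICRP Human Respiratory Tract Model are far more detailed):

def dominant_deposition_mechanism(diameter_um):
    """Approximate dominant deposition mechanism for a particle diameter in um."""
    if diameter_um >= 2.0:        # ~2-50 um range quoted above
        return "inertial impaction (nasopharynx, larynx, trachea, upper bronchi)"
    elif diameter_um >= 0.1:      # ~100 nm-10 um range quoted above
        return "sedimentation (lower airways, bronchioles, alveoli)"
    else:                         # < ~200 nm range quoted above
        return "Brownian diffusion (alveoli)"

for d in (10.0, 0.5, 0.005):      # um; 0.005 um = 5 nm radon-progeny aerosol
    print(f"{d} um -> {dominant_deposition_mechanism(d)}")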

Considering the different radiosensitivities of the regions of the respiratory tract, in which the mucosal and basal bronchial epithelial cells are particularly radiosensitive [54], as well as the multiple divisions of the conducting airways, the largest dose is deposited at the bifurcation of the trachea [47, 55].

Radon, thoron, and their respective decay products emit alpha particles and beta and gamma radiation, as mentioned above, and the carcinogenic effect of these radionuclides is associated with the emitted ionizing radiation, which can directly or indirectly damage DNA [56, 57]. This DNA damage causes mutations that can lead to carcinogenesis and the development of malignant tumors.

The correlation of radon with the incidence of lung cancer has been unquestionably demonstrated by extensive epidemiological studies (BEIR VI). It is not excluded that radon can also cause kidney cancer, melanoma, hematologic cancers, primary brain tumors, and even stomach, liver, and pancreatic cancers [56, 58,59,60]. However, given the low penetration of radon beyond the respiratory system, the association with non-respiratory diseases is not proven [56].

Epidemiological studies of chronic radon exposure show that the estimated risk of carcinogenesis is related to the concentration to which the subject is exposed, the exposure time, and age [47]. For lung cancer, the risk appears to increase by 16% per 100 Bq/m3 of radon concentration [45, 47, 61]. With respect to mortality, there is a non-threshold linear correlation with exposure, and adding smoking to the radioactive exposure increases the risk of lung cancer even further [47, 61]. For primary malignant brain tumors, there appears to be a positive correlation between chronic radon exposure and mortality [58, 61]. The same seems to hold for non-cancer conditions such as Alzheimer's and Parkinson's diseases, although the pathophysiological mechanisms are not understood [60]. A similar correlation exists between chronic radon exposure and the incidence of chronic myeloid and lymphocytic leukemia and, in children, acute myeloid leukemia [45, 47, 61].
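The 16% per 100 Bq/m3 figure corresponds to a linear no-threshold excess-relative-risk model. A minimal Python sketch under that assumption (the concentrations chosen are illustrative):

ERR_PER_100_BQ_M3 = 0.16  # excess relative risk per 100 Bq/m3 [45, 47, 61]

def lung_cancer_relative_risk(radon_bq_m3):
    """Relative risk vs. no radon exposure, assuming linearity in concentration."""
    return 1.0 + ERR_PER_100_BQ_M3 * radon_bq_m3 / 100.0

for c in (100, 300, 600):  # Bq/m3; 300 Bq/m3 is a commonly used reference level
    print(f"{c} Bq/m3 -> RR ~ {lung_cancer_relative_risk(c):.2f}")
# -> 1.16, 1.48, 1.96: under this model the risk roughly doubles near 600 Bq/m3.

Note that this sketch ignores exposure time, age, and smoking, all of which modify the risk as described above.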

Latency times between irradiation and the development of malignant tumors are highly variable: for leukemia they range from 5 to 7 years, while for solid tumors they are much longer, from 10 to 60 years [45, 47, 61] (Box 8.2).

Box 8.2 Exposure and Risk of Radon Exposure

  • Radon and thoron are noble radioactive gases

  • There are several isotopes of radon and thoron

  • After inhalation, the decay process occurs mainly in the lungs

  • There is a carcinogenic risk associated with chronic radon exposure

8.4 Diagnosis and Medical Management of Radiation Syndromes

8.4.1 Introduction

Depending on the amount of energy deposited (the absorbed dose) as well as the radiation quality, significant whole-body or partial-body exposure to ionizing radiation may lead to acute clinical radiation effects resulting in an Acute Radiation Syndrome (ARS). This may be followed by Delayed Effects of Acute Radiation Exposure (DEARE), which take months to years to develop [62, 63].

Many aspects have to be considered in the diagnosis and management of radiation exposure. With respect to latency of occurrence, acute and chronic effects can be distinguished; the acute effects may require prompt diagnosis and immediate therapeutic intervention.

Considering the pathophysiological mechanisms, the effects can be distinguished as either deterministic or stochastic (see also Sect. 2.7.2). Deterministic effects are caused by radiation exposure exceeding a certain level (threshold) and become more severe with increasing dose. After whole-body irradiation, different categories of clinical syndromes can develop, usually depending on the absorbed dose: nausea, vomiting, diarrhea (NVD) syndrome (1–2 Gy), hematopoietic syndrome (2–6 Gy), gastrointestinal syndrome (>6 Gy) and central nervous/neurovascular syndrome (10–20 Gy). After local irradiation, permanent sterility of the gonads (0.1–6 Gy), lens opacity of the eye (0.5 Gy), skin erythema (3–6 Gy) and hair loss (4 Gy) may develop [32, 34]. Clinical dosimetry, based on the individual patient's clinical signs and symptoms, is important to define the severity of radiation exposure; a coarse sketch of the dose-to-syndrome mapping is given below.
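As a minimal illustration (not a clinical tool), the approximate whole-body dose bands listed above can be restated as a simple lookup; the band edges are rough and overlap in practice:

def expected_syndrome(dose_gy: float) -> str:
    """Map an estimated whole-body absorbed dose (Gy) to the approximate
    syndrome category quoted in the text; bands are illustrative only."""
    if dose_gy < 1:
        return "no acute syndrome expected"
    if dose_gy < 2:
        return "nausea, vomiting, diarrhea (NVD) syndrome"
    if dose_gy <= 6:
        return "hematopoietic syndrome"
    if dose_gy < 10:
        return "gastrointestinal syndrome"
    return "neurovascular / central nervous syndrome"

for d in (0.5, 1.5, 4.0, 8.0, 15.0):
    print(f"{d:4.1f} Gy -> {expected_syndrome(d)}")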

For stochastic effects, no threshold value is assumed; the probability of occurrence increases with radiation dose, and effects even after very low-dose exposure cannot be completely excluded [62].

8.4.1.1 External Contamination

In case of radionuclide contamination, establishing the presence of external contamination is very important, since decontamination should be performed as soon as possible while keeping people, equipment, and facilities safe in the process. However, urgent medical treatment has the highest priority, as lifesaving always comes first. Lukewarm water and mild soap should be used for the first line of decontamination, aiming for a reduction to background, or at most three times background, dose rate. In case of residual contamination, peeling products can be used to remove contamination adherent to the skin. If measurements indicate persistent contamination, the presence of radioactive particles in the skin (like shrapnel, requiring surgical removal) or internal contamination should be suspected.

8.4.1.2 Internal Contamination

In case of radionuclide ingestion and/or inhalation, identification of the radionuclide is crucial for selecting the appropriate decorporation therapy. Decorporation therapy must be carried out as fast as possible in order to reduce the absorbed radiation dose, since pharmaceuticals are often most effective if given immediately or within 2 h after ingestion or inhalation. This can be achieved by using blocking agents, diluting agents, chelating agents or decorporation drugs like Prussian blue, Zn-DTPA, Ca-DTPA and ammonium chloride [62, 64]. Physical decorporation measures, such as gastric lavage for ingested radioactive substances (if applied within 2 h of ingestion) and bronchoalveolar lavage for large amounts of insoluble inhaled radionuclides, can also be used [65].

8.4.2 Acute Radiation Syndromes

Acute radiation syndrome (ARS) develops when whole- or partial-body radiation exposure exceeds a certain dose, depending in part on individual radiosensitivity and radiation damage repair mechanisms. ARS is usually assumed to occur at whole-body doses above 0.5–1 Gy delivered at a high dose rate [32, 66, 67].

The deposition of energy at the molecular and cellular level leads to physico-chemical-biological consequences already described in the previous chapters.

The clinical evolution of the acute radiation syndrome is sequential: its canonical course begins with a prodromal phase, followed by a latent phase and the manifest illness phase, and ends with recovery or death [32, 62, 68].

In the prodromal phase, the exposed person has non-specific symptoms, easily confused with a flu-like syndrome. Anorexia, nausea, vomiting, diarrhea and, in some cases, erythema are frequent. The fluid loss caused by diarrhea may, depending on its intensity, be accompanied by fever, hypotension, and headache [62, 68, 69]. Given the non-specific nature of these signs and symptoms, exposure to radiation may not be the first clinical hypothesis, which makes information about the circumstances and awareness of radiological incidents very important. Prodromal signs and symptoms can appear at doses as low as 0.5 Gy, depending on individual radiosensitivity [62, 68]. This phase lasts from a few minutes to a few days, depending on the dose: the higher the dose, the shorter its duration. Except for people with increased radiosensitivity, the prodromal phase may be absent or mild for whole-body doses of 1 Gy or less. If signs and symptoms appear within the first 2 h, this usually indicates an exposure dose greater than 2 Gy; in this case, the symptoms are predominantly gastrointestinal, and the patients may survive if adequately treated. At doses greater than 10 Gy, severe, predominantly cerebrovascular symptoms develop, often within 5–15 min after exposure. A severe prodromal phase usually indicates a poor clinical prognosis that can lead to death [67, 69, 70].
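A minimal sketch of the timing heuristics just described, assuming only the coarse thresholds quoted in this paragraph (onset within 2 h suggesting >2 Gy; onset within 5–15 min suggesting >10 Gy); this is a rule of thumb for illustration, not a validated triage scale:

from typing import Optional

def rough_dose_category(onset_minutes: Optional[float]) -> str:
    """Coarse dose category from time to onset of prodromal symptoms."""
    if onset_minutes is None:
        return "no prodrome observed: dose likely about 1 Gy or less"
    if onset_minutes <= 15:
        return "onset within 5-15 min: suggests > ~10 Gy, poor prognosis"
    if onset_minutes <= 120:
        return "onset within 2 h: suggests > ~2 Gy"
    return "later onset: lower dose range (prodrome possible from ~0.5 Gy)"

print(rough_dose_category(90))  # -> onset within 2 h: suggests > ~2 Gy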

Doses that are associated with prodromal symptoms and signs in approximately 50% of irradiated people are given in Table 8.9.

Table 8.9 Doses that are associated with prodromal symptoms and signs (reproduced with permission from [32, 68])

The aforementioned prodromal symptomatology, which appears at doses from below 0.5 Gy up to about 3 Gy, seems to depend on damage to the cell membrane, with consequent release of inflammatory molecules from destroyed cells, and to be mediated by the parasympathetic system [32].

The second phase of ARS is the latent phase. In this phase, symptoms and signs diminish and may even disappear, so that the patient feels better and appears to have recovered. In fact, injuries are still developing, but the activated repair mechanisms can lead to complete repair (disappearance of symptoms and signs) or incomplete repair (reduction of symptoms and signs). The duration of the latent phase, which can vary from minutes to weeks, is also inversely related to dose: the higher the dose, the shorter its duration. Despite the absence of symptoms, it is during the latent phase that the most important consequences of radiation exposure develop; these become apparent in the manifest illness phase [69, 70].

If repair mechanisms are inefficient, the latent phase progresses to the manifest illness phase. The absence of a latent phase, i.e., when the patient passes directly from the prodromal phase to the manifest illness phase, indicates that the dose was very high. In the manifest illness phase, specific signs and symptoms appear, depending on the organ or system mainly affected; however, signs and symptoms from different systems may be mixed, which makes the diagnosis more complex. In this phase too, the signs and symptoms, as well as the duration, are dose dependent: the higher the dose, the earlier the symptomatology starts and the shorter the phase, which can last from minutes to weeks [32, 66, 69, 70].

In this phase, specific syndromes are described, commonly classified as the hematological, gastrointestinal, and neurovascular syndromes, depending on dose (Table 8.10). In addition to these syndromes, skin lesions and lung toxicity may also develop (Fig. 8.14).

Table 8.10 Acute radiation syndromes
Fig. 8.14
A flow diagram exhibits the phases of the clinical evolution of acute radiation syndrome. The phases are prodromal, latent, illness, and recovery or death. The illness includes hematopoietic, neurovascular, gastrointestinal, cutaneous, and pulmonary effects.

Scheme showing the sequence of phases of the Acute Radiation Syndrome and examples of symptoms

The hematological or hematopoietic syndrome targets the hematopoietic organs, above all the bone marrow. Generally speaking, the hematopoietic syndrome can develop from 1 Gy, with a latency time of 2–3 weeks. Before generalized failure of the hematopoietic system occurs, the progenitor cells of all lineages have to be irreversibly damaged, which can happen at doses of at least 2 Gy. Without treatment, death may occur 3–8 weeks after exposure.

The characteristic signs and symptoms of this syndrome include general malaise, anemia, leukopenia and thrombocytopenia. The decrease in the number of circulating blood cells produces secondary symptoms such as dyspnea, asthenia, hypoxia, fever and purpura. If death occurs, it is mainly due to infections and/or hemorrhage [32, 66, 69, 70]. Treatment requires cytokines, growth factors, antiemetics, antimicrobial agents (antibiotics, antifungals, antivirals) and analgesics; in some cases anxiolytics can also be useful. Allogeneic stem cell transplantation should only be performed in specific circumstances (homogeneous whole-body dose, availability of perfectly HLA-matched stem cells).

At higher radiation doses, symptoms corresponding to involvement of the gastrointestinal system (gastrointestinal syndrome) appear, specifically of the cells of the intestinal villi in the mucosa of the small intestine. This syndrome can appear from a dose of 5 Gy, with a latency time of 3–5 days. Complete loss of the intestinal mucosa occurs at doses above 10 Gy and is fatal within 3–14 days.

The characteristic signs and symptoms of this syndrome include general malaise, anorexia, nausea, vomiting, diarrhea, fever, dehydration, electrolyte loss and circulatory collapse, leading to death within a few days [32, 66, 69, 70]. Treatment requires adequate fluid administration, parenteral nutrition, growth factors, antiemetics, antimicrobial agents (antibiotics, antifungals, antivirals) and analgesics.

At still higher doses of ionizing radiation, the neurovascular system becomes involved (neurovascular syndrome): glial cells may be damaged at doses of 1–6 Gy, lesions of the endothelial cells of the cerebral vessels occur at 10–20 Gy, white matter necrosis appears at doses on the order of 40 Gy, and demyelination occurs at doses around 60 Gy. This damage leads to signs and symptoms including lethargy, tremors, convulsions, ataxia, pre-coma and coma, with death within hours.

This neurovascular syndrome can develop from a dose of 20 Gy with a latency time of 30 min to 3 h. Death occurs within 2 days after doses above 50 Gy [32, 66, 69, 70]. Treatment is usually only symptomatic with analgesics and sedatives.

In addition to these syndromes, other important changes can occur in other organs in response to ionizing radiation exposure. One of these organs is the skin, with consequent cutaneous effects. Cutaneous effects are deterministic and appear only above a certain threshold dose. The first changes appear in the hair follicles at doses above 0.75 Gy; with higher doses, other lesions appear. Approximately, epilation appears at doses of around 3 Gy, erythema at around 6 Gy, desquamation at around 10 Gy, and desquamation associated with edema, indicating a transepithelial lesion, at around 20 Gy.

Pulmonary effects, which appear over a very wide range of doses (from about 5 Gy up to 50 Gy), strongly reflect the rich vascularization of the lung. In this context, the endothelial cells of the small pulmonary vessels must be mentioned, as well as the type II pneumocytes, the alveolar cells that secrete surfactant, whose injury has major repercussions for pulmonary function. The pulmonary interstitium is also of special importance, as it responds to ionizing radiation exposure with an intense inflammatory process called radiation pneumonitis. This inevitably progresses to pulmonary fibrosis, with major clinical impact.

As mentioned, the manifest illness phase has variable duration and can progress to death or recovery, depending on dose, dose rate and target organs. Recovery is associated with the lower doses at which hematopoietic and/or gastrointestinal syndromes occur, especially if adequate medical treatment is provided. If doses are high enough to induce the neurovascular syndrome, the likely outcome is death; in this situation, death occurs a few hours after irradiation and results predominantly from systemic vascular effects associated with multi-organ failure [70, 71].

8.4.2.1 Delayed Effects of Acute Radiation Exposure

Delayed effects of acute radiation exposure (DEARE) occur in patients who recover after exposure, that is, when doses were low enough that the hematologic and/or gastrointestinal syndromes developed and were survived under adequate medical treatment.

The delayed manifestations of these syndromes lose their acute expression and present with signs and symptoms reflecting involvement of various organs such as the lung, heart, kidney and central nervous system, in addition to the original target organs, the bone marrow and gastrointestinal system. The condition evolves toward progressive failure of the organs involved, potentially ending in death. Given these characteristics, treatment with radioprotective drugs and/or radiomitigators is indicated and must be administered as soon as possible after the acute irradiation. This approach has the double goal of reducing the severity of the initial damage and the late-onset pathology [71, 72].

8.4.2.2 LD50 (Lethal Dose 50)

The concept of lethal dose (LD) is a pharmacological concept that can be applied to the consequences of exposure to ionizing radiation; it is defined as the radiation dose that kills members of an irradiated population. This broad concept is made more specific by considering the dose that kills 50% of an irradiated population, called the median lethal dose (LD50) [70, 73].

The characteristics of the biological effects of radiation, namely the duration of the latency time, together with the great individual variability, led to a refinement of this concept. Thus, reference is often made to the LD50/30 (the dose that kills 50% of the irradiated population within 30 days) or the LD50/60 (the dose that kills 50% within 60 days). The LD50/60 for a healthy adult ranges between 2.5 and 3 Gy, while the LD50/30 ranges between 2.5 and 4.5 Gy. These values assume whole-body irradiation and the natural history of the disease, that is, the absence of medical care, and are based on data from the survivors of Hiroshima and Nagasaki [34, 66, 74]. Although the LD50/30 and LD50/60 are similar concepts, they provide complementary and very important information. Regarding somatic effects, the LD50/60 implies that if an individual survives for more than 60 days after irradiation, recovery is expected.

For bone marrow, the LD50/60 ranges from approximately 3.5 to 4.5 Gy; with supportive medical care, such as blood transfusions combined with antibiotic therapy, it can shift to values between 5 and 6 Gy, and with more robust treatments, such as the administration of hematopoietic growth factors, values from 6 to 8 Gy can be achieved [66] (Box 8.3).
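A minimal sketch, assuming a logistic dose-mortality model: the sigmoid form and the slope parameter s are illustrative assumptions, while the LD50 values correspond to the ranges quoted above. By construction, the modeled mortality equals 0.5 exactly at the LD50, whatever the slope:

import math

def mortality(dose_gy: float, ld50_gy: float, s: float = 0.8) -> float:
    """P(death within the observation window) under an assumed logistic model."""
    return 1.0 / (1.0 + math.exp(-(dose_gy - ld50_gy) / s))

for label, ld50 in [("no medical care", 4.0),
                    ("supportive care", 5.5),
                    ("growth factors", 7.0)]:
    probs = [round(mortality(d, ld50), 2) for d in (2, 4, 6, 8)]
    print(f"{label:16s} LD50={ld50} Gy -> P(death) at 2/4/6/8 Gy: {probs}")

The sketch illustrates how supportive care shifts the whole mortality curve to higher doses, which is the sense in which treatment "changes" the LD50.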

Box 8.3 Acute Radiation Syndrome

  • Acute radiation syndromes appear after whole-body irradiation

  • After an irradiation, the biological consequences appear in four stages: prodromal, latent, manifest illness, and recovery or death

  • When recovery occurs, delayed effects of acute radiation exposure can manifest

  • LD50 is the dose that kills 50% of the irradiated population within 30 days (LD50/30) or 60 days (LD50/60)

8.5 Methods of Triage for Treatment After a Radiation Accident

8.5.1 Introduction: The Need for Triage and Introduction to Exposure Scenarios

When individuals are exposed to ionizing radiation in an accidental scenario, there is an urgent need to categorize them not only in terms of the urgency of their need for treatment, but also in relation to the ionizing radiation exposure itself. In general, radiation accidents lead to external radiation exposure and/or external or internal contamination with radionuclides. Exposures in situations requiring triage tend to be acute, but chronic exposures also need to be considered. Furthermore, exposure or contamination can be approximately homogeneous or highly heterogeneous. Hence, the available tools and processes need to be flexible and sufficient to allow appropriate triage in a variety of potential situations. This section considers the need for initial triage including decontamination, specific considerations related to radiological triage, and the need for late follow-up. Communication with the public is also important; it is not covered in detail here, but further information can be found in the TMT Handbook [75].

8.5.2 Initial Triage: Trauma, Decontamination, and Other Considerations

In general, triage is used to screen patients with severe injuries after a mass incident, including chemical, biological, radiological, nuclear, or explosive (CBRNE) events. The first steps during triage are to classify the affected persons according to the type and severity of the injuries sustained, accurately assessing prognosis and survival expectancy, and to minimize the consequences of the event through the timely administration of first aid and/or treatment. After a catastrophic event, the affected individuals can suffer from severe injuries, including tissue or bone trauma and thermal and/or chemical damage, in addition to ionizing radiation exposure [75, 76].

Initial triage should be swift, simple, and based on universal guidelines, especially because it is often performed in a danger zone within the vicinity of the accident; furthermore, triage will initially be based on the immediate threats to life and not on radiation exposure and/or contamination. This point cannot be overstated: primary medical attention will always be aimed at dealing with immediate life-threatening conditions. The primary aim is to determine the transport priority of the victims to the hospital, screening the wounded in the area for later medical attention (Table 8.11). However, the classification of the injured and affected victims should be continuously re-evaluated, as a victim's condition can change very quickly. There are several triage systems, for example, SALT (Sort, Assess, Lifesaving Interventions, Treatment/Transport), START (Simple Triage and Rapid Treatment, adults), and JumpSTART (Simple Triage and Rapid Treatment, children). These systems have four main color-coded categories [75, 78].

Table 8.11 Classification of victims of the radiation accident based on initial triage (e.g., START) [77] (reproduced with permission from the USDHHS Radiation Emergency Medical Management, https://chemm.hhs.gov)

If people are exposed to radioactive material, they are swiftly screened during triage by the first responders at the scene of the accident, i.e., paramedics, to assess their condition (Table 8.12). The aim of the triage system is to identify the victims with severe trauma and provide first aid and evacuation. The trauma triage system has three priority categories (P1–P3) (Table 8.13).

Table 8.12 Classification of externally irradiated individuals according to the received dose (reproduced with permission from [75])
Table 8.13 Classification of victims of the radiation accident based on trauma triage (reproduced with permission from [75])

Victims with trauma injuries should be identified first, and medical attention for them is a priority (Fig. 8.15); however, if ionizing radiation exposure poses a risk to both victim and first responder, the victim must be moved from the area to reduce the dose rate. Contamination with radioactive material, both external and internal, is to be expected in these incidents, especially after an explosion. All victims in categories P1, P2 and P3 may be contaminated with radioactive material; therefore, triage differs between these groups. Victims sorted into category 1 are immediately transported to the hospital without prior decontamination, and the medical staff must be made aware of this fact. Serious injury is to be expected for victims sorted into category 2, but their evacuation can be delayed; decontamination should therefore be performed before transport to the hospital, otherwise the hospital staff should be informed that decontamination has not taken place. Victims sorted into category 3 should be decontaminated at the site of the accident, or given information on how to perform self-decontamination and sent home. Decontamination for these victims is not performed at the hospital, and their medical treatment has lower priority than for category 1 and 2 victims (Table 8.14) [75, 79, 80]. It should also be noted that non-surviving victims of a mass biological, chemical, radiological or nuclear event are a potentially hazardous source of ionizing radiation.

Fig. 8.15
A flow diagram exhibits the steps to identify the trauma injuries. 1. Check for a physical injury. If not, exit trauma triage. 2. If yes, proceed with P 1 and go to the hospital. If not, proceed with P 2, followed by decontamination near the site. 3. Radiological triage at the hospital.

Schema for trauma triage. (Reproduced with permission from [75])

Table 8.14 Injury priority and decontamination (reproduced with permission from [75])

8.5.3 Radiological Triage

Following initial triage for trauma as described above, category 2 and 3 victims should be monitored and further evaluated in the next steps of triage, which comprise information about the location at the time of the accident and/or radiological analysis based on clinical signs and symptoms, in order to identify individuals who may have received doses high enough to cause deterministic effects.

As much information as possible should be collected regarding the type and energy or activity of the source and the dispersal of the radiation within the environment contributing to exposure/contamination. Regarding individual information, considerations include the time of direct contact or distance from the source when in proximity, whether the exposure took place within an enclosed or open environment, whether or not the individual was within the line of sight of the source, and if the source was mobile. Such information is needed for all potential points of exposure and can then be used to help prioritize individuals for treatment as described in more detail in the TMT Handbook [75]. In addition, such information can contribute to modelling of radiation exposure at the individual or population level, as considered in Chap. 4.

Section 8.4.2 describes the prodromal clinical signs and symptoms associated with approximately whole-body (>60% of the body) ionizing radiation exposure, which can be used to estimate the radiation dose and the potential severity of ARS for category 2 and 3 patients, as well as for any individuals identified through the location analysis as potentially having received doses high enough to cause deterministic effects (Fig. 8.16). These individuals should be monitored for onset of nausea and vomiting, diarrhea and/or erythema.

Fig. 8.16
A line graph plots time and patient percent versus dose. The line for time begins at (2, 4.6), decreases gradually, and ends at (10, 0.8). The patient line begins at (2, 36), increases gradually, and ends at (10, 99). Values are estimated.

Relationship between time to onset of vomiting and dose between 2 and 10 Gy. (Reproduced with permission from [75])

In addition, differential blood cell counts should be taken, according to Fig. 8.17, at 8-h intervals on the first day and 12-h intervals on the second day, with decisions regarding later intervals taken according to the indicated severity of the complete blood count (CBC) suppression, the number of potentially exposed individuals and the available capacity. Where the CBC indicates that significant doses have been received, or significant effects are expected, chromosome aberration analysis should be carried out to obtain a more concrete individual dose estimate, as detailed in Sect. 8.6. A simple way to quantify the depletion kinetics from serial lymphocyte counts is sketched after Fig. 8.17.

Fig. 8.17
A multi-line graph plots lymphocytes versus time. The y-axis ranges from 0 to 3000, and the x-axis ranges from 0 to 2. The values are plotted for normal, moderate, severe, and critical injuries. 4 lines follow a decreasing trend.

Lymphocyte depletion with dose and time post exposure, following whole-body doses exceeding 1 Gy. (Reproduced with permission from [75])
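As a minimal sketch of how serial counts could be reduced to a single depletion parameter, assuming simple exponential lymphocyte depletion L(t) = L0 * exp(-k * t); the counts below are hypothetical, and converting the rate constant k into an absolute dose would require a validated model (cf. Fig. 8.17), which is not attempted here:

import numpy as np

hours = np.array([8.0, 16.0, 24.0, 36.0, 48.0])        # sampling times (h)
counts = np.array([2100., 1700., 1400., 1000., 750.])  # hypothetical cells/uL

# Log-linear least-squares fit: ln(L) = ln(L0) - k * t
slope, intercept = np.polyfit(hours, np.log(counts), 1)
k = -slope              # depletion rate constant (1/h); larger k = higher dose
l0 = np.exp(intercept)  # back-extrapolated pre-exposure count
print(f"k = {k:.4f} /h, extrapolated L0 = {l0:.0f} cells/uL")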

8.5.4 Internal Contamination

The next step in cases of internal contamination should be the swift assessment and sorting of the affected persons, mostly because decontamination efficiency decreases with time. Cases of internal contamination are recognized through a radiation survey, which can detect significant residual and localized (e.g., lungs, thyroid) or distant (e.g., urine, blood, smears, feces) radioactivity [81]. The initial monitoring of internal contamination victims cannot be done without special equipment, such as whole-body counters or thyroid uptake systems [82]. According to the TMT Handbook, the first set of actions during radioactive emergencies should be as follows:

  1. Focus on life-threatening conditions, control of vital functions and hemorrhage, and transfer to emergency medical care facilities.

  2. Perform external contamination monitoring and, if external contamination is detected, decontamination.

  3. Lower the risk of internal contamination, perform initial monitoring of internal contamination to guide further treatment, and prevent further contamination.

  4. Based on the initial monitoring, early clinical judgement should weigh the risks and benefits of treatment of internal contamination by radionuclides. Medical professionals will determine whether medical treatments are needed [75].

Internal contamination does not cause immediate serious or acute effects, nor does it present a time-critical, life-threatening condition before the appropriate lifesaving and decontamination measures can be performed. That said, some radionuclides, such as 210Po or 137Cs, can cause massive internal damage or acute radiation syndrome within a few days after contamination [79, 83].

The victims should ideally have been externally decontaminated by the time they arrive at the medical facility; if this is not the case, the medical staff should be made aware of their condition and take the appropriate measures. Decontamination treatment of internally contaminated victims should start as soon as possible, especially if there is a risk of deterministic effects; however, the accident history and dose estimation should be carefully considered (Table 8.15). The effectiveness of the treatment is determined by the early administration of radionuclide counteragents and the first aid provided, even if radioactive contamination is only suspected. The treatment administered should remove the contaminating radionuclides from the human body using chemical or biological agents, which may reduce their absorption, prevent their incorporation and internal deposition (e.g., chelating agents), or promote their elimination or excretion (e.g., lavage of the oral cavity, nose, conjunctival sac or stomach, use of laxatives or diuretics). Accordingly, most methods of treatment for internal contamination with radionuclides involve isotope blocking, dilution, or displacement, the use of ion exchange resins, and ion mobilization or chelation (Table 8.16) [24].

Table 8.15 Action levels for treatment of radionuclide contamination (reproduced with permission from [75])
Table 8.16 Selected radionuclides and radiation countermeasures for treatment (reproduced with permission from [84])

The measures for internal decontamination are not used to treat acute radiation injury; their main aim is to reduce the risk of stochastic effects, such as radiation-induced tumors, in organs or tissues where radionuclides were incorporated.

8.5.5 Follow-Up and Recovery

Long-term medical monitoring should be carried out for patients suffering from clinical symptoms of acute radiation syndrome or local radiation injuries; however, asymptomatic patients should also be included in long-term follow-up, as should those for whom ionizing radiation exposure is only presumed.

Patients with clear clinical symptoms should be scheduled for long-term follow-up to prevent, control, and care for the health consequences of ionizing radiation exposure. Long-term follow-up means that the patients are checked in regular appointments at specialized clinical departments over a 5-year period to monitor risk factors, health outcomes, or both. Long-term follow-up does not always follow the same scenario; it differs case by case, depending mostly on the development of symptoms of acute radiation syndrome and the received radiation dose. As an example, the logical first step for an affected person is to contact and inform the primary care physician about the radiation exposure incident and, if needed, plan a follow-up program with the physician and specialists at various hospital departments (e.g., hematology, radiotherapy, psychology, internal medicine). If the affected patient recovers from the hematological consequences of acute radiation syndrome, a hematological examination should be conducted every 3 months during the first year, and a routine medical examination once a year. Annual examinations at the ophthalmological clinic are also recommended to monitor possible cataract development. In addition, medical consultation should be offered to exposed victims for mental and reproductive health as and when needed [75].

The benefit of this long-term medical monitoring is the identification of radiation symptoms, and though it may sound daunting, this follow-up does not differ much from that performed for other clinical conditions. It must be noted that patients without symptoms may derive the greatest benefit from long-term medical monitoring. Monitoring makes it possible to classify the individuals at greater risk, and it also enables proper evaluation of diseases that may be found in the population at risk. Although it may be inconvenient for asymptomatic patients, long-term medical monitoring may help in the early diagnosis and treatment of serious radiation-related illnesses, thus minimizing morbidity and mortality. Persons who have been exposed to low doses of ionizing radiation during a radiation emergency and who have not experienced ARS or other immediate symptoms associated with radiation exposure should also be included in long-term follow-up and monitoring, mostly to rule out radiation effects or to monitor for illnesses related to ionizing radiation exposure, which often take the form of cancer. In addition, long-term follow-up may also provide the affected patients with mental health support and reproductive health counseling.

Taken together, the long-term follow-up and medical monitoring of persons affected by radiation emergencies can provide new epidemiological data, since medical follow-up data for potential stochastic effects such as cancer are sparse. However, social, economic, legal, and psychological aspects should be considered in the follow-up and monitoring of these patients [75]. Epidemiological follow-up defines two groups, i.e., exposed and unexposed to radiation, and registers any difference in health outcome; how this is done depends on the exposure scenario. The typical outcome in radiation epidemiology is an increased incidence of cancer or of radiation-related mortality. The most precise and conclusive parameter in an epidemiological study is mortality, owing to its clear and unambiguous ascertainment, supported by records available worldwide. Epidemiological follow-up studies should also include non-malignant morbidity and mortality, parameters known from the A-bomb survivors' life span studies. However, this is not always the main interest of epidemiological follow-up of health outcomes; in many cases there is an interest in diseases that can affect quality of life, such as nonfatal conditions, for example, degenerative tissue diseases [75, 85] (Box 8.4).

Box 8.4 Treatment of Internal Contamination

  • After initial triage for trauma, victims should be monitored and further evaluated in the next step of triage, comprising information about location at the time of the accident and/or radiological analysis based on clinical signs and symptoms.

  • In cases of internal contamination, the affected persons should be assessed and sorted swiftly, because decontamination efficiency decreases with time.

  • Cases of internal contamination are recognized through a radiation survey, which can detect significant residual radioactivity.

  • Internal contamination does not usually cause immediate serious or acute manifestations, nor does it present an immediately life-threatening condition.

  • The treatment administered should remove the contaminating radionuclides from the human body through chemical or biological agents.

8.6 Biodosimetry Techniques

8.6.1 Introduction

Biological dosimetry is an internationally accepted method for the detection and quantification of presumed or suspected exposures of humans to ionizing radiation. On the basis of biomarkers in the peripheral blood, the dose of ionizing radiation to which an individual has been exposed can be estimated. Biological dosimetry can be used in addition to physical dosimetry or as a stand-alone method for dose reconstruction. The traditionally used, well-established cytogenetic assays are predominantly based on the induction and misrepair of radiation-induced DNA double-strand breaks. The analyses are performed on lymphocytes of the peripheral blood, as these circulate throughout the body and are normally in the G0/G1 stage of the cell cycle. Since lymphocytes are not cycling, they need to be stimulated to proliferate during in vitro cell culturing. There are several essential requirements for a biological parameter to be a meaningful dosimeter: a low background level, a clear dose-effect relationship for different radiation qualities and dose rates, specificity to ionizing radiation, non-invasive sample collection, fast availability of the dose estimate, good reproducibility, and comparability of in vitro and in vivo results so that a calibration curve can be established [86].

8.6.2 Conventional Methods

8.6.2.1 Dicentric Chromosomes Assay (DCA)

The analysis of dicentric chromosomes (dic) (Fig. 8.18), with or without the inclusion of centric rings, in lymphocytes of the peripheral blood is a well-established method for dose reconstruction after an acute exposure to ionizing radiation and is therefore considered the "gold standard" in biological dosimetry [87]. After blood collection, lymphocytes are cultured at 37 °C for 48 h and stimulated to enter mitosis using specific mitogens. During mitosis, chromosomes condense and become visible by light microscopy, and dicentric chromosomes can be quantified. In Fig. 8.18, the formation of dicentric chromosomes is shown schematically (a), and a Giemsa-stained metaphase cell is shown in (b) with a dicentric chromosome and an accompanying acentric fragment (ace). Dicentric chromosomes fulfill the essential requirements of a suitable biomarker for the detection of exposure to ionizing radiation. In particular, dicentric chromosomes are almost exclusively caused by ionizing radiation [88]. In healthy, non-exposed individuals, dicentric chromosomes rarely occur spontaneously; the background rate is about 0.5–1 dicentric chromosome per 1000 cells [89]. The method therefore has good sensitivity: the lowest detectable dose of a homogeneous acute whole-body irradiation with low-LET (linear energy transfer) radiation is about 100 mGy when 500–1000 cells are evaluated [90]. It is well accepted that a comparable number of chromosomes is damaged per dose unit for both high- and low-LET radiation in vitro and in vivo [91], enabling dose estimation on the basis of in vitro calibration curves. The dose-effect relationship can be modeled by a linear-quadratic curve (Y = c + αD + βD²) for up to 5 Gy of low-LET radiation and by a linear model (Y = c + αD) for high-LET (alpha or neutron) radiation [92]. In addition, based on the analysis of dicentric chromosomes, a distinction can be made between homogeneous and inhomogeneous exposures and between high- and low-LET exposures [89]. The mean lifetime of lymphocytes with dicentric chromosomes in the peripheral blood is between 0.6 and 3 years [93], largely influenced by both the absorbed radiation dose (low or high) and inter-individual variation in lymphocyte turnover rate [88]. The decline of dicentric-bearing lymphocytes can occur either by cell death or by dilution of damaged lymphocytes with fresh lymphocytes over long periods after radiation exposure. Therefore, if the radiation exposure occurred long ago or was protracted, i.e., delivered over a longer period at a low dose rate, appropriate adjustments must be made to avoid underestimating the dose.

Chromosome analysis is a very labor-intensive method requiring well-trained staff [94]. To increase the throughput of the method for a large-scale accident with many potentially exposed individuals, different approaches have been developed. Scoring 50 cells or 30 dicentrics has been accepted as sufficient in triage scoring to identify those who need immediate medical support [95]. Software-based automated scoring systems for the rapid detection of dicentric chromosomes have been developed and successfully applied in various studies (e.g., [96]). Recent advances in using imaging flow cytometry to identify dicentric chromosomes have demonstrated feasibility, although there is still much room for improvement [97]. An automated, robotically based high-throughput platform (RABiT, Rapid Automated Biodosimetry Tool) has also been designed to enhance the capacity of dicentric chromosome analysis [98].
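As a minimal sketch of the dose estimation step, the linear-quadratic calibration Y = c + αD + βD² quoted above can be inverted for its positive root; the coefficient values below are placeholders, since each laboratory uses its own fitted calibration curve with associated uncertainties:

import math

def dose_from_yield(y: float, c: float, alpha: float, beta: float) -> float:
    """Solve c + alpha*D + beta*D**2 = y for the positive root D (in Gy)."""
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    if disc < 0:
        raise ValueError("observed yield is below the background level")
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# Example: 25 dicentrics scored in 500 cells -> Y = 0.05 dic/cell
print(f"{dose_from_yield(25 / 500, c=0.001, alpha=0.02, beta=0.06):.2f} Gy")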

Fig. 8.18
2 schematic diagrams. a. It demonstrates how the dicentric chromosomes are formed from the normal chromosomes via radiation-induced breaks. Acentric fragments are also formed. b. A structure of several chromosomes and fragments with dicentric and acentric.

(a) Schematic representation of the formation of a dicentric chromosome (dic) after exposure to ionizing radiation with the formation of a chromosome fragment without centromere (ace). (b) Giemsa stained metaphase spread of a human peripheral blood lymphocyte with a dic and ace

8.6.2.2 Cytokinesis-Block Micronucleus (CBMN) Assay

The analysis of micronuclei (MN) in binucleated (BN) peripheral blood lymphocytes is an alternative cytogenetic technique used in biological dosimetry (Fig. 8.19). The assay, originally developed by Fenech and Morley in 1985, restricts MN scoring to first-division cells after inhibition of cytokinesis (cytoplasmic division) by cytochalasin B [99]. Micronuclei are small extranuclear bodies arising from chromosome fragments or whole chromosomes that are excluded from the mitotic spindle and therefore not included in the main daughter nuclei during cell division. Due to the elevated spontaneous frequency of micronuclei (0–40 MN/1000 BN cells) relative to dicentric chromosomes [89], the lowest detectable dose of a homogeneous acute whole-body irradiation with low-LET radiation based on micronuclei analysis is about 200–300 mGy when 500–1000 BN cells are analyzed [88]. The application of fluorescence in situ hybridization (FISH) with a human pancentromeric probe can help determine the origin of MN, based on the presence (presumably whole chromosomes) or absence (chromosome fragments) of a centromeric signal. It is well demonstrated that most radiation-induced MN are centromere negative [100]; the sensitivity of the MN assay in the low-dose range can therefore be increased by this method [101]. MN are less radiation specific than dicentric chromosomes and show greater inter- and intra-individual variability. The rate of MN is influenced by age, sex, and lifestyle, as well as by exposure to other environmental mutagens [102]. The advantage of the method is its simple and quick evaluation, enabling relatively rapid training of inexperienced scorers [103]. Various automated systems are available for MN analysis based on microscopy [104] or flow cytometry [105]. Furthermore, a high-throughput, miniaturized version of the CBMN assay for accelerated sample processing has been described [106]. Several studies have confirmed the reliability of the automated MN assay for high-throughput population triage [107].

Fig. 8.19
4 micrographs of binucleated cells. 1 has a cell with no micronuclei, 2 has a cell with 1 micronucleus, 3 has a cell with 2 micronuclei, and 4 has a cell with 4 micronuclei.

Presentation of binucleated cells including 0, 1, 2 or 4 micronuclei

8.6.2.3 Chromosome Translocation Analysis Using Fluorescence In Situ Hybridization (FISH)

Fluorescence in situ hybridization (FISH) techniques have been in use for many years to identify translocations for retrospective radiation dose assessment of exposed victims [108] (Fig. 8.20). The technique relies on chromosome-specific libraries of fluorescent probes to paint chromosomes in blood lymphocytes, in order to quantify the frequency of chromosome exchanges. In the simplest form of the assay, a cocktail of DNA probes for three human chromosomes, labeled with a single fluorophore, is used to estimate the "genome-equivalent" number of translocations based on the percentage of chromosomal material represented by the stained chromosomes; such a cocktail typically covers 20% or more of the human genome. As translocations are not radiation specific and are predominantly stable within the genome, the expected background number of translocations [109] must be subtracted from the number observed in the suspected irradiated sample. The genome-equivalent translocation rate is then translated into a radiation dose by reference to a pre-determined dose-response curve. The relative stability of translocations does, however, mean that the assay can be used many years post exposure.

Fig. 8.20
A schematic diagram and a radiograph. a. It demonstrates how chromosomal translocation is formed from the normal chromosomes via radiation-induced breaks. b has radiated chromosomes, and the bright and dark shades have arrows.

(a) Schematic representation of the formation of a symmetrical translocation after radiation induced chromosomal breaks. (b) FISH painted metaphase spread of a human peripheral blood lymphocyte with translocations indicated by the arrows
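A minimal sketch of the genome-equivalent correction mentioned above, using the widely applied Lucas formula, under which painting a fraction f_p of the genome makes approximately 2.05 * f_p * (1 - f_p) of all translocations visible; the painted fraction and the count below are illustrative values only:

def genome_equivalent(observed_translocations: float, f_painted: float) -> float:
    """Convert a painted-genome translocation count to genome equivalents."""
    detectable_fraction = 2.05 * f_painted * (1.0 - f_painted)
    return observed_translocations / detectable_fraction

# Example: 12 translocations observed with probes covering ~20% of the genome
print(f"{genome_equivalent(12, 0.20):.1f} genome-equivalent translocations")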

In addition to such single-color painting, genome-wide analysis, known as M-FISH (which does not require adjustment for genome-equivalent damage), allows the detection of all simple interchromosomal exchanges as well as complex rearrangements involving multiple breakpoints in several chromosomes. The use of chromosome-specific multicolor banding probes (mBAND) facilitates the assessment of intra-chromosomal rearrangements such as pericentric and paracentric inversions.

The FISH translocation assay has most commonly been used to estimate radiation doses following external radiation exposures [108]. As noted above, the technique is suitable for dose assessment from days up to many years post exposure; however, it does not work well for partial-body exposure.

The detection limit of FISH for uniform whole-body external low-LET exposures is on the order of 250 mGy; however, this varies with a number of factors, including the number of cells scored, age and smoking status (because translocations are not radiation specific), as well as the length of time post exposure [109]. These issues, together with the time needed to culture the cells in order to visualize the aberrations, are the main limitations of the assay. Automation of FISH analysis is under development but is not yet in common use.

8.6.2.4 The Premature Chromosome Condensation Assay (PCC-Assay)

Lymphocytes are sensitive to radiation, and therefore the use of both DCA and CBMN for exposure doses higher than 4 Gy is somewhat problematic. Especially in radiation accidents involving high doses, the premature chromosome condensation (PCC) assay can be used to quantify radiation-induced chromosomal aberrations directly in unstimulated interphase blood lymphocytes [89]. Specifically, PCC induction in G0 lymphocytes isolated from whole human blood is mainly achieved by fusing them to mitotic Chinese hamster ovary (CHO) cells, using the chemical polyethylene glycol (PEG) as a fusogen [110]. PCC can also be induced in G2 cells by phosphatase inhibitors such as okadaic acid or calyculin A. Unlike cell fusion of unstimulated G0 lymphocytes, the chemically induced PCC method requires stimulation of the lymphocytes through one cell division, because these chemicals induce premature condensation of G2 cells after DNA replication. The PCC method is suitable for the analysis of ring chromosomes, especially at higher doses [111]. To quantify radiation-induced chromosomal aberrations in G0-phase lymphocytes using the fusion PCC assay, the total number of single-chromatid PCC elements per cell in the exposed lymphocytes is recorded (Fig. 8.21a), and the yield of radiation-induced excess PCC fragments is estimated by subtracting the 46 PCC elements expected in non-irradiated lymphocytes (Fig. 8.21b). The dose assessment is based on a dose-response calibration curve generated by in vitro irradiation of unstimulated blood lymphocytes. These curves are linear, and the residual yield of excess fragments depends on the repair time elapsed between irradiation and cell fusion.

Fig. 8.21
2 schematic representations of the chromosome irradiated with and without gamma rays. A has several chromosomes and 14 P C C fragments. They are small or tiny. B has chromosomes and 46 single chromatid P C C fragments.

(a) Prematurely condensed single chromatid chromosomes following gamma irradiation to 4 Gy as visualized using the PCC assay and lymphocyte fusion to a mitotic CHO cell. Fourteen excess PCC fragments can be scored (shown by arrows). (b) Non-irradiated G0-lymphocyte PCCs demonstrating 46 single chromatid PCC elements. (Reproduced with permission from [112])

Overall, the fusion PCC assay allows rapid assessment of the radiation dose, even within 3 h post irradiation, and can successfully distinguish between whole- and partial-body exposures [113]. Furthermore, when the PCC-assay is combined with the fluorescence in situ hybridization (FISH) technique, inter- and intra-chromosomal rearrangements can be analyzed directly in G0 lymphocytes for radiation biodosimetry purposes and retrospective assessment of radiation-induced effects [114]. Finally, a quick, automatable, and minimally invasive micro-PCC assay was recently proposed for rapid individualized risk assessments in large-scale radiological emergencies [115]. However, the PCC assay requires the availability of either fresh or frozen mitotic cells [116] and expertise in cell fusion procedures and analysis of lymphocyte prematurely condensed chromosomes. Due to these limitations, the test is still not widespread.
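A minimal sketch of the fusion-PCC quantification described above: the excess fragment yield is the mean number of single-chromatid PCC elements per cell minus the 46 expected in unirradiated G0 lymphocytes, converted to dose via an assumed linear calibration. The slope below is a placeholder chosen only to be consistent with Fig. 8.21a (14 excess fragments after 4 Gy), not a published coefficient:

EXPECTED_ELEMENTS = 46  # PCC elements in a normal, unirradiated G0 lymphocyte

def pcc_dose(mean_elements_per_cell: float, fragments_per_gy: float) -> float:
    """Estimate dose (Gy) from excess PCC fragments with a linear calibration."""
    excess = max(mean_elements_per_cell - EXPECTED_ELEMENTS, 0.0)
    return excess / fragments_per_gy

# Example: 60 elements/cell on average, assuming ~3.5 excess fragments per Gy
print(f"{pcc_dose(60, 3.5):.1f} Gy")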

8.6.3 Molecular Methods

8.6.3.1 Gamma-H2AX Foci Assay

The radiation-induced gamma-H2AX foci assay can be used to detect and quantify DNA double-strand breaks indirectly, using a phospho-specific antibody against the histone variant H2AX [30] (Fig. 8.22). In addition, the potential for rapid, high-throughput batch processing [117, 118] makes the foci assay well suited for early triage categorization, to quickly identify patients who may be at risk of developing acute radiation syndrome and to help prioritize samples for the more established biodosimetry methods such as the dicentric assay.

Fig. 8.22
A schematic diagram and a microscopic image. a. It demonstrates how the phosphorylation of H 2 A X is formed at the damaged side from the normal D N A via double-strand breaks. B has non-irradiated and irradiated cells, with dots throughout the cell upon irradiation.

(a) Schematic representation of the formation of gamma-H2AX foci. Following radiation-induced DNA breakage, the free DNA ends are labeled by the phosphorylation of H2AX, which can be visualized and quantified using immunofluorescence antibodies. (b) Gamma-H2AX foci in human blood lymphocytes following exposure to 0 or 1 Gy X-rays following a post-exposure incubation for 1 h (40× magnification fluorescence microscopy images showing gamma-H2AX foci in green and DNA counterstain in blue)

The advantage of the gamma-H2AX assay is that a dose estimate, based on foci levels in peripheral blood lymphocytes, can be given within 5 h of receipt of a blood sample [117]. Background levels of mean foci per cell are low (~0.3 or less [119]), and gamma-H2AX foci increase linearly with dose. However, foci loss follows the time course of DNA double-strand break repair [120], and the time between radiation exposure and blood sampling greatly affects the observed yield of foci. To enable reliable exposure assessment, calibration curves for different post-exposure time points are essential [121]. The rapid loss of gamma-H2AX foci requires a blood sample to be taken within 1–2 days of a radiation exposure, with the minimum detectable dose increasing from ~1 mGy [122] for a sample taken within 1 h after exposure to ~0.5 Gy for a lag time of 2 days between exposure and sampling [123]. Use of two separate foci biomarkers, for example gamma-H2AX and 53BP1 with dual-color immunostaining, can enhance sensitivity for low-dose exposure by scoring only coinciding foci, reducing the influence of staining artefacts [123]. Manual scoring of gamma-H2AX foci is the preferred method, as it gives smaller uncertainties in the dose estimate than automated scoring techniques [124] or flow cytometry [123]. However, gamma-H2AX flow cytometry imaging has the potential to be a very rapid, high-throughput tool suitable for analyzing large numbers of samples [125]. The data analysis of foci counts for calibration curve fitting, dose estimation and confidence interval calculation can be performed in the same manner as in conventional chromosome dosimetry. Some evidence suggests that the distribution of gamma-H2AX foci among the scored cells is Poisson and can be used to estimate partial-body exposure using the methods developed for the dicentric assay, although over-dispersion has been observed in other data sets [124].
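A minimal sketch of such a calibration-based estimate, assuming a linear foci-per-cell response at a known post-exposure time point; the slope is a placeholder (calibration curves are time-point and laboratory specific), while the ~0.3 foci/cell background is the figure quoted above:

BACKGROUND_FOCI = 0.3  # approximate mean foci/cell in unexposed lymphocytes

def h2ax_dose(mean_foci_per_cell: float, foci_per_cell_per_gy: float) -> float:
    """Estimate dose (Gy) from background-subtracted mean foci counts."""
    excess = max(mean_foci_per_cell - BACKGROUND_FOCI, 0.0)
    return excess / foci_per_cell_per_gy

# Example: 12.3 foci/cell scored 1 h post exposure, assuming 12 foci/cell/Gy
print(f"{h2ax_dose(12.3, 12.0):.2f} Gy")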

Radiation quality and time- and dose-dependent changes in gamma-H2AX foci numbers need to be considered when converting foci yields into dose estimates. The rapid loss of foci following irradiation, together with other factors influencing the assay methodology (e.g., sample shipment conditions, staining reproducibility), means that gamma-H2AX-based dose estimates may currently carry large uncertainties, especially if the exact time between exposure and blood sampling is unknown. Given this, the assay's main function is that of a qualitative indicator of exposure rather than a precise dosimetry tool.

8.6.3.2 Gene Expression

A relatively new method for biological dosimetry is the analysis of changes in gene expression. In response to ionizing radiation exposure, cells activate multiple signal transduction pathways that trigger cell cycle arrest and induce DNA repair mechanisms in order to prevent apoptosis. These radiation-responsive alterations in the transcriptome can be quantified by molecular analysis, which has lately been exploited for biological dosimetry [126]. Global discovery platforms are first used to search for appropriate marker genes useful for biodosimetric applications: the expression values need to be measurable in a relevant dose range and to exhibit a linear dose-response relationship, so that the level of the respective gene can be assigned to a specific radiation dose. These studies focus mainly on human peripheral lymphocytes, which are also the material of choice for classical biodosimetry methods, owing to their sensitivity and specificity to ionizing radiation and the possibility of minimally invasive collection. The gene response can be monitored either by quantitative real-time polymerase chain reaction (qRT-PCR), which accurately quantifies single genes, or by microarrays, which provide a global-scale analysis [127]. Using ex vivo irradiated lymphocytes, the assay has been reported to be sensitive and linearly dose dependent from about 100 mGy up to 5 Gy of whole-body irradiation [128].
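For the qRT-PCR readout, relative expression is commonly quantified with the standard 2^(-ΔΔCt) method; the sketch below applies it to a radiation-responsive target gene normalized to a reference gene, with all Ct values hypothetical:

def fold_change(ct_gene_exposed: float, ct_ref_exposed: float,
                ct_gene_control: float, ct_ref_control: float) -> float:
    """Relative expression, exposed vs. control sample, by the 2^-ddCt method."""
    dd_ct = ((ct_gene_exposed - ct_ref_exposed)
             - (ct_gene_control - ct_ref_control))
    return 2.0 ** (-dd_ct)

# Example: the target gene's Ct drops relative to the reference after exposure
print(f"fold change: {fold_change(24.1, 18.0, 26.6, 18.2):.1f}x")

Assigning such a fold change to a dose then requires a gene-specific calibration curve of the kind described above.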

Currently, the use of gene expression analysis in dosimetry is still experimental. Several genes have already emerged as useful biomarkers, with ferredoxin reductase (FDXR) being the most promising one [129]. Because ionizing radiation activates a very complex molecular response, estimating the dose from changes in the expression of a single gene is not optimal; the use of panels of multiple radiation-sensitive genes is more promising for improving the accuracy of the estimate [87]. Many researchers are currently working on defining such a gene signature in order to apply gene expression to biological dosimetry. Some studies have also identified genes associated with specific ARS effects [130].

The advantage of gene expression-based methods over other biodosimetric methods is rapid radiation dose estimation and high sample throughput, which is particularly advantageous for large populations. However, due to the dynamics of gene expression, dose estimation is only possible within a relatively short time frame after exposure. In addition, health status, age, and sex are known to influence radiation-induced changes in gene expression, so individualized gene expression-based dosimetry models may need to be developed for different population subgroups. It has also not been fully clarified how to infer different radiation qualities or more complex exposure scenarios, such as partial-body irradiation, from changes in gene expression. Although no universal standardization of gene expression analysis for biological dosimetry is currently available, research is ongoing, and the analysis of gene expression profiles appears to hold great potential to support individual dose estimation, especially in large-scale radiation accidents (Box 8.5).


Box 8.5 Methods for Biological Dosimetry

  • In biological dosimetry biomarkers are used to verify exposure to ionizing radiation and to estimate the absorbed dose.

  • The analysis of dicentric chromosomes is considered the "gold standard" of biological dosimetry after an acute radiation exposure.

  • Depending on the radiation scenario, other cytogenetic methods are available (CBMN, FISH, PCC).

  • Relatively new methods at the molecular level are the gamma-H2AX foci assay and the analysis of changes in gene expression.

8.7 Radiation Protection System/Risk Coefficients, Organ Weighting Factors, and Dose Limits

8.7.1 Introduction: History

Following the discovery of ionizing radiation and its rapid introduction and exploitation in medicine, technology, and industry, the limited knowledge about its detrimental health effects led to the exposure of many individuals to high doses and, subsequently, to various radiation-related health effects such as cancer [131, 132].

Guidelines for radiation protection started as early as the 1890s, with more detailed dose limits being released as more research was published. As the detrimental effects of radiation became better known, the need for a cohesive set of guidelines and regulations became apparent. At the second International Congress of Radiology, held in Stockholm in 1928, a new unit was proposed for quantifying ionizing radiation, specifically for the purpose of radiation protection. The unit was named the röntgen, after the discoverer of X-rays. It was also during this congress that the International X-Ray and Radium Protection Committee (IXRPC) was founded, which would later become the International Commission on Radiological Protection (ICRP). The first dose limit recommendation by the IXRPC came in 1934; it stated that a person in normal health can tolerate 0.2 roentgen of X-rays per day. This corresponds to an annual effective dose of approximately 500 mSv, a dose 25 times higher than the current annual dose limit for occupational workers. No dose limit recommendation was given for γ-rays at this point.

Separately from the IXRPC, a document outlining many protective methods and techniques for shielding against the harmful effects of radiation was published in the 1930s. This report was commissioned by what would later become the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR); its comments on permissible dose were, however, vague. In the following years, several recommendations were made by various bodies. Around this time, terminology also began to change, and the previously used "tolerance dose" became the "maximum permissible dose." In 1946, the US advisory committee was re-established as the National Council on Radiation Protection and Measurements (NCRP) and amended its initial recommendations to a maximum permissible dose of 0.05 roentgen/day, expressed at that time as 0.3 roentgen/week [133]. This reduction was largely due to growing evidence of the hereditary harm of radiation. The new guideline was echoed by the ICRP in 1950, when it likewise proposed a weekly maximum permissible dose of 0.3 roentgen. Its previous 1934 recommendation of 0.2 roentgen/day, corresponding to about 1 roentgen/week, was by then seen as too close to the threshold for adverse effects [134].

The first publication by the ICRP came in 1955. Here, a clear distinction was made between the levels allowed for public and occupational exposure: the public exposure allowance was reduced by a factor of 10 from what was allowed for occupational exposure, and recommendations on permissible doses were given for various organs. New units were also introduced, with the rad (corresponding to 0.01 Gy) used for absorbed dose and the rem (corresponding today to 0.01 Sv) as the RBE-weighted unit [135]. In 1958, the ICRP published what is now known as "Publication 1". The concept of a weekly dose limit was abandoned, and the new annual occupational dose limit was 5 rem, with a public limit of 0.5 rem/year (50 and 5 mSv, respectively) [136].

Terminology and units were revised once again in 1977, in Publication 26. The sievert replaced the rem, and the effective dose equivalent was introduced. More emphasis was placed on cost-benefit assessment, and the concept of radiation health detriment was introduced, along with three general rules for the use of radiation: justification, optimization, and individual dose limitation. The term maximum permissible dose was replaced by dose limit; however, no changes were made to the numerical guidelines, with a dose limit of 50 mSv remaining for occupational workers and a public dose limit of 5 mSv [137]. In 1991, Publication 60 reduced the occupational dose limit from 50 mSv to 20 mSv/year, averaged over 5 years. Public exposure was now limited to 1 mSv/year, with higher exposure levels being permissible as long as the annual average over a span of 5 years did not exceed 1 mSv. A radiation weighting factor was introduced, and the quantity dose equivalent was replaced by the quantity equivalent dose. As tissue weighting factors were now available for many more organs, the effective dose equivalent was also replaced by the effective dose [3]. The latest recommendations were issued in 2007 (Publication 103), updating, consolidating, and developing additional guidance on protection from radiation sources. One of the main characteristics of Publication 103 is that it evolves from the previous process-based protection approach (practices and interventions) to an approach based on the exposure situation (planned, emergency, and existing exposure situations). The radiation and tissue weighting factors, effective dose, and detriment are updated based on the most recent scientific data available. Finally, ICRP 103 also addresses the radiological protection of the environment [34].

8.7.2 Organ Weighting Factors, Risk Coefficients, and Dose Limits

Ionizing radiation can have severely damaging effects on the human body. These harmful effects can be classified into two general categories: deterministic effects and stochastic effects. Deterministic effects are due to the killing or malfunction of cells after exposure to high radiation doses. Stochastic effects refer to either cancer or hereditary effects, due to mutations of somatic cells or germ cells, respectively.

Deterministic effects are manifested when the dose exceeds the threshold for a given effect [85] and appear mostly after high radiation doses. These thresholds are essential for preventing the risk of morbidity in specific cell populations and of overall mortality [32]. Different tissues generally have different dose thresholds for deterministic effects, which depend on the radiosensitivity of their cells and the functional reserve of the tissue.

Depending on the absorbed dose and on the type and energy of the radiation source, the equivalent dose (HT, expressed in sievert, Sv) can be calculated for individual organs. The equivalent dose HT is the absorbed dose in tissue or organ T weighted for the type and quality of radiation R.

It is defined by the following equation:

$$ {H}_{T,R}={w}_R{D}_{T,R} $$
(8.1)

where DT,R is the absorbed dose averaged over tissue or organ T due to radiation R, and wR is the radiation weighting factor. wR is a dimensionless factor that reflects the relative biological effectiveness of radiations of different qualities. The wR values recommended in ICRP 103 are shown in Table 8.17.

Table 8.17 Radiation weighting factors, as defined in the ICRP 103. All values relate to the radiation incident on the body or, for internal sources, emitted from the source (reproduced with permission from [34])

When the radiation field is composed of types and energies with different values of wR, the total equivalent dose, HT, is given by:

$$ {H}_T=\sum \limits_R{w}_R{D}_{T,R} $$
(8.2)
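As a worked illustration of Eqs. (8.1) and (8.2), the short Python sketch below computes the total equivalent dose for a hypothetical mixed field of photons (wR = 1) and alpha particles (wR = 20, per ICRP 103, Table 8.17); the organ doses themselves are invented for the example.

```python
# Equivalent dose H_T for a mixed radiation field (Eqs. 8.1 and 8.2).
# Radiation weighting factors per ICRP 103; absorbed doses are
# illustrative only.
field = [
    {"radiation": "photons", "w_R": 1,  "D_T_Gy": 0.010},   # 10 mGy
    {"radiation": "alpha",   "w_R": 20, "D_T_Gy": 0.001},   #  1 mGy
]

# H_T = sum over R of w_R * D_T,R  -> result in sievert (Sv)
H_T = sum(component["w_R"] * component["D_T_Gy"] for component in field)
print(f"H_T = {H_T * 1000:.1f} mSv")   # 10*1 + 1*20 = 30 mSv
```

The example makes the role of wR visible: 1 mGy of alpha radiation contributes twice as much equivalent dose as 10 mGy of photons.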

Stochastic effects are characterized by having no known dose threshold; they include cancer and hereditary disorders. Stochastic effects can represent a serious risk even at low doses of ionizing radiation, especially if the dose exceeds 100 mSv. The risk of their induction is quantified through the effective dose absorbed by the whole organism; the effective dose is related to the health detriment caused by stochastic effects. Because tissues differ in their sensitivity to radiation, a tissue weighting factor (wT) has been defined. wT is the factor by which the equivalent dose in a tissue or organ T is weighted to represent the relative contribution of that tissue or organ to the total health detriment resulting from uniform irradiation of the body [34]. The factors are normalized such that \( \sum \limits_T{w}_T=1 \).

The effective dose (E) can be calculated as the sum of the weighted equivalent doses in all the tissues and organs of the body from internal and external exposure. It is defined by:

$$ E=\sum \limits_T{w}_T{H}_T=\sum \limits_T{w}_T\sum \limits_R{w}_R{D}_{T,R} $$
(8.3)

where DT,R is the absorbed dose averaged over tissue or organ T due to radiation R, wR is the radiation weighting factor, and wT is the tissue weighting factor for tissue or organ T (Table 8.18).

Table 8.18 Tissue weighting factor (wT) values (reproduced with permission from [34])
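To make Eq. (8.3) concrete, the following sketch sums weighted organ equivalent doses for a hypothetical, non-uniform exposure using three ICRP 103 tissue weighting factors (lung and stomach 0.12, thyroid 0.04; see Table 8.18). The organ doses are invented, and all unlisted organs are assumed to be unexposed.

```python
# Effective dose E (Eq. 8.3) from organ equivalent doses H_T.
# w_T values per ICRP 103 (Table 8.18); organ doses are hypothetical.
w_T      = {"lung": 0.12, "stomach": 0.12, "thyroid": 0.04}
H_T_mSv  = {"lung": 5.0,  "stomach": 2.0,  "thyroid": 10.0}

# E = sum over T of w_T * H_T; organs not listed receive zero dose here.
E = sum(w_T[t] * H_T_mSv[t] for t in w_T)
print(f"E = {E:.2f} mSv")   # 0.12*5 + 0.12*2 + 0.04*10 = 1.24 mSv
```

Note how a 10 mSv equivalent dose to the thyroid contributes less to E than a 5 mSv dose to the lung, reflecting the lower tissue weighting factor.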

The effects of ionizing radiation, mostly as a health hazard, have been studied for several decades. In this regard, the term nominal risk coefficient has been introduced. These coefficients give the probability of incidence of stochastic effects per unit radiation dose and depend on age, sex, and lifetime-averaged risk, among other radiobiological factors. In ICRP Publication 103, the nominal risk coefficient for cancer after exposure to ionizing radiation was estimated at 5.5% and 4.1% per sievert (Sv) for the general population and for adult workers, respectively. The nominal risk for hereditary damage was estimated at 0.2% and 0.1% per sievert for the general population and for adult workers, respectively (Table 8.19).

Table 8.19 Nominal risk coefficients for cancer and hereditary effects (10−2 Sv−1) (reproduced with permission from [34])
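As a purely illustrative application of these coefficients, ignoring their age and sex dependence, a whole-body effective dose of 100 mSv to a member of the general population would correspond to a nominal additional cancer risk of about

$$ 0.1\ \mathrm{Sv}\times 5.5\times {10}^{-2}\ {\mathrm{Sv}}^{-1}\approx 0.55\% $$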

As previously mentioned, people are exposed to ionizing radiation from natural and artificial sources throughout their lives. For this reason, dose limits were implemented, seeking to prevent deterministic effects and to reduce the risk of stochastic effects. These dose limits apply only to situations of planned exposure and to doses above the normal natural background radiation. They are not applied in the medical field, so as not to hamper the effectiveness of diagnosis or treatment. Dose limits are applied to two main groups of exposed individuals: (1) occupationally exposed workers and (2) the public (Table 8.20) [85]. Dose limits are strictly regulated to ensure that no one is exposed to an excessive amount of radiation in planned or normal situations.

Table 8.20 Recommended dose limits in planned exposure situations (reproduced with permission from [34])
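As a simple illustration of how the occupational limit is applied in practice, the sketch below checks a set of hypothetical annual dose records against the 20 mSv/year limit averaged over 5 years, together with the ICRP provision that no single year exceed 50 mSv. The dose records are invented for the example.

```python
# Hypothetical occupational dose records (effective dose, mSv/year).
annual_doses_msv = [18.0, 24.0, 15.0, 22.0, 19.0]

# ICRP occupational limit: 20 mSv/year averaged over 5 years,
# with no single year exceeding 50 mSv (see Table 8.20).
five_year_avg = sum(annual_doses_msv) / len(annual_doses_msv)
worst_year = max(annual_doses_msv)

print(f"5-year average: {five_year_avg:.1f} mSv/year")
if five_year_avg <= 20.0 and worst_year <= 50.0:
    print("Records comply with the occupational dose limits.")
else:
    print("Occupational dose limits exceeded.")
```

Note that individual years above 20 mSv (here, 24 and 22 mSv) can still comply, provided the 5-year average and the single-year cap are respected.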

8.8 Exercises and Self-Assessment

  1. Q1.

When radon is inhaled, the largest dose is found at the level of the

    1. (a)

      Mouth

    2. (b)

      Bronchial wall

    3. (c)

      Bifurcation of the trachea

    4. (d)

      Alveoli

  2. Q2.

    When is the initial triage used?

  3. Q3.

    How many categories of priority are used in initial triage?

  4. Q4.

    What methods of treatment are used for internal contamination with radionuclides?

  5. Q5.

In the hematopoietic syndrome, death is mainly due to

    1. (a)

      Anemia

    2. (b)

      Dyspnea

    3. (c)

      Hemorrhage and infection

    4. (d)

      Thrombopenia

  6. Q6.

In the gastrointestinal syndrome, death is mainly due to

    1. (a)

      Anemia

    2. (b)

      Fever

    3. (c)

      Vomiting

    4. (d)

      Circulatory collapse

  7. Q7.

The LD50 for humans, without medical support measures, is about

    1. (a)

      1 Gy

    2. (b)

      4 Gy

    3. (c)

      10 Gy

    4. (d)

      50 Gy

  8. Q8.

Discuss the most appropriate biodosimetric assay(s) for use in a suspected case of radiation exposure which occurred approximately:

  1. (a)

    12 h ago

  2. (b)

    1 month ago

  3. (c)

    1 year ago

8.9 Exercise Solutions

  1. SQ1.

    c

  2. SQ2.

Initial triage is used to screen patients with severe injuries after a mass biological, chemical, radiological, or nuclear event.

  3. SQ3.

    Four categories of priority.

  4. SQ4.

Isotope blocking, dilution or displacement, the use of ion exchange resins, and ion mobilization or chelation.

  5. SQ5.

    c

  6. SQ6.

    d

  7. SQ7.

    b

  8. SQ8.

    The data in Sect. 8.6 should be referred to in order to formulate a full answer, based on the scenario of exposure. However, the short answers are: (a) gamma-H2AX and dicentric assays; (b) dicentric or CBMN assays; (c) FISH translocation assay.