8.1 Introduction

In the previous chapters, we have seen that the detection of ionising radiation in the end nearly always comes down to detecting some small electrical signal. Dealing with such small signals is one of the main challenges in designing detectors for nuclear physics and particle physics. Photomultiplier tubes and gas amplification detectors such as Geiger tubes are often used because of their built-in signal amplification mechanism and therefore larger electrical pulses. However, in many detector types there is no such built-in amplification mechanism.

In the present chapter, we will explain the basics of nuclear electronics and discuss the main sources of noise. In Sect. 8.2, we will briefly discuss some important concepts of signal theory. These concepts will be needed throughout the rest of the chapter. I do assume that the reader has at least a basic knowledge of general electronics and circuit theory. In particular, I assume that she or he is familiar with the concept of complex impedance.

A detector in nuclear electronics is always some device with a large resistance. The interaction of ionising radiation induces a small electrical current. From the electrical point of view, a detector is a current source with a large internal resistance and a small capacitance. This is illustrated in Fig. 8.1. Also, in the absence of any ionising radiation there is a small current, which is called the dark current or leakage current depending on the physical mechanism causing it.

Fig. 8.1

In the electric circuit the detector behaves like a current source with a capacitance and an internal resistance. The two intersecting circles represent a current source

There are basically two different modes for measuring nuclear detector signals: current mode and pulse mode. In the current mode, one simply measures the total current of the detector and ignores the pulse nature of the signal. This is simple, but does not allow one to take advantage of the timing and amplitude information present in the signal. In the pulse mode, one observes and counts the individual pulses generated by the particles. The pulse mode always gives superior performance but cannot be used if the rate is too large.

In many detectors the amplitude of the pulses is proportional to the initial charge signal and the arrival time of the pulse is some fixed time after the physical event. By using appropriate thresholds, one can select and count only those pulses that one wants to count. Often the ‘good events’ are characterised by some specific signal amplitude or by the simultaneous presence of two (or more) signals in different detectors. Sometimes also the ‘good events’ are characterised by the absence of some other signal. Finally, in the pulse mode, one can register a pulse height spectrum and such a spectrum contains a large amount of useful information.

The basic principle of pulse counting is illustrated in Fig. 8.2. The electronics has a threshold that should be well above the noise present in the signal. If the signal is below the threshold, the output of the circuit is ‘zero’. As soon as the signal level exceeds the threshold, the output of the circuit is ‘one’. The words ‘zero’ and ‘one’ should not be understood as meaning actually zero volt and one volt, but rather as voltage levels that have the meaning ‘zero’ and ‘one’. The number of events can now be obtained with some simple counting circuit. In this process, one should be aware that the setup could be inefficient: sometimes a real event in the detector does not produce a pulse large enough to exceed the threshold, or a suitable signal was produced but the electronics did not recognise it because it arrived at the same time as some other event. This last effect is referred to as dead time.
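The threshold logic described above can be sketched numerically. The following Python snippet (all waveform values are hypothetical, invented for illustration) counts the rising edges of a sampled signal crossing a threshold, so that each pulse is counted exactly once:

```python
import numpy as np

# Hypothetical sampled detector waveform: baseline noise plus two pulses.
rng = np.random.default_rng(0)
signal = 0.02 * rng.standard_normal(1000)   # noise, sigma = 0.02
signal[200:210] += 1.0                      # first pulse
signal[600:610] += 0.7                      # second pulse

threshold = 0.5                             # well above the noise level
above = signal > threshold
# Count rising edges only, so that a pulse staying above threshold
# for several samples is registered as a single event.
n_events = int(np.count_nonzero(above[1:] & ~above[:-1]))
```

With a threshold well above the noise, both pulses are counted and no noise sample triggers the discriminator.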

Fig. 8.2

A discrimination circuit has an analog input signal and a digital output signal. If the input signal exceeds some fixed threshold, a digital output signal is generated

To obtain a pulse height spectrum the electronics will search for the maximum of the signal in some pre-defined window around the pulse and the value of this maximum is digitised and sent to a computer. The computer stores the values for the maxima of a large number of pulses and displays the result as a histogram.
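As a sketch of this procedure, the following Python fragment (with invented pulse maxima drawn from a hypothetical Gaussian peak at 1.0 V) accumulates the maxima in a histogram and locates the peak:

```python
import numpy as np

# Hypothetical pulse maxima: a single peak at 1.0 V with Gaussian smearing.
rng = np.random.default_rng(1)
maxima = rng.normal(loc=1.0, scale=0.05, size=10_000)

# The computer accumulates the maxima in a histogram: the pulse height spectrum.
counts, edges = np.histogram(maxima, bins=100, range=(0.0, 2.0))
peak_bin = int(np.argmax(counts))
peak_position = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])   # close to 1.0 V
```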

To see if an event occurred simultaneously with some other event, the electronics will look for the simultaneous presence of two logical signals within some time window, as illustrated in Fig. 8.3. In coincidence counting, one should be aware of the possibility of random coincidences. These are occurrences of a coincidence caused by two unrelated events arriving by chance at the same time. It is easy to see that the rate of random coincidences between two signals is proportional to the rate of each type of signal times the duration of the coincidence window.

$$\dfrac{{dN_{\rm random} }}{{dt}} = \dfrac{{dN_1 }}{{dt}} \dfrac{{dN_2 }}{{dt}}\,\Delta t$$
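As a numerical illustration of this formula (the singles rates and window length below are hypothetical):

```python
# Hypothetical singles rates and coincidence window.
rate_1 = 1.0e3       # dN1/dt, in Hz
rate_2 = 1.0e3       # dN2/dt, in Hz
window = 100.0e-9    # coincidence window Δt, in seconds

# dN_random/dt = (dN1/dt)(dN2/dt) Δt
random_rate = rate_1 * rate_2 * window   # 0.1 Hz
```

Two 1 kHz channels with a 100 ns window thus give only 0.1 random coincidences per second, which illustrates why narrow coincidence windows are so effective at suppressing background.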
Fig. 8.3

A coincidence circuit has two digital input channels and one digital output channel. If the two input signals have some overlap in time, the two signals are said to be in coincidence and a digital output signal is generated

The main challenge in detector electronics is distinguishing the small signals from the noise. In detector systems, noise is any random signal that is not due to the physical process one intends to measure. If the noise is only present at the end of the electronics readout chain it is not a problem. One only needs to amplify until the signal is larger than the noise. However, if the noise is already present at the front-end part, at the level of the detector itself, amplification does not help since the noise is also amplified.

There are many possible causes of noise. Some of these can be reduced to arbitrarily low levels by careful design of the measurement system. A good example of such reducible noise is the pick-up noise. Some of the ubiquitous electromagnetic radiation can be captured by the front-end part of the measuring device, is amplified and is present as noise in the output signal. This noise can be due to external devices unrelated to the measuring system being used, but is often caused by the electronics of the detector itself. The digital part of the electronics and the readout computer are often sources of noise. Some level of pick-up noise is nearly always present in the measurement systems. One of the main technical difficulties in designing nuclear electronics is keeping the pick-up noise under control. The main method by which to achieve this is by enclosing the detector in a Faraday cage. A Faraday cage is simply a box made out of a good conductor, usually copper. A Faraday cage is very effective in suppressing pick-up noise. However, there are always lines entering the Faraday cage, for example, power lines or signal output lines, and particular care must be taken to avoid noise from entering the cage with such lines. Some commonly used methods to achieve this are illustrated in Fig. 8.4.

Fig. 8.4

Some methods commonly used for preventing the noise from entering a Faraday cage through signal cables or power supply lines

Consider the high-voltage input line shown in Fig. 8.4. One can think of the noise as an unwanted pulse travelling on this line. Pick-up noise is usually a high-frequency problem, and we therefore assume that it is high-frequency signals that need to be suppressed. If no protective measures are taken, a noise signal on the high-voltage line will arrive on the electronics board. Stray capacitances on the electronics board will inject a small fraction of this noise pulse into the input of the amplifier. Figure 8.4 shows how this can be avoided. If the high-voltage line is connected to the Faraday cage by a large capacitance, the amplitude of the noise pulse is attenuated in the ratio of the impedances. With a proper choice of the values of R and C, this strongly suppresses the noise.

$${\rm{Noise\,\,attenuation}} = \left| {\dfrac{{\dfrac{1}{{j\omega C}}}}{{R + \dfrac{1}{{j\omega C}}}}} \right| \approx \dfrac{1}{{\omega RC}}.$$
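A quick numerical check of this attenuation formula, with hypothetical filter values, shows that the approximation 1/(ωRC) is excellent once ωRC ≫ 1:

```python
import math

# Hypothetical filter values on the high-voltage line.
R = 10.0e3        # ohm
C = 10.0e-9       # farad
f = 1.0e6         # 1 MHz noise component
omega = 2.0 * math.pi * f

z_c = 1.0 / (omega * C)              # |1/(jωC)|
exact = z_c / math.hypot(R, z_c)     # |Z_C| / |R + Z_C|
approx = 1.0 / (omega * R * C)       # valid when ωRC >> 1; here ωRC ≈ 628

rel_dev = abs(exact - approx) / approx
```

With these values a 1 MHz noise component is attenuated by almost three orders of magnitude.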

Noise can also enter the Faraday cage through the signal output lines. One of the many possible ways to avoid this problem is illustrated in Fig. 8.4. In this solution, we use a differential output line. This means that the signal and its opposite are sent on the two different lines. These two signals then go through two self-inductances that are wound together with the windings in the same direction. Because the currents in the differential line are always opposite, these coupled self-inductances have no effect on the signal. Any pick-up noise, however, is the same on both lines, so the noise sees a real self-inductance with a large impedance. If we connect one side of the self-inductance to ground with a resistor that is not too large, we again attenuate any noise signal entering the Faraday cage. Usually the noise filters are not connected to the wall of the Faraday cage as shown in Fig. 8.4, but to the ground plane of the electronics board. This ground plane itself is connected to the Faraday cage by a low-impedance connection.

Other sources of noise can never be completely eliminated. It is necessary to understand these sources of noise and to minimise their influence on the measurement. The main sources of irreducible noise are the thermal noise of the resistors and the shot noise. The main task in designing nuclear electronics is optimising the signal-to-noise ratio and making the correct compromises for this. The designer of the detectors must also understand the implications of this to find the best detector for the problem at hand. Sections 8.4, 8.5 and 8.6 are devoted to a study of these noise effects. The discussion here follows the presentation of this subject in [1, 2].

Before we go into a more detailed analysis, I want to point out the basic reason why electronic amplification is always accompanied by noise. Consider the amplifier schematically represented in Fig. 8.5.

Fig. 8.5

Schematic representation of a detector and its amplifier

The detector generates some small current and with the capacitance of the detector this determines the voltage seen at the input of the amplifier. This voltage modulates the resistance of an amplifying device (usually a transistor) and this change in resistance changes the current in the output circuit and gives an output voltage over the load resistor. As we will show later, a resistor always has noise for fundamental physical reasons; therefore, this amplifier unavoidably introduces noise. We need to optimise things in such a way as to minimise the effect of this noise. It is immediately clear from the above that the capacitance of the detector should be kept small. The signal-to-noise ratio does not degrade inversely proportional to the capacitance as the above argument seems to suggest, but rather inversely proportional to the square root of the capacitance. The reason for this will become clear later.

8.2 Impulse Response and Transfer Function

To present a quantitative discussion of the electronic noise, we need some elements of signal theory and in particular the concepts of impulse response and transfer function. These concepts are introduced in the present section. Any amplifier, and more generally any electronic circuit, has an input impedance and an output impedance, as illustrated in Fig. 8.6.

Fig. 8.6

The input of an amplifier behaves as an impedance and the output as a voltage source with an impedance in series

This means that, if the input of the amplifier is part of some electronic circuit, it will behave as an impedance Z in. Similarly, if the output of the amplifier is part of some electronic circuit, it will behave as an impedance Z out in series with a current or voltage source. Note that these impedances are in general complex, frequency-dependent functions.

We now need to introduce the concept of ‘linear circuit’ and discuss the main properties of such circuits.

Definition of a linear circuit (see Fig. 8.7)

Fig. 8.7

A linear circuit has an input and an output line. It is linear if the signals satisfy the properties listed in the text

Let V out1 be the output signal corresponding to the input signal V in1, and let V out2 be the output signal corresponding to the input signal V in2.

A circuit is linear if and only if the following property holds:

For any two arbitrary input signals V in1 and V in2, the input signal (V in1 + V in2) gives rise to the output signal (V out1 + V out2).

Linear circuits are important, because they have simple mathematical properties. Most circuits used are therefore linear circuits. Any network of resistances, capacitances and self-inductances is a linear circuit. However, not all commonly used circuits are linear, for example a circuit with a diode is not a linear circuit.

Of course, a linear circuit is only linear in a certain range of signal amplitudes. For example, it could be linear only for positive signals of less than 5 V. However, if we make sure we only use the circuit in the linear range, we can safely apply all the results that are valid for linear circuits.
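The superposition property can be verified numerically. The sketch below models a linear circuit as convolution with a fixed impulse response (an arbitrary decaying exponential, chosen purely for illustration) and checks additivity:

```python
import numpy as np

# Model a linear circuit as convolution with a fixed impulse response
# (an arbitrary decaying exponential, for illustration only).
h = np.exp(-np.arange(50) / 10.0)

def circuit(v_in):
    return np.convolve(v_in, h)[: len(v_in)]

rng = np.random.default_rng(2)
v1 = rng.standard_normal(200)
v2 = rng.standard_normal(200)

# Additivity: the response to (v1 + v2) equals the sum of the responses.
superposition_holds = bool(np.allclose(circuit(v1 + v2), circuit(v1) + circuit(v2)))
```

A circuit containing a diode, modelled for instance by clipping negative voltages, would fail this test.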

We now need to introduce the important concepts of ‘impulse response’ and ‘transfer function’.

Impulse response. The impulse response h(t) is the response of a system to a delta function like input pulse.

$$V_{\rm in} (t)\,=\,\delta (t)\qquad\quad V_{\rm out} (t) =\,h(t)$$

Transfer function. The transfer function H(ω) is the Fourier transform of the impulse response.

$$H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }}\int {e^{ - j\omega t}\,h(t)\,dt}$$

An amplifier, and more generally any electronic measurement system, deforms the input signal. If a delta function like voltage pulse is applied at the input of the system, the output pulse is not a delta function, but is a pulse with a finite width and usually is amplified or attenuated.

In this chapter, we follow the usual convention that ‘j’ is used to denote the complex number ‘i’. In our notation, we hence have \(j^2 = -1\). The conventions used in this text for the Fourier transform are made clear by the equations below:

$$\begin{array}{l} h(t) = \dfrac{1}{{\sqrt {2\pi } }}\displaystyle\int {e^{ + j\omega t} } H(\omega )d\omega \\\noalign{} H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }}\displaystyle\int {e^{ - j\omega t} }\,h(t)dt \\ \end{array}$$

With this notation, the well-known properties of the delta function are written as

$$\delta (t) = \dfrac{1}{{2\pi }}\int {e^{j\omega t} d\omega}\quad\qquad f(a) = \int {f(t)\delta (t - a)}\,dt$$

And the Fourier transform of the delta function is

$$\dfrac{1}{{\sqrt {2\pi } }}\,=\,\dfrac{1}{{\sqrt {2\pi } }}\int {e^{ - j\omega t} \delta (t)\,dt}$$

It is now easy to prove the following properties of the impulse response and the transfer function for linear circuits.

(1)

    From the fact that h(t) is real, it immediately follows that \(H(\omega ) = H^*({-} \omega )\)

(2)

    The transfer function H(ω) for ω = 0 equals the integral of the impulse response

    $$H(0) = \dfrac{1}{{\sqrt {2\pi } }} \int\limits_{ - \infty }^{ + \infty } {h(t)\,dt}$$
(3)

    A perfect circuit without distortion and unit gain has \(H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }}\). Obviously, no amplifier is perfect up to infinite frequency and above some value of the frequency the absolute value of H(ω) will drop to zero.

(4)

    If a pulse V in(t) with Fourier transform V in(ω) is applied to the input of a linear circuit, the Fourier transform of the output is given by

    $$V_{\rm out} (\omega ) = \sqrt {2\pi } \,V_{\rm in} (\omega )\,H(\omega )$$
(5)

    For an arbitrary input signal, the output signal in the time domain can be obtained as follows

    $$V_{{\rm{in}}} (t) = \int\limits_{ - \infty }^{ + \infty } {V_{\rm in} (t^{\prime}) \delta (t^{\prime} - t)\,dt^{\prime}}$$

    Because the circuit is linear, the output signal is given by

    $$V_{{\rm{out}}} (t) = \int\limits_{ - \infty }^{ + \infty } {V_{\rm in} (t^{\prime})\,h(t - t^{\prime})\,dt^{\prime}} $$

    There is a small difficulty here in that δ(t' – t) = δ(t – t'). But the two expressions

    $$\int\limits_{ - \infty }^{ + \infty } {V_{\rm in} (t^{\prime}) h(t^{\prime} - t) dt^{\prime}\quad {\rm{and}} }\quad \int\limits_{ - \infty }^{ + \infty } {V_{\rm in} (t^{\prime}) h(t - t^{\prime}) dt^{\prime} }$$

    are not the same. The impulse response h(t) must be zero for all t < 0, otherwise there would be an output signal for an input signal that has not yet arrived. If we used the first integral, the output signal at time t would depend on the part of the input signal coming after the time t. This makes no sense; therefore, the second expression should be used.

    Properties 4 and 5 allow us to obtain the response of the system to an arbitrary input function from the knowledge of the impulse response and the transfer function.

(6)

    If a sine wave V in(t) = sin(ωt) is applied at the input of the system, the output is given by (see Exercise 1):

    $${\rm{V}}_{{\rm{out}}} (t) = \sqrt {2\pi } \left| {H(\omega )} \right|\,\sin (\omega t + \phi (\omega ))$$

    In this expression |H(ω)| and φ(ω) are the modulus and the phase of the transfer function

    $$H(\omega ) = \left| {H(\omega )} \right| e^{j\phi (\omega )}$$

    This result gives us an intuitive feeling of what the transfer function is. If a sine wave is applied at the input of a circuit, the output is also a sine wave, but the amplitude is proportional to |H(ω)| and the phase of the output sine wave relative to the input sine wave is given by φ(ω).

(7)

    An ideal voltage amplifier, or operational amplifier, is an amplifier with infinite input impedance and zero output impedance. It is often represented by a triangle (see Fig. 8.8). If an electronic circuit is composed of two parts coupled by an operational amplifier, the transfer function of the complete system is given by

    $$H(\omega ) = \sqrt {2\pi }\,H_1 (\omega )\,H_2 (\omega )$$

    This property is very useful when designing complex circuits. Indeed, an electronic circuit is often made up of a number of elementary sub-circuits connected by operational amplifiers.

(8)

    Detectors in nuclear electronics are current sources. We therefore need to know the response of a system to a delta function like current pulse. This is the ‘current impulse response’ \(\hat{h}(t)\). Similarly, the Fourier transform of the ‘current impulse response’ is the ‘current transfer function’ \(\hat{H}(\omega)\). Where necessary we will use a ‘^’ on top of the symbol h(t) or H(ω) to distinguish the two different kinds of impulse response and transfer functions.

    Assume a system with an input impedance Z in(ω); if a delta function like current pulse is applied at the input, the Fourier transform of the input voltage is given by

    $$V_{\rm in} (\omega ) = Z_{\rm in} (\omega )\,I(\omega ) = Z_{\rm in} (\omega )\,\dfrac{1}{{\sqrt[{}]{{2\pi }}}}$$

    Using property 4, we immediately get the following relation between the ‘transfer function’ and the ‘current transfer function’ (see Fig. 8.9).

    $$\hat H(\omega ) = H(\omega )\,Z_{\rm in} (\omega )$$

    and therefore

    $$V_{\rm out} (t) = \dfrac{1}{{\sqrt {2\pi } }}\int {e^{j\omega t}\,H\left( \omega \right)}\,Z_{\rm in} \left( \omega \right)\,d\omega$$

    In the important case where the input impedance is a real and frequency independent constant Z, we have \(\hat H(\omega ) = Z\ H(\omega )\) and \(\hat h(t) = Z\ h(t)\) and the difference between the two kinds of transfer function is only a difference in gain. If we consider normalised impulse responses or transfer functions, the two types of function therefore become identical.

(9)

    The Parseval identity is a relation between any function h(t) and its Fourier transform H(ω)

    $$\displaystyle\int\limits_{ - \infty }^{ + \infty } {\left| {h(t)} \right|^2 dt = } \displaystyle\int\limits_{ - \infty }^{ + \infty } {\left| {H(\omega )} \right|^2 d\omega }$$

    From this identity we immediately obtain two useful properties of the transfer function

    $$\begin{array}{l} \displaystyle\int\limits_0^{ + \infty } {|H(\omega )|^2 d\omega } = \dfrac{1}{2} \displaystyle\int\limits_{ - \infty }^{ + \infty } {h^2 (t) dt} \\\noalign{} \displaystyle\int\limits_0^\infty {\omega ^2 |H(\omega )|^2 d\omega } = \dfrac{1}{2} \displaystyle\int\limits_{ - \infty }^{ + \infty } {\left( {\dfrac{{dh(t)}}{{dt}}} \right)^2 dt} \\ \end{array}$$

    The second equation is derived using Parseval’s identity and the fact that the Fourier transform of the derivative of h(t) is given by jωH(ω).
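The first of these identities can be checked numerically for the low-pass filter treated later in this chapter, whose impulse response is \(h(t) = (1/RC)\,e^{-t/RC}\) for t ≥ 0 and whose transfer function gives \(|H(\omega)|^2 = 1/\bigl(2\pi(1+\omega^2R^2C^2)\bigr)\); both sides come out to 1/(4RC). The time constant below is a hypothetical value:

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule, to avoid depending on a particular library version.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

RC = 1.0   # hypothetical time constant, in seconds

# Impulse response of the low-pass filter, h(t) = (1/RC) exp(-t/RC) for t >= 0.
t = np.linspace(0.0, 50.0, 200_001)
h = (1.0 / RC) * np.exp(-t / RC)

# |H(omega)|^2 = 1 / (2 pi (1 + omega^2 R^2 C^2)) for the same filter.
omega = np.linspace(0.0, 5.0e3, 1_000_001)
H_sq = 1.0 / (2.0 * np.pi * (1.0 + (omega * RC) ** 2))

lhs = trapezoid(H_sq, omega)       # integral of |H|^2 over positive omega, about 1/(4RC)
rhs = 0.5 * trapezoid(h ** 2, t)   # half the integral of h^2 over all t, equal to 1/(4RC)
```

The small residual difference comes from truncating the frequency integral at a finite upper limit.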

Fig. 8.8

A shaping circuit often consists of a succession of shaping networks connected by amplifiers

Fig. 8.9

Response of the system to a delta function like current pulse

We will now illustrate the power of these methods with two simple examples. These examples correspond to very simple and commonly used circuits.

The integrator or low pass filter. The circuit shown in Fig. 8.10(a) is a low-pass filter. You should imagine that this circuit is connected on the left-hand side to a voltage source, hence to a circuit with zero impedance. In practice, this voltage source will usually be an amplifier. You should also imagine that the output voltage is measured with an ideal voltage meter, hence with a circuit with infinite input impedance.

Fig. 8.10

Two very simple circuits that are commonly used

We use Ohm’s law written in the frequency domain: V(ω) = Z(ω) I(ω). To calculate the impedance of a circuit, we use the familiar rules for combining resistors in series or in parallel, but treat any self-inductance L as a complex impedance jωL and any capacitance C as a complex impedance \(\dfrac{1}{{j\omega C}}\). In this way we readily find

$$V_{\rm in} (\omega ) = \left[R + \dfrac{1}{{j\omega C}}\right] I(\omega )$$

A second application of Ohm’s law gives a relation between the Fourier transforms of the input and output voltages.

$$V_{\rm out} (\omega ) = I(\omega ) \dfrac{1}{{j\omega C}} = \dfrac{{V_{\rm in} (\omega )}}{{\left[R + \dfrac{1}{{j\omega C}}\right]}} \dfrac{1}{{j\omega C}} = V_{\rm in} (\omega ) \dfrac{1}{{1 + j\omega RC}}$$

If the input voltage is a delta function like voltage pulse, \(V_{\rm in} (\omega ){\rm{ = }}\dfrac{1}{{\sqrt {2\pi } }}\) and from the definition of the transfer function we immediately find

$$H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }} \dfrac{1}{{1 + j\omega RC}}$$

Taking the Fourier transform of the above transfer function, one finds for the impulse response

$$\left\{ {\begin{array}{ll} {h(t)\, = \,\dfrac{1}{{RC}}e^{ - \frac{t}{{RC}}} } \hfill & {t \ge 0} \hfill\\\noalign{} {h(t)\, = \,0} \hfill & {t < 0} \hfill\\\end{array}} \right.$$

The actual calculation of this inverse Fourier transform is rather involved, but the result can be found in any handbook with tables of Fourier transforms. It is, however, not necessary to go through the Fourier transform calculation. The same result can be obtained quite simply by approximating the delta function by a square pulse with \(\Delta {\rm{V}} \times \Delta {\rm{t}} = {\rm{1}}\) and taking the limit \(\Delta {\rm{t}} \to {\rm{0}}\). Indeed, the voltage pulse of amplitude ΔV and duration Δt first charges the capacitor, and immediately after the pulse the voltage over the capacitance is 1/RC. This voltage then decays exponentially with decay constant RC, because the voltage source at the input side has zero output impedance.

To obtain the response of the low-pass filter to an arbitrary and time-dependent input voltage, we use property (5) of the transfer functions

$$\begin{array}{l} V_{\rm out} (t) = \displaystyle\int\limits_{ - \infty }^{ + \infty } {V_{\rm in} (t^{\prime}) h(t - t^{\prime}) dt^{\prime}} \\\noalign{} V_{\rm out} (t) = \displaystyle\int\limits_{ - \infty }^t {V_{\rm in} (t^{\prime}) \dfrac{{e^{ - \frac{{t - t^{\prime}}}{{RC}}} }}{{RC}}\,dt^{\prime}} \\ \end{array}$$

For a more intuitive derivation of this last result, consider the situation illustrated in Fig. 8.11. The input pulse can be seen as a sum of square pulses of duration Δt and amplitude V in(t'). The output pulse corresponding to each of these input pulses is just the impulse response with a weight [V in(t') Δt]. The total output at some point in time t is just the sum of all the preceding small pulses. In the limit Δt → 0, this sum becomes an integral and we recover the result above.

Fig. 8.11

This figure illustrates the relation between the input voltage and the output voltage for an integrator circuit. The input signal can be seen as a sum of short square pulses, each causing the impulse response as an output signal. The total output signal is the sum of the responses to all prior input pulses

If the duration of the pulse is short compared to the time constant RC, we have

$$V_{\rm out} (t) = \dfrac{1}{{RC}} \int\limits_{ - \infty }^t {V_{\rm in} (t^{\prime})\,dt^{\prime}}$$

This explains the name integrator.
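This integrating behaviour is easy to check numerically. The sketch below (component values and the input pulse are hypothetical) convolves a short square pulse with the discretised impulse response of the low-pass filter:

```python
import numpy as np

# Hypothetical values: RC = 100 microseconds, input pulse of 1 microsecond << RC.
RC = 100.0e-6
dt = 1.0e-7
t = np.arange(0.0, 5.0e-4, dt)

h = (dt / RC) * np.exp(-t / RC)    # discretised impulse response of the low-pass filter
v_in = np.zeros_like(t)
v_in[10:20] = 1.0                  # short square pulse of 1 V

v_out = np.convolve(v_in, h)[: len(t)]

peak = float(v_out.max())
expected = float(v_in.sum()) * dt / RC   # (1/RC) * integral of V_in dt
```

The peak of the output agrees with (1/RC) times the pulse area to well within a percent, as the formula above predicts for pulses much shorter than RC.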

The differentiator or high-pass filter. The differentiator or high-pass filter is illustrated in Fig. 8.10(b). Repeating the same calculation as above for the high-pass filter, one finds

$$H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }}\dfrac{{j\omega RC}}{{1 + j\omega RC}} = \dfrac{1}{{\sqrt {2\pi } }}\left( {1 - \dfrac{1}{{1 + j\omega RC}}} \right)$$

Notice that this transfer function is simply the constant \(\dfrac{1}{{\sqrt {2\pi } }}\) minus the transfer function of the low-pass filter. Therefore, its impulse response is readily obtained from the previous calculation. It is a delta function minus the impulse response of the low-pass filter. The impulse response of a high-pass filter is therefore given by

$$\left\{ {\begin{array}{*{20}c} {h(t) = \delta (t) - \dfrac{1}{{RC}}e^{ - \frac{t}{{RC}}} } \hfill & {t \ge 0} \hfill \\\noalign{} {h(t) = 0} \hfill & {t < 0} \hfill \\\end{array}} \right.$$

And the response to a voltage pulse is given by

$$V_{\rm out} (t) = V_{\rm in} (t) - \dfrac{1}{{RC}}\int\limits_{ - \infty }^t {V_{\rm in} (t^{\prime}) e^{ - \frac{{t - t^{\prime}}}{{RC}}} }\,dt^{\prime}$$

If the time constant RC is much shorter than the duration of the pulse, only the values of V in(t') in the vicinity of t' = t contribute to the integral and we can use a Taylor expansion of the function V in(t') around t' = t, using as expansion parameter (t' – t).

$$V_{\rm out} (t) = V_{\rm in} (t) - \dfrac{1}{{RC}} \int\limits_{ - \infty }^t {\left[ {V_{\rm in} (t) + \dfrac{{dV_{\rm in} (t)}}{{dt}}(t^{\prime} - t) + ...} \right] e^{ - \frac{{t - t^{\prime}}}{{RC}}} }\,dt^{\prime}$$

With the following change of variables: \(u = \dfrac{{t - t^\prime }}{{RC}}\), this becomes:

$$V_{\rm out} (t) = V_{\rm in} (t) - V_{\rm in} (t)\,\int\limits_0^\infty {e^{ - u} } \,du + \dfrac{{dV_{\rm in} (t)}}{{dt}}RC\,\int\limits_0^\infty {ue^{ - u} \,du + \ldots } $$
$$V_{\rm out} (t) \approx RC \dfrac{{dV_{\rm in} (t)}}{{dt}}$$

This explains the name ‘differentiator’. Figure 8.12 shows the impulse response and the transfer function for the low-pass filter and the high-pass filter.
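The differentiating behaviour can also be checked numerically. Instead of evaluating the convolution directly, the sketch below integrates the circuit equation of the high-pass filter with a simple Euler scheme (all component values and the input pulse are hypothetical):

```python
import numpy as np

# Hypothetical values: RC = 1 microsecond, Gaussian input pulse with
# sigma = 20 microseconds, so the pulse is much longer than RC.
RC = 1.0e-6
dt = 1.0e-8
t = np.arange(0.0, 2.0e-4, dt)
sigma = 2.0e-5
v_in = np.exp(-0.5 * ((t - 1.0e-4) / sigma) ** 2)   # slowly varying input

# Euler integration of the high-pass circuit: the capacitor voltage obeys
# dV_C/dt = (V_in - V_C)/RC, and the output is V_out = V_in - V_C.
v_c = 0.0
v_out = np.empty_like(v_in)
for i, v in enumerate(v_in):
    v_out[i] = v - v_c
    v_c += (v - v_c) * dt / RC

# The output should track RC * dV_in/dt.
dv_dt = np.gradient(v_in, dt)
max_err = float(np.max(np.abs(v_out - RC * dv_dt)))
peak = float(np.max(np.abs(RC * dv_dt)))
```

The residual deviation corresponds to the higher-order terms of the Taylor expansion, of relative size roughly RC/σ.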

Fig. 8.12

Impulse response and transfer function for a low-pass filter and a high-pass filter

8.3 Amplifiers for Particle Detectors

An amplifier contains a number of components arranged in such a way as to amplify a voltage at its input. Basically an amplifier contains a succession of circuits similar to the one shown in Fig. 8.13.

Fig. 8.13

A transistor is the basic element in an amplifier

A bare amplifier would be almost useless. Its gain would be extremely sensitive to variations in the supply voltage and the temperature. In order to stabilise the amplifier, it is necessary to have a feedback mechanism. A typical amplifier with feedback is illustrated in Fig. 8.14.

Fig. 8.14

In a voltage amplifier, the feedback resistances R 1 and R 2 stabilise the gain

In this figure, the triangle represents an amplifier without feedback. It has a large gain, a high input impedance and a low output impedance. It has two inputs and it amplifies the difference between the voltages at these two inputs. Usually one input is connected to the ground and the other input receives the signal to be amplified. Notice the symbols ‘+’ and ‘−’ (minus) at the two inputs. These indicate that the amplifier is used in inverting mode when connected as shown in Fig. 8.14: a positive input signal gives rise to a negative output signal. Therefore, this amplifier has negative feedback.

If the input impedance of the open loop amplifier is very large compared to R 1 and R 2, the current flowing through these two resistances must be the same.

$$\left\{ \begin{array}{l} I = \dfrac{{V_{\rm in} - V_a }}{{R_1 }} = \dfrac{{V_a - V_{\rm out} }}{{R_2 }} \\\noalign{} V_{\rm out} = - GV_a \\ \end{array} \right.$$
((8.1))

For the time being, we assume that G is a real and positive number.

Eliminating V a and I from Eq. (8.1), we find

$$\begin{array}{l} V_{\rm out} \, = \, - \dfrac{{R_2 \,V_{\rm in} }}{{\left[R_1+ \dfrac{{(R_1+ R_2 )}}{G}\right]}} \\\noalign{}V_{\rm out}\approx \, - \dfrac{{R_2 }}{{R_1 }}\ V_{\rm in}\qquad {\rm if}\ \dfrac{{R_2 }}{{R_1 }} \ll G \\\end{array}$$

Eliminating V a and V out, we find

$$\begin{array}{l} V_{\rm in} = I\left(R_1 + \dfrac{{R_2 }}{{G + 1}}\right) \\\noalign{} V_{\rm in} \approx I\,R_1 \qquad {\rm if}\ \dfrac{{R_2 }}{{R_1 }} \ll G \\ \end{array}$$

These equations show that if the open loop gain is large compared to the ratio R 2/R 1, the gain of the amplifier with feedback is simply given by the ratio of the two resistances and that the input impedance is equal to R 1. The amplifier with feedback is very stable and its gain is hardly influenced by moderate changes in the supply voltage or the temperature.
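These conclusions can be illustrated with a small numerical example (the component values and open loop gain below are hypothetical):

```python
# Hypothetical values: open loop gain 10^5, feedback resistors 1 kOhm and 100 kOhm.
G = 1.0e5
R1 = 1.0e3
R2 = 100.0e3

gain_exact = -R2 / (R1 + (R1 + R2) / G)   # closed-loop gain for finite G
gain_approx = -R2 / R1                    # ideal value for infinite G: -100

rel_dev = abs((gain_exact - gain_approx) / gain_approx)   # about 0.1%
```

Even a moderate open loop gain of 10^5 brings the closed-loop gain within about 0.1% of the ideal value −R2/R1, which is why the feedback amplifier is so insensitive to drifts in G.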

However, the amplifier just described is not a useful amplifier for nuclear electronics. The main problem is the presence of resistances connected to the input. As will be shown later in this chapter, such resistances give rise to noise and should be avoided. In addition, the gain is in general a complex function of ω. Moreover, this amplifier is adapted to amplifying voltage signals, whereas we are looking for an amplifier to be connected to a pulsed current source.

A possible way to avoid the problem of the resistor is to use a capacitor as the feedback element. However, at each pulse this capacitor will charge up and the circuit will quickly drift out of the linear range. One therefore needs a mechanism to reset the amplifier after each pulse. Special resetting mechanisms are sometimes used in very low-noise amplifier designs, thus completely avoiding the use of a feedback resistance. The most commonly used method for resetting the capacitor, however, is to have a feedback resistor in parallel with the feedback capacitor, as shown in Fig. 8.15. This resistor can have a very large value because it only needs to reset the capacitor after each pulse. Such an amplifier is called a charge-integrating amplifier.

Fig. 8.15

(a) A charge-integrating amplifier. (b) Output of a charge-integrating amplifier if the input is a series of short current pulses

Because of the presence of a capacitor, we need to use the Fourier transforms of the currents and voltages to calculate the gain and the input impedance of the amplifier shown in Fig. 8.15. We also need to take into account that the bare amplifier itself has a transfer function with a frequency-dependent amplitude and phase. The constant open loop gain in the previous calculation therefore has to be replaced by a complex and frequency-dependent gain G(ω). The gain of a realistic amplifier can often be approximated by

$$G(\omega ) = \dfrac{{G_0 }}{{1 + j\dfrac{\omega }{{\omega _k }}}}$$
((8.2))

For frequencies below ω k, the gain is real and constant. Around ω = ω k the phase turns by 90° and above ω k the gain decreases like 1/ω. For the charge-integrating amplifier, we perform a calculation similar to the one for the voltage amplifier shown in Fig. 8.14, but now working with the Fourier transforms of the voltages and currents. As before, we assume that the absolute value of the gain and the input impedance of the naked amplifier are very large. We have the following two equations

$$\left\{ {\begin{array}{*{20}c}{V_{\rm out} (\omega ) = - G(\omega ) V_{\rm in} (\omega ) } \\ {V_{\rm in} (\omega ) - V_{\rm out} (\omega ) = \left[ {\dfrac{1}{{R_f }} + j\omega C_f } \right]^{ - 1} .\ I_{\rm in} (\omega )} \\\end{array}} \right.$$
((8.3))

Eliminating V in from the two equations above gives

$$V_{\rm out} (\omega ) = - \dfrac{{G(\omega )}}{{G(\omega ) + 1}} .\ \dfrac{{R_f }}{{1 + j\omega C_f R_f }} .\ I_{\rm in} (\omega ) \approx - \dfrac{{R_f }}{{1 + j\omega C_f R_f }} I_{\rm in} (\omega )$$

The last equation gives us the relation between the output voltage and the input current. If the input current is a delta pulse, \(I_{{\rm{in}}} (\omega ) = \dfrac{1}{{\sqrt {2\pi } }}\), then the resulting output voltage is given by

$$V_{\rm out} = \hat H(\omega ) = \dfrac{{ - R_f }}{{\sqrt {2\pi } }} \dfrac{1}{{1 + j\omega C_f R_f }}$$

We notice that the current transfer function of a charge-integrating amplifier is the same as the transfer function of a low-pass filter multiplied by a factor −R f! The response of a charge-integrating amplifier to a delta-function-like current pulse is therefore given by

$$\left\{ {\begin{array}{*{20}c} {h(t)\, = \, - \dfrac{1}{{C_f }}e^{ - \frac{t}{{R_f C_f }}} } \hfill & {t \ge 0} \hfill\\ {h(t)\, = \,0} \hfill & {t\, < \,0} \hfill\\\end{array}} \right.$$

A charge-integrating amplifier will behave as an amplifier with a ‘current to voltage gain’ at low frequency equal to R f. Its response to a current pulse will be a sharp rising edge at the moment the pulse arrives, followed by an exponential decay with time constant R f C f.
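As a quick numerical sketch of this response (the component values below are illustrative assumptions, not taken from the text), the output for a short current pulse carrying a charge Q can be evaluated directly:

```python
import math

# Illustrative component values (assumptions, not from the text)
R_f = 100e6    # feedback resistor: 100 Mohm
C_f = 1e-12    # feedback capacitor: 1 pF
Q   = 1.6e-13  # input charge, about 1e6 electron charges (coulomb)

tau = R_f * C_f  # decay time constant R_f*C_f = 100 us

def v_out(t):
    """Response to a delta-like current pulse of total charge Q:
    V(t) = -(Q/C_f) * exp(-t/(R_f*C_f)) for t >= 0, zero before."""
    return -(Q / C_f) * math.exp(-t / tau) if t >= 0 else 0.0

peak = abs(v_out(0.0))   # peak amplitude Q/C_f = 0.16 V
v_tau = abs(v_out(tau))  # one time constant later: peak/e
```

With these values the decay time constant R f C f is 100 μs, of the order quoted later in this section.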

The input impedance of the charge-integrating amplifier is obtained by eliminating V out from the two equations (8.3):

$$V_{\rm in} (\omega ) = \dfrac{1}{{G(\omega ) + 1}} \dfrac{{R_f }}{{1 + j\omega C_f R_f }} .\ I_{\rm in} (\omega ) \approx \dfrac{1}{{G(\omega )}} \dfrac{{R_f }}{{1 + j\omega C_f R_f }} .\ I_{\rm in} (\omega )$$

Using this result together with Eq. (8.2), we get

$$\begin{array}{l} Z_{\rm in} (\omega ) = \dfrac{{V_{\rm in} (\omega )}}{{I_{\rm in} (\omega )}} = \dfrac{{R_f }}{{G_0 }}\dfrac{{\left(1 + j\dfrac{\omega }{{\omega _k }}\right)}}{{\left(1 + j\omega R_f C_f \right)}} \\ \\ Z_{\rm in} (\omega ) = \dfrac{{R_f }}{{G_0 }}\dfrac{{\left(1 + j\dfrac{\omega }{{\omega _k }}\right)\left(1 - j\omega R_f C_f \right)}}{{(1 + \omega ^2 R_f ^2 C_f ^2 )}} \\ \\ Z_{\rm in} (\omega ) = \dfrac{{R_f }}{{G_0 }}\dfrac{{\left(1 + j\dfrac{\omega }{{\omega _k }} - j\omega R_f C_f + \dfrac{{\omega ^2 }}{{\omega _k }}R_f C_f \right)}}{{(1 + \omega ^2 R_f ^2 C_f ^2 )}} \\ \end{array}$$

For low frequencies this impedance is a real number; it behaves like a pure resistor. However, as the frequency increases the impedance becomes inductive or capacitive, depending on the value of the parameters. If we choose the value of the feedback capacitance and feedback resistor such that \(R_f C_f = (1/\omega _k )\), the two imaginary parts cancel and the impedance becomes a real number for all values of ω. In fact, this condition ensures that the feedback is negative for all frequencies and is a necessary condition for the system to be stable. With this condition the input impedance of the system simply becomes

$$Z_{\rm in} (\omega ) = \dfrac{{R_f }}{{G_0 }}$$

The input impedance is a real and frequency-independent constant. Moreover, this impedance will not be very large. These are highly desirable properties for an amplifier for particle detection. The input impedance of the amplifier must be small compared to the internal impedance of the detector itself.
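A short check (with assumed, illustrative values for R f, G 0 and ω k) that the condition R f C f = 1/ω k indeed makes the input impedance real and equal to R f/G 0 at every frequency:

```python
# Illustrative values (assumptions): feedback network and open loop gain
R_f = 100e6               # feedback resistor (ohm)
G0  = 1e5                 # low-frequency open loop gain
w_k = 1e4                 # corner angular frequency of G(w) (rad/s)
C_f = 1.0 / (w_k * R_f)   # chosen so that R_f*C_f = 1/w_k (here 1 pF)

def z_in(w):
    """Input impedance Z_in(w) = (R_f/G0)*(1 + jw/w_k)/(1 + jw*R_f*C_f)."""
    return (R_f / G0) * (1 + 1j * w / w_k) / (1 + 1j * w * R_f * C_f)

# with R_f*C_f = 1/w_k the two factors cancel: Z_in = R_f/G0 = 1000 ohm
z_low, z_high = z_in(1.0), z_in(1e9)
```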

In Sect. 8.5, we will show that the feedback resistance is a source of noise. To minimise this noise, it is essential that the feedback resistance be as large as possible. On the other hand, the feedback capacitance cannot be made arbitrarily small: it should always be large compared to any stray capacitances that are unavoidably present in the system. In practice, it is difficult to make this capacitance less than about 1 pF. The result is that R f C f must be large if we want to have low noise. Typically, this time constant will be several hundred microseconds.

The output pulses of the amplifier shown in Fig. 8.15 are very long pulses and this will severely limit the count rate capability of the counter. We need a way to make the output pulses short while keeping the product R f C f large. This can be achieved with a shaping stage after the charge-integrating amplifier as shown in Fig. 8.16. The total transfer function of this amplifier is simply given by the product of the transfer functions of a charge-integrating amplifier and the transfer functions of a high-pass and a low-pass filter.

$$ \hat H(\omega ) = \dfrac{{ - R_f }}{{\sqrt {2\pi } }} \dfrac{1}{{1 + j\omega R_f C_f }} \dfrac{{j\omega R_2 C_2 }}{{1 + j\omega R_2 C_2 }} \dfrac{1}{{1 + j\omega R_3 C_3 }} $$
Fig. 8.16
figure 16

Charge-integrating amplifier with CR–RC shaping

If we choose the values of the different capacitances and resistances such that

$$ \tau = C_2 R_2 = C_3 R_3 \quad {\rm{and}} \quad \tau \ll \tau _f = C_f R_f $$

Omitting the factor ‘–R f’, this transfer function can be written as

$$\hat H(\omega ) = \dfrac{1}{{\sqrt {2\pi } }} \dfrac{1}{{1 + j\omega \tau _f }} \dfrac{{j\omega \tau }}{{(1 + j\omega \tau )^2 }}$$
((8.4))

Taking the inverse Fourier transform of the above expression (as can be obtained with some calculations starting from a table of Fourier transforms, see Exercise 2) one gets

$$\left\{ {\begin{array}{*{20}c} {\hat h(t)\, = \,\dfrac{1}{{\tau (\tau _f - \tau )^2 }}\left[(\tau ^2 \, + \,t(\tau _f - \tau ))\,e^{ - t/\tau }\, - \tau ^2 e^{ - t/\tau _f }\right]} \hfill & \quad{t \ge 0} \hfill\\ {\hat h(t)\, = \,0} \hfill & \quad{t < 0} \hfill\\\end{array}} \right.$$

If τ << τ f and for values of t of the order of τ, we have

$$\hat h(t) \approx \dfrac{t}{{\tau \tau _f }} e^{ - t/\tau } $$
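This limiting form can be checked numerically against the full inverse transform; the time constants below are assumed illustrative values satisfying τ << τ f:

```python
import math

# Assumed illustrative time constants with tau << tau_f
tau   = 1e-6   # shaping time constant (s)
tau_f = 1e-2   # feedback time constant R_f*C_f (s)

def h_exact(t):
    """Inverse transform of Eq. (8.4), up to the overall sign convention."""
    a = (tau**2 + t * (tau_f - tau)) * math.exp(-t / tau)
    b = tau**2 * math.exp(-t / tau_f)
    return (a - b) / (tau * (tau_f - tau)**2)

def h_approx(t):
    """Limiting form t/(tau*tau_f)*exp(-t/tau) for t of order tau."""
    return t / (tau * tau_f) * math.exp(-t / tau)

rel_errors = [abs(h_exact(t) - h_approx(t)) / h_approx(t)
              for t in (0.5 * tau, tau, 2 * tau, 3 * tau)]
# all relative deviations are well below one percent
```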

The output pulses of this amplifier are shown in Fig. 8.17(a). We have indeed managed to keep the feedback resistor R f large, while at the same time producing short output pulses. This CR−RC shaping is often used because of its simplicity. Its biggest drawback is the long negative tail after the main pulse. At high rates this can cause a baseline shift. In particular, if one wants to use this circuit for measuring pulse heights, this undershoot is unacceptable because it will broaden all peaks and in this way ruin the energy resolution.

Fig. 8.17
figure 17

(a) Impulse response of a charge-integrating amplifier with a simple CR−RC shaping stage. The impulse response in this figure was multiplied by τ f /τ, such that the integral over the positive part of the function approximately equals one. (b) Output pulse of a charge-integrating amplifier with pole zero cancellation for a delta function like input pulse

From the expression of the Fourier transform, one can see that this undershoot is caused by the factor \(\left(\dfrac{1}{{1 + j\omega \tau _f }}\right)\) in the transfer function. This introduces a pole close to ω = 0. If we find a way to cancel this factor, the undershoot will be removed. This can be achieved with a somewhat more complicated amplifier design and is referred to in the literature as ‘pole zero cancellation’. Figure 8.18 shows a charge-integrating amplifier with ‘pole zero cancellation’ and a simple shaping stage. The total transfer function of this amplifier is again simply the product of the contributions of the three parts. With proper choice of the values for the components, the pole in the factor \(\dfrac{{R_f }}{{1 + j\omega\ C_f R_f }}\) is exactly cancelled by the factor \((1 + j\omega\ R_0 C_1 )\). In this way, the undershoot is removed. The last factor represents the shaping circuit. With this particular choice of the shaping stage and with a proper choice of the values of the components, the resulting transfer function can be written as

$$\hat H(\omega ) = \dfrac{1}{{1 + j\omega \tau }} \cdot \dfrac{1}{{1 + (5/3)\,j\omega \tau - \omega ^2 \tau ^2 }}$$
Fig. 8.18
figure 18

(a) Charge-integrating amplifier with pole zero cancellation and a shaping stage. This figure shows one among many possibilities for the shaping stage. The transfer function for this amplifier is the product of the three parts shown in (a), (b), (c). With the proper choice of components the pole zero will be cancelled

The inverse Fourier transform of this last expression is shown in Fig. 8.17(b). This figure was obtained with a numerical Fourier transform using the program Mathematica.

Many different shaping circuits can be used and this is only one particular example. A popular shaping circuit is the CR−(RC)⁴ shaping: a CR circuit followed by four RC circuits connected through amplifiers. This produces a nearly Gaussian output pulse with a time structure given by

$$V_{\rm out} (t) \propto \left( {\dfrac{t}{\tau }} \right)^4 e^{ - t/\tau } $$
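For instance (τ below is an assumed value), one can verify numerically that this pulse peaks at t = 4τ, as follows from setting the derivative of t⁴e^(−t/τ) to zero:

```python
import math

tau = 1e-6  # assumed shaping time constant (s)

def v_out(t):
    """CR-(RC)^4 pulse shape, up to an overall normalisation."""
    return (t / tau)**4 * math.exp(-t / tau)

# locate the maximum numerically on a fine grid; analytically the
# derivative vanishes at t = 4*tau
ts = [i * tau / 1000.0 for i in range(1, 10001)]
t_peak = max(ts, key=v_out)
```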

The results obtained above are valid provided the open loop gain is well represented by Eq. (8.2). Obviously, above some large angular frequency, ω max, the gain will drop faster than expression (8.2) and the results above no longer apply. In particular, it is not possible to make the output pulse shorter than \( \approx 2\pi /\omega _{\max } \), therefore limiting the maximum rate the amplifier can handle.

The amplifier we have just described is very well suited for measuring weak and fast electrical pulses. It has a small and real input impedance and the feedback resistor can be chosen very large, minimising the noise.

8.4 The Thermal Noise of a Resistor

Any piece of matter is made up of electrons and nuclei. These charges are constantly in motion owing to the thermal agitation. It is obvious that these motions will induce small voltage and current fluctuations in any piece of material and in particular in any resistor.

A priori we do not know what these noise signals will look like, but the Norton and Thevenin theorems tell us what the electrical equivalent of the noise sources will be like. The Thevenin theorem states that any two-terminal network of resistors and voltage sources is equivalent to a single resistor in series with a single ideal voltage source. The Norton theorem states that it is also equivalent to a single resistor in parallel with a single ideal current source. An ideal voltage source has zero internal resistance and an ideal current source has infinite internal resistance. Obviously, the value of the resistor is the same in both cases and the current and voltage sources are related by V = RI.

From the Thevenin and Norton theorems, we expect that the thermal noise will be equivalent to a small current source or a small voltage source in parallel or in series with the resistor, as indicated in Fig. 8.19. The average value of these voltages or currents will be zero, but at any particular instant in time we expect to measure a small but non-zero value of the voltage or current. The mean square deviation will be different from zero.

$$\left\langle {V_{\rm noise}^2 } \right\rangle \ne 0;\ \left\langle {I_{\rm noise}^2 } \right\rangle \ne 0$$
Fig. 8.19
figure 19

Equivalent networks for the thermal noise in a resistor. The two diagrams above will be equivalent if V noise = R I noise

As usual, the angle brackets \(\left\langle x \right\rangle \) denote the average of x. From the central limit theorem, we also know that these fluctuations will have a Gaussian distribution.

The voltage noise or current noise is completely characterised by its mean square deviation. The first important point to be made is that this noise does not depend on the nature of the resistor, it only depends on the value of the resistance. To see this consider a thought experiment illustrated in Fig. 8.20.

Fig. 8.20
figure 20

The second law of thermodynamics requires that the thermal noise of all resistors with the same value of the resistance is the same

Two resistors with the same value of the resistance but made from different materials are connected as shown in Fig. 8.20. Each resistor is enclosed in a thermally isolated box. Assume for a moment that the resistor in the left box has noise characterised by a given \(\left\langle {I_{\rm noise}^2 } \right\rangle \ne 0\) and the resistor in the right box has no noise. The current generator in the left box will induce equal currents in the two resistances and in this way dissipate heat in each resistor. The energy producing this heat is extracted from the left box by the noise current generator, so with time the left box will become cold and the right box will become hot. This situation would be in total contradiction with the second law of thermodynamics. The only way out is to assume that both resistors have exactly the same noise. Following the same argument but considering two different resistors, one readily finds that \(\left\langle {I_{\rm noise}^2 } \right\rangle \propto \dfrac{1}{R}\) and \(\left\langle {V_{\rm noise}^2 } \right\rangle \propto R\).

We are now only left with the task of finding the proportionality constant. If there is one particular resistor for which we can calculate the thermal noise, we have solved the problem. There is indeed such a device, namely the ideal transmission line. We will show that an ideal transmission line will behave like a purely ohmic resistor with a resistance equal to its characteristic impedance. Moreover, we will show that it is possible to calculate the thermal noise in a transmission line.

A transmission line is a very important concept in fast electronics. We tend to think of a connecting wire in an electronics circuit as something that has negligible capacitance and negligible self-inductance and where any voltage applied at one end of the wire is immediately present at the other end. As faster and faster signals are used, this assumption becomes less and less valid. A practical rule of thumb is the following: we can think of connecting lines in an electronics circuit in the conventional way as long as the length of the wires is less than 2% of the rise time of the signal multiplied by the speed of light. In nuclear detectors, electronic pulses as short as 10 ns are common. For such pulses the maximum allowable wire length is 6 cm! If any longer wires are used the behaviour of the circuit will be totally unpredictable. To transport a signal over a longer distance one needs to use a transmission line!
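The rule of thumb above translates into a one-line calculation; the 10 ns rise time is the example from the text:

```python
c = 3.0e8  # speed of light (m/s)

def max_wire_length(rise_time):
    """Rule of thumb from the text: conventional wiring is acceptable
    up to 2% of the rise time multiplied by the speed of light."""
    return 0.02 * rise_time * c

length = max_wire_length(10e-9)  # 10 ns pulse -> 0.06 m, i.e. 6 cm
```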

Transmission lines come in many variants, but as far as nuclear electronics is concerned the most common form of a transmission line is the shielded coaxial cable. A coaxial cable is shown schematically in Fig. 8.21.

Fig. 8.21
figure 21

A typical coaxial cable. The braided shield is made from tightly woven fine wires such as to allow the cable to remain flexible

This geometry serves the following purposes: the braided shield protects the inner conductor carrying the signal from any pick-up noise and the structure of the wire makes it behave as a transmission line.

For the purpose of the present argument we will consider an ideal transmission line with a capacitance per unit length C and a self-inductance per unit length of L. Capacitance and self-inductance are constant along the line. Figure 8.22 shows a transmission line represented as a set of discrete components. An ideal transmission line has negligible ohmic resistance in the conductors and no leakage currents. While leakage currents are usually indeed negligible, the resistance of the wire is usually not negligible. This resistance causes an attenuation of the signal. The exact value of this attenuation depends on the details of the structure of the line, but an attenuation length of the order of 100 m is typical at a frequency of ≈100 MHz. An ideal transmission line without resistance does not exist. It is nevertheless useful for the purpose of the present argument.

Fig. 8.22
figure 22

A transmission line can be seen as a succession of identical infinitesimal networks like the ones shown here

A transmission line can be viewed as a succession of a large number of sections of length Δx, each with a capacitance CΔx and a self-inductance LΔx. In the limit Δx → 0 this will behave like a real transmission line.

Consider one small section of the line with length Δx. In the time domain, a capacitor and a self-inductance are each described by a first-order differential equation. Over this section the change in voltage and the change in current are given by

$$\left\{ {\begin{array}{*{20}c} {\Delta V = - L\Delta x\dfrac{{dI}}{{dt}}} \\ \\ {\Delta I = - C\Delta x\dfrac{{dV}}{{dt}}} \\\end{array}} \right.$$

Taking the limit for Δx → 0, we obtain the following two differential equations for an ideal transmission line.

$$\left\{ {\begin{array}{*{20}c} {\dfrac{{dV(x,t)}}{{dx}} = - L\dfrac{{dI(x,t)}}{{dt}}} \\ \\ {\dfrac{{dI(x,t)}}{{dx}} = - C\dfrac{{dV(x,t)}}{{dt}}} \\\end{array}} \right.$$

These equations hold at any point along the line. From these two first order equations we obtain the following two second order equations:

$$\left\{ {\begin{array}{*{20}c} {\dfrac{{d^2\,V(x,t)}}{{dx^2 }} = LC\dfrac{{d^2\,V(x,t)}}{{dt^2 }}} \\ \\ {\dfrac{{d^2 I(x,t)}}{{dx^2 }} = LC\dfrac{{d^2 I(x,t)}}{{dt^2 }} } \\\end{array}} \right.$$

The current and the voltage at each point along the line satisfy the string equation!
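One can check numerically with finite differences that a travelling pulse of the form f(x − v₀t), with v₀ = 1/√(LC), satisfies this equation. The per-unit-length values L and C below are assumptions chosen to give a familiar 50 Ω line:

```python
import math

# Assumed per-unit-length values, typical of a 50-ohm coaxial line
L = 250e-9    # self-inductance per metre (H/m)
C = 100e-12   # capacitance per metre (F/m)
v0 = 1.0 / math.sqrt(L * C)   # propagation velocity (2e8 m/s here)

def V(x, t):
    """Candidate solution: a Gaussian pulse travelling towards +x."""
    u = x - v0 * t
    return math.exp(-u * u)

# central second differences at an arbitrary point (x0, t0)
x0, t0 = 0.3, 1.0e-9
hx, ht = 1e-4, 1e-13
d2x = (V(x0 + hx, t0) - 2 * V(x0, t0) + V(x0 - hx, t0)) / hx**2
d2t = (V(x0, t0 + ht) - 2 * V(x0, t0) + V(x0, t0 - ht)) / ht**2
# the string equation requires d2x = L*C*d2t
```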

One readily verifies that any function of the variable \((x - t/\sqrt {LC} )\) or of the variable \((x + t/\sqrt {LC} )\) is a solution to the string equation. The most general solution takes the form

$$\begin{array}{l} V(x,t) = f_1 \left(x - \dfrac{t}{{\sqrt {LC} }}\right) + f_2 \left(x + \dfrac{t}{{\sqrt {LC} }}\right) \\ \\ I(x,t) = \sqrt {\dfrac{C}{L}} f_1 \left(x - \dfrac{t}{{\sqrt {LC} }}\right) - \sqrt {\dfrac{C}{L}} f_2 \left(x + \dfrac{t}{{\sqrt {LC} }}\right) \\ \end{array}$$

In these equations f 1 and f 2 are arbitrary functions of one variable. This solution represents a sum of two waves travelling at a velocity \(v_0 = \dfrac{1}{{\sqrt {LC} }}\), one wave travelling towards increasing values of x, the other wave travelling towards decreasing values of x. The actual shape of these waves will be determined by the boundary conditions. If we apply a variable voltage to one end of the line, this signal will travel along the line with a velocity v 0. If only one wave is present, the current and the voltage at any point along the line are related by

$$V = \sqrt {\dfrac{L}{C}}\,{\rm{I}}.$$

This will also apply at the end of the line. Hence, if we apply a voltage V at the end of the line, we will induce a current in the line given by the equation above; therefore, the line will behave as a resistor with resistance equal to the characteristic impedance of the line

$$Z_0 = \sqrt {\dfrac{L}{C}}.$$

This is a remarkable result because this impedance is a real number; this impedance is purely ohmic! Of course this is only true if the only wave present in the transmission line is the wave travelling away from the measurement point. After some time this wave will reach the other end of the line and be reflected back. When this reflected wave reaches the measurement point the relation no longer holds. Only an infinitely long transmission line will behave like a true resistor.
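Plugging in per-unit-length values typical of a common coaxial cable (illustrative numbers, not taken from the text) gives the familiar 50 Ω impedance:

```python
import math

# Per-unit-length values roughly those of a common 50-ohm coaxial
# cable such as RG-58 (illustrative assumptions, not from the text)
L = 250e-9    # self-inductance per metre (H/m)
C = 100e-12   # capacitance per metre (F/m)

Z0 = math.sqrt(L / C)        # characteristic impedance: 50 ohm
v0 = 1.0 / math.sqrt(L * C)  # signal velocity: 2e8 m/s, about 2c/3
```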

Because of the arguments at the beginning of this section, the thermal noise of the line will be the same as the thermal noise of a resistor with the same value of the resistance. But for this unusual resistor it is possible to calculate the thermal fluctuations! The voltage V(x,t) and the current I(x,t) satisfy the string equation. A string in thermal equilibrium with its surroundings will vibrate in all its stationary vibration modes. Assume a transmission line with length D and with open ends. The stationary solutions are of the form f(x)·g(t) and the boundary conditions are that the current at the beginning and the end of the line should be zero: I(x = 0,t) = I(x = D,t) = 0.

We now look for stationary solutions of the form:

V(x,t) = V'(x) · V"(t)

I(x,t) = I'(x) · I"(t)

The stationary solutions can be found to be (see Exercise 6):

$$\begin{array}{*{20}c} {\left\{ \begin{array}{l} I_n (x,t) = I_n \sin \left( {\dfrac{{n\pi x}}{D}} \right)\sin \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right) \\ V_n (x,t) = I_n \,Z_0 \cos \left( {\dfrac{{n\pi x}}{D}} \right)\cos \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right) \\ \\ \end{array} \right.} &\quad {n = 1,\,\,\,..\infty } \\\end{array}$$

These are the well-known stationary vibration modes of the string; I n and φ n are arbitrary constants determined by the initial conditions. Figure 8.23 shows the solutions for n = 1, 2 and 3.

Fig. 8.23
figure 23

The fundamental frequency and the first two harmonics of a vibrating string. The value of the current in an ideal transmission line with open ends follows the same pattern

The energy contained in a wave of wave number n is given by

$$ E_n = \frac{1}{2} \int\limits_0^D {(CV^2 + LI^2 )\,dx} = \frac{{I_n^2 }}{4}\,L D $$

This relation allows us to express the amplitude of the wave as a function of the energy of the wave. The voltage that will be observed at the end of the transmission line is given by

$$ V(x = 0) = \sum\limits_n {V_n } = \sum\limits_n {\sqrt {\dfrac{{4E_n }}{{DC}}} } \cos \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right) $$

The average of this voltage is zero and the average square voltage is obtained by averaging these elementary noise waves over the amplitudes and over time. Let us first consider the time averaging only

$$\left\langle {V^2 (x = 0)} \right\rangle = \dfrac{1}{T} \int\limits_{ - T/2}^{ + T/2} {V^2 (x = 0)\; dt = \dfrac{1}{T} \int\limits_{ - T/2}^{ + T/2} {\sum\limits_n {\sum\limits_{n^{\prime}} {V_n V_{n^{\prime}} } } } \; dt} $$

The integral is to be taken in the limit that T goes to infinity. In calculating this time average, all the terms where n is different from n′ vanish. To see this, just remember that cos(ω 1 t) cos(ω 2 t) = ½ cos[(ω 1 + ω 2)t] + ½ cos[(ω 1 − ω 2)t]. The sum reduces to

$$\begin{array}{l} \left\langle {V^2 (x\, = \,0)} \right\rangle \, = \,\dfrac{1}{T}\,\displaystyle\int\limits_{ - T/2}^{ + T/2} {\displaystyle\sum\limits_n {V_n^2 } \,dt}\\ \quad\quad\quad\quad\quad\;= \,\displaystyle\sum\limits_n {\dfrac{{4E_n }}{{DC}}} \,\dfrac{1}{T}\,\displaystyle\int\limits_{ - T/2}^{ + T/2} {\cos ^2 \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right)\,dt}\\\end{array}$$

Using \(\cos ^2 (x) = \dfrac{1}{2}[1 + \cos (2x)]\) this becomes

$$\left\langle {V^2 (x = 0)} \right\rangle = \sum\limits_n {\dfrac{{2E_n }}{{DC}}} $$

If the transmission line is in thermal equilibrium with its surroundings, each mode of vibration will be present with a random amplitude and a random phase. According to the equipartition theorem, each quadratic term in the Hamiltonian contributes kT/2 to the energy of the system. The Hamiltonian per unit volume of the electromagnetic field is given by \(\dfrac{1}{{8\pi }}(E^2 + B^2 )\). Therefore, each mode of vibration, having one electric and one magnetic quadratic term, will acquire an average energy <En> = kT, where k is the Boltzmann constant and T is the absolute temperature. The average noise voltage that will be observed at the end of the line is therefore given by

$$\left\langle {V^2 (x = 0)} \right\rangle = \sum\limits_n {\dfrac{{2kT}}{{DC}}} $$

The sum runs over all frequencies that are present in the system, each contributing the same amount to the total noise.

We can now calculate the noise contribution of all the waves with angular frequency in the interval (ω, ω + dω). Since \(\omega = \dfrac{{n\pi v_0 }}{D}\), the number of waves in the interval dω is given by \(dn = \dfrac{D}{{\pi v_0 }}d\omega \); this noise contribution is given by

$$d\left\langle {V^2 (x = 0)} \right\rangle = \dfrac{2}{\pi }kT Z_0\,d\omega $$

Notice that the length of the line D has disappeared from the equation and this equation also applies for an infinitely long transmission line. However, this noise is observed by an instrument that is characterised by a transfer function H(ω) and from property 6 of the transfer functions we know that the observer sees, for each wave with angular frequency ω, an amplitude \(\sqrt {2\pi } |H(\omega )|\).

Hence the total ‘observed’ noise, integrated over all frequencies is given by

$$\left\langle {V_{\rm noise}^2 } \right\rangle = 4kT Z_0 \int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } $$

At the end of the open transmission line we will measure a noise voltage given by the formula above. We can hence conclude that for any resistor the noise voltage and the noise current are characterised by

$$\left\{ \begin{array}{l}\left\langle {V_{\rm noise}^2 } \right\rangle = 4kTR \displaystyle\int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } \\ \left\langle {I_{\rm noise}^2 } \right\rangle = \dfrac{{4kT}}{R} \displaystyle\int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } \end{array}\right.$$
((8.5) and (8.6))

In this equation k is the Boltzmann constant and T is the absolute temperature. Equations (8.5) and (8.6) are in fact integrals over the current impulse response, but this distinction is usually not made in the literature, and will also not be made here.

Equations (8.5) and (8.6) should be understood as follows. The voltage over a resistor has a fluctuating value. If this voltage is measured at random moments, every time a different value will be obtained. Since this noise originates from a large number of random fluctuations, the voltages will have a Gaussian distribution. The average of this distribution is zero and the expression above represents the variance of this distribution. The square root of this quantity is the standard deviation (or r.m.s.) of the current or voltage noise. This situation is illustrated in Fig. 8.24.

Fig. 8.24
figure 24

Pulse samples taken at random times have a Gaussian amplitude distribution. The r.m.s. of this distribution is a measure of the noise present in the system

Each frequency interval contributes in the same way to the noise. This is so-called ‘white noise’. If the measurement system has a unit gain up to a maximum frequency, f max, and then has a sharp cutoff, we have

$$\int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } = f_{\max } = {\rm{Bandwidth\ of\ the\ circuit}}$$

The mean square noise is proportional to the bandwidth and hence the r.m.s. noise is proportional to the square root of the bandwidth.

Let us illustrate this with a numerical example: at room temperature (≈293 K), \(4kT=1.62\times10^{-20}\,[{\rm V}^2/({\rm Hz}\cdot\Omega)]\). The thermal noise of a \(1\,{\rm M}\Omega\) resistor at room temperature, measured with a voltmeter with a bandwidth of 100 MHz, has an r.m.s. voltage noise of 1.27 mV. Assume a detector with a fairly typical capacitance of 30 pF. It takes about \(2\times10^{5}\) electron charges to produce a voltage of 1 mV over this detector. This example makes it abundantly clear that thermal noise is going to be an essential consideration in nuclear electronics.
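The numbers in this example can be reproduced directly (the Boltzmann constant and electron charge are standard values):

```python
import math

k = 1.380649e-23   # Boltzmann constant (J/K)
T = 293.0          # room temperature (K)
R = 1e6            # resistance: 1 Mohm
bw = 100e6         # measurement bandwidth: 100 MHz

v_rms = math.sqrt(4 * k * T * R * bw)   # r.m.s. thermal noise voltage

# number of electron charges giving 1 mV on a 30 pF detector
C_det = 30e-12
n_electrons = C_det * 1e-3 / 1.602e-19
```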

Using property 9 of the transfer functions we can also write the noise as a function of the impulse response in the time domain:

$$\left\{ {\begin{array}{*{20}c} {\left\langle {V_{\rm noise}^2 } \right\rangle = 2kTR \displaystyle\int\limits_{ - \infty }^{ + \infty } {h(t)^2\,dt} } \\ {\left\langle {I_{\rm noise}^2 } \right\rangle = \dfrac{{2kT}}{R}\displaystyle\int\limits_{ - \infty }^{ + \infty } {h(t)^2\,dt} } \\\end{array}} \right.$$

Equation (8.5) seems to imply that, as the value of the resistor goes to infinity, the voltage noise over this resistor will also go to infinity. While this is strictly speaking correct, one will never measure an infinite noise voltage because the input impedance of the measuring device will always have some large but finite value. If R inp denotes the value of the input impedance of the voltage meter, the measured voltage will be given by

$$\sqrt {\left\langle {V_{\rm noise}^2 } \right\rangle _{\rm measured} } = \sqrt {\left\langle {V_{\rm noise}^2 } \right\rangle } \dfrac{{R_{\rm inp} }}{{R + R_{\rm inp} }} = \dfrac{{R_{\rm inp} }}{{R + R_{\rm inp} }} \sqrt {4kTR\int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } } $$

Taking now the limit R to infinity, one sees that the measured noise voltage goes to zero. Similarly, if one measures the noise current of a resistance that goes to zero, the measured value will also go to zero.
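A small sketch of this loading effect (the bandwidth and input impedance below are assumed illustrative values) showing that the measured noise voltage indeed falls off once R is much larger than R inp:

```python
import math

k, T = 1.380649e-23, 293.0
bw = 1e6       # measurement bandwidth (Hz), assumed
R_inp = 1e6    # input impedance of the voltmeter (ohm), assumed

def v_measured(R):
    """Loaded r.m.s. noise voltage of a resistor R seen by a voltmeter
    of finite input impedance R_inp (unit gain up to the bandwidth bw)."""
    return (R_inp / (R + R_inp)) * math.sqrt(4 * k * T * R * bw)

# the measured noise voltage decreases again once R >> R_inp
samples = [v_measured(R) for R in (1e6, 1e9, 1e12)]
```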

8.5 Resistor and Transistor Noise in Amplifiers

Any resistor in the detector readout electronics contributes to the noise, but obviously, the resistors in the front-end part, close to the detector itself, will make the biggest contribution to the noise. We have already seen in previous sections that there are at least two unavoidable resistances in the front-end part of the readout electronics. First, there is the resistance of the detector itself and there is the feedback resistance. This feedback resistance is connected on one side to the input of the amplifier and on the other side to the output of the amplifier. Since the amplifier has a low output impedance, from the noise point of view, this is the same as a resistor between the input and the ground. In both cases this behaves as a resistor that is in parallel with the detector itself. These are collectively referred to as a parallel resistor. Below we shall study what the noise effect of such a parallel resistor will be. We also consider the effect of a resistor between the detector and the input of the amplifier. There are often technical reasons to have such a resistor. Such a series resistor has a different noise effect, as will become clear below. Finally, we discuss the noise contribution stemming from the resistance of the front-end transistor itself.

8.5.1 Noise Contribution of a Parallel Resistor or a Series Resistor

Consider a charge amplifier with a resistor in parallel to the detector element, as illustrated in Fig. 8.25. The amplifier has a real and frequency independent input impedance and this impedance is small compared to the internal resistance of the detector. We also consider the output signal normalised to unit gain. Therefore, it is not necessary to distinguish the current and voltage impulse response or transfer functions. The parallel resistor R p will generate a noise current given by

$$ \left\langle {I_{\rm noise}^2 } \right\rangle = \dfrac{{4kT}}{{R_p }} \int\limits_0^{ + \infty } {|H(\omega )|^2\,d\omega } $$
Fig. 8.25
figure 25

A charge-integrating amplifier with a resistor in parallel with the detector

What matters is how the noise generated by the resistor compares to the detector signal. The detector is a current source and the signal has a certain amount of charge, usually expressed as a certain number of electrons. If we assume that this charge signal is generated in a short time, the response of the amplifier to a signal is equal to the impulse response. The noise is usually expressed by the quantity ‘equivalent noise charge’ (ENC), which is defined as a hypothetical charge produced in the detector that gives a peak output response equal to the r.m.s. of the noise. The concept of ‘equivalent noise charge’ is illustrated in Fig. 8.27.

If the impulse response of the amplifier to a unit current pulse is given by \(h(t)\), the response to a charge Q is given by \(Q\,h(t)\) and the maximum value reached by this pulse is \(Q\,h_{\max }\). From the definition of ENC we therefore have

$$\begin{array}{l} (ENC_p\ h_{\max } )^2 \, = \,\left\langle {I_{\rm noise}^2 } \right\rangle\\ \\ENC_p^2 \, = \,\dfrac{{4kT}}{{R_p }}\dfrac{1}{{h_{\max }^2 }}\displaystyle\int\limits_0^\infty{|H(\omega )|^2 } \,d\omega\\\end{array}$$

This equation is not elegant since it mixes quantities in the time domain and quantities in the frequency domain. Using property 9 of the transfer functions this can be written as

$$ENC_p^2 = \dfrac{{4kT}}{{R_p }} \dfrac{1}{{2 h_{\max }^2 }} \int\limits_{ - \infty }^\infty {\left[ {h(t)} \right]^2 dt}$$

The equivalent noise charge is usually expressed as a number of electron charges, rather than as a number of Coulombs.

Let us now consider the effect of a resistor R s between the detector and the amplifier. Such a resistor is in series with the detector. The noise current source associated with this resistor is not equivalent to a noise source in parallel with the detector and therefore cannot be compared directly with a detector signal. We should calculate the noise spectrum of an imaginary current source in parallel with the detector, which will generate exactly the same noise currents as the noise of the series resistance R s.

For this calculation, it is convenient to start from the representation of the noise of the resistor R s as a noise voltage source in series with the resistor, as shown in Fig. 8.26. Consider the circuit loop formed by the detector, the resistor R s and the input of the amplifier. In any realistic set-up, the impedance of the detector capacitance will be by far the largest impedance in the loop. At this point we have to remember that the noise voltage of the resistor is due to a sum of a large number of elementary noise voltage signals. In the frequency interval (ω, ω+dω), the elementary noise voltage signals are given by

$$V_{\rm noise} (t) = \sqrt 2 a \sin (\omega t + \varphi )$$
Fig. 8.26
figure 26

A charge-integrating amplifier with a resistor in series with the detector

Fig. 8.27
figure 27

Output signal of the amplifier with noise. The output pulse corresponding to a charge pulse equal to one ENC is also shown

In this expression ‘a’ is a real and positive number representing the amplitude for a particular elementary noise wave. The phase ϕ has a random probability distribution with each value of the phase being equally likely. The amplitudes ‘a’ also have a probability distribution, but we do not need to know this distribution. We only need to require that this distribution is such that Eq. (8.5) is satisfied. This means that the probability distribution of ‘a’ has to satisfy the condition

$$\left\langle {V_{\rm noise}^2 } \right\rangle = \left\langle {a^2 } \right\rangle = 4kT R_s \dfrac{{d\omega }}{{2\pi }}$$

Each of these elementary noise waves will cause an elementary noise current in the detector. The noise current generated by each elementary noise signal is given by

$$I_{\rm noise}\,=\,\omega\,C_d\,\sqrt 2a\,\sin (\omega t + \varphi + \pi /2)$$

In calculating the average square noise current the phase factor π/2 is unimportant, since all values of the phase are equally likely. As before, we have to remember that this sine wave will be observed by some electronics characterised by a transfer function H(ω), so that the amplitude of the wave is multiplied by \(\left| {H(\omega )} \right|\). The series resistor therefore induces a noise current identical to the current induced by a current source in parallel with the detector, with an average square noise current given below.

$$\langle I_{\rm noise}^2 \rangle = C_d^2 \,4kT\,R_s \,\int\limits_0^\infty {{\rm{\omega }}^{\rm{2}} \left| {H({\rm{\omega }})} \right|^2 \,d{\rm{\omega }}}$$

The equivalent noise charge caused by a series resistor is therefore given by

$$\begin{array}{l} (ENC_{\rm series}\,h_{\max } )^2\,=\,C_d^2\,4kT\,R_s\,\displaystyle\int\limits_0^\infty\,{\omega ^2 \left| {H(\omega )} \right|^2\,d\omega } \\ ENC_{\rm series}^2\,=\,\dfrac{{4kT}}{{h_{\max }^2 }}\,C_d^2\,R_s\,\displaystyle\int\limits_0^\infty {\omega ^2\,\left| {H(\omega )} \right|^2\,d\omega } \\ \end{array}$$

This equation is not elegant since it mixes quantities in the time domain and quantities in the frequency domain. Using property 9 of the transfer functions this can be written as

$$ENC_{\rm series}^2\,=\,\dfrac{{4kT}}{{2h_{\max }^2 }}\,C_d^2\,R_s\,\int\limits_{ - \infty }^{ + \infty } {\left( {\dfrac{{dh(t)}}{{dt}}} \right)^2\,dt}$$

The expressions for the ENC of a series resistor and a parallel resistor are usually written in a slightly different way

$$\begin{array}{l} ENC^2 = \dfrac{{4kT}}{{R_p }}\tau a_1\,+\,4kT\,C_d^2\,R_s \dfrac{{a_2 }}{\tau } \\ a_1 = \dfrac{1}{{2\tau \hat h_{\max }^2 }} \displaystyle\int\limits_{ - \infty }^{ + \infty } {h^2 (t)\,dt} \\ a_2 = \dfrac{\tau }{{2\hat h_{\max }^2 }} \displaystyle\int\limits_{ - \infty }^{ + \infty } {\left( {\dfrac{{dh}}{{dt}}} \right)^2\,dt} \\ \end{array}$$
((8.7))

The symbol τ represents the risetime of the output signal. It is easy to see that the quantities a 1 and a 2 are dimensionless numbers. Moreover, for any realistic shaping function, these coefficients are of order unity. This is illustrated in Table 8.1, which shows the value of these coefficients for a few simple shaping functions.

Table 8.1 Value of the coefficients a 1 and a 2 for some typical shaping functions
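Table 8.1 itself is not reproduced in this extract. As a sketch of how such coefficients are obtained (the CR-RC shaping function used here is an assumed, typical example), a 1 and a 2 can be evaluated numerically from their definitions in Eq. (8.7):

```python
import numpy as np

# Shape coefficients a1 and a2 for an assumed CR-RC shaper,
# h(t) = (t/tau) exp(1 - t/tau), normalised so that the peak h(tau) = 1.
# h(t) = 0 for t < 0, so integrating over [0, 40*tau] approximates the
# integrals over the whole real line in Eq. (8.7).
tau = 1.0
t, dt = np.linspace(0.0, 40.0 * tau, 400_001, retstep=True)
h = (t / tau) * np.exp(1.0 - t / tau)
dh = np.gradient(h, t)          # numerical dh/dt

h_max = h.max()                 # = 1 by construction
a1 = dt * np.sum(h**2) / (2.0 * tau * h_max**2)
a2 = tau * dt * np.sum(dh**2) / (2.0 * h_max**2)
print(a1, a2)   # both close to e^2/8 ~ 0.92, of order unity as stated
```

For this particular shaper both coefficients come out equal to e²/8 ≈ 0.92, consistent with the statement that realistic shaping functions give coefficients of order unity.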

Equation (8.7) is the commonly used expression for the noise of an amplifier. These equations make it clear that any parallel resistor should be as large as possible; here we see the reason why the impedance of the feedback resistor should be large. The equations also tell us that any series resistor, if present at all, should be as small as possible. The series resistor and the parallel resistor in this expression are, in fact, the combined effect of a number of different resistors at different places in the circuit. Below we show that, for example, the first transistor of the amplifier has a noise effect as if it were a resistor in series with the detector.

It is instructive to look at a numerical example. Consider a detector with a capacitance of 30 pF and a series resistor of 1 kΩ. Assume, furthermore, a risetime of 100 ns and a feedback resistor of 100 MΩ. In this calculation, we take the coefficients a 1  = a 2 = 1. The equivalent noise charge caused by this feedback resistor is 25 electrons and the equivalent noise charge caused by the series resistor is 55 electrons.
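The parallel-resistor figure can be checked with a few lines of code; a temperature of T = 300 K is assumed here, which reproduces the quoted 25 electrons:

```python
import math

# ENC of the 100 Mohm parallel (feedback) resistor in the example above,
# from the first term of Eq. (8.7), with a1 = 1 and T = 300 K assumed.
k = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0          # assumed temperature [K]
e = 1.602e-19      # electron charge [C]
R_p = 100e6        # parallel (feedback) resistor [ohm]
tau = 100e-9       # risetime [s]
a1 = 1.0

enc_p = math.sqrt(4.0 * k * T * tau * a1 / R_p) / e
print(round(enc_p))   # ~25 electrons, as quoted in the text
```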

For the sake of definiteness let us assume that the amplifier has a bandwidth of 10 MHz. One can consider that we have 10⁷ independent noise amplitude samples per second. (This is not quite exact but close enough for the sake of the argument.) If no true signal is present, the samples contain only noise and this noise has a Gaussian distribution with average value zero and an r.m.s. equal to 1 ENC. A sample from a Gaussian distribution has a 15% chance of exceeding the average value by one standard deviation. If we use a discriminator set at a threshold corresponding to 1 ENC, it will be triggered almost 10⁶ times per second. A signal of 1 ENC will be completely lost in the noise. If we set a threshold corresponding to 6 ENC charges, the probability that a noise fluctuation produces such an event is about 10⁻⁷. Hence the noise will fake a true signal pulse about once per second. As a rule of thumb, we can say that true events should have a charge of at least about 10 ENC to be comfortably visible above the noise.

8.5.2 Noise Due to the First Transistor

We now turn to the calculation of the noise of the amplifier itself. The front-end part of a typical low-noise amplifier is shown in Fig. 8.28. The conducting channel in the first transistor is a resistor, and the value of this resistor is modulated by the input voltage of the amplifier. This resistor will give rise to noise. This first transistor is very often a ‘field effect transistor’ (FET). The reasons for this and the relative merits of FET transistors compared to bipolar transistors will become clear later. For the time being, let us consider the case of an FET transistor.

Fig. 8.28
figure 28

First transistor in a detector for nuclear electronics. The capacitance C t in this figure represents the capacitance of the transistor itself. The noise of the source-to-drain channel is equivalent to a noise source in parallel with the detector

The internal structure of an FET transistor is shown in Fig. 8.29(a). The two regions of n-type silicon are called the source and the drain. There is a gate electrode separated from the surface of the p-type silicon by a thin silicon oxide insulating layer. If no voltage is present on the gate, no current will flow between the source and the drain because there is always an n–p junction preventing this current flow. If the gate is brought to a positive voltage of a few volts, the electric field will open up a conducting channel between the source and the drain and a current will flow. The intensity of the current will depend on the gate voltage. The transconductance of the FET is defined as

$$g_m (V_g ) = \dfrac{{dI_d }}{{dV_g }}$$
Fig. 8.29
figure 29

(a) Physical layout of an FET transistor, (b) Highly schematic representation of the same

In this equation, V g and I d are the gate voltage and the source-to-drain current, respectively. The FET behaves like a resistor in the source-to-drain channel. Notice that the source-to-drain resistor, R sd, in the transistor does not behave like a normal resistor and we will use the expression ‘effective noise resistance’ for it. There is also a small capacitance C t between the gate and the source. This capacitance is in parallel with the detector capacitance and therefore should be added to it. This transistor capacitance may seem a minor complication, but it will turn out that this capacitance plays an essential role in determining the noise and we have therefore included it explicitly in our calculations.

The resistor R sd will cause an average square noise current in the source-to-drain channel given by

$$\left\langle {I_{\rm noise}^2 } \right\rangle = \dfrac{{4kT}}{{R_{sd} }} \int\limits_0^\infty {\left| {H(\omega )} \right|^2 d\omega }$$
((8.8))

To find the ENC, we need to calculate what hypothetical noise source in parallel with the detector will cause a current in the source-to-drain channel identical to Eq. (8.8). The noise will be due to an apparent noise voltage at the gate given by

$$\langle V_g^2 \rangle = \dfrac{{4kT}}{{g_m^2 R_{sd} }}\int\limits_0^{ + \infty } {\left| {H(\omega )} \right|^2 \,\,d\omega }$$

Following the same argument as used when calculating the noise due to a series resistor, we conclude that this gate voltage corresponds to an apparent noise current source in parallel with the detector and with a noise current given by

$$\left\langle {I_{\rm noise}^2 } \right\rangle = (C_d + C_t )^2 \dfrac{{4kT}}{{g_m^2 R_{sd} }} \int\limits_0^\infty {\omega ^2 \left| {H(\omega )} \right|^2 d\omega } $$
((8.9))

The noise caused by the resistance of the source-to-drain channel is therefore equivalent to an apparent noise current source in parallel with the detector with a noise spectrum given by Eq. (8.9). The corresponding equivalent noise charge is therefore

$$\begin{array}{l} ENC^2 = \dfrac{{4kT}}{{R_{sd} }}\dfrac{{(C_d + C_t )^2 }}{{g_m^2 }} \dfrac{1}{{h_{\max }^2 }} \displaystyle\int\limits_0^\infty {\omega ^2 } |H(\omega )|^2 d\omega \\ \\ ENC^2 = \dfrac{{4kT}}{{R_{sd} }}\dfrac{{(C_d + C_t )^2 }}{{g_m^2 }} \dfrac{{a_2 }}{\tau } \\ \end{array}$$

We see that this noise depends on the temperature and the shaping time exactly in the same way as if it were a resistor in series.

The structure of an FET is shown in Fig. 8.29. It is a narrow strip that can be made arbitrarily long. Obviously, the transconductance of the FET is proportional to the length of this strip. It therefore seems that we can make this noise as small as we wish by making the strip sufficiently long and therefore the transconductance sufficiently large. It is indeed possible to make the transconductance very large, but in doing so we will also make the capacitance C t very large. The FET capacitance C t is proportional to the transconductance, as will be shown below.

Consider the volume indicated by the box in Fig. 8.29(b).

Applying Maxwell’s equation to this volume we can write

$$\int {Dds = \int {\rho dv} } = 0$$

Since hardly any field lines leave this volume, the surface integral of the displacement field is zero, and therefore the total charge inside this volume must be zero. We can only change the number of charges in the conduction channel of the FET by changing the number of charges in the gate, and we have

$$dQ_{\rm channel} = dQ_{\rm gate} = C_t dV_g$$

But the source-to-drain current is related to the number of charges in the conduction channel and the transit time of these charges by

$$I_d = \dfrac{{Q_{\rm channel} }}{{t_{\rm transit}^{} }}$$

Therefore, any change in the source-to-drain current is caused by a change in the number of charges in the channel and we have

$$dI_d = \dfrac{{dQ_{\rm gate} }}{{t_{\rm transit} }} = \dfrac{{C_t \,dV_g }}{{t_{\rm transit} }}$$

We therefore have the following relation between the transconductance g m, the capacitance C t and the transit time of the charges through the FET channel t transit:

$$\dfrac{{dI_d }}{{dV_g }} = g_m = \dfrac{{C_t }}{{t_{\rm transit}^{} }}$$

Furthermore, it is possible to show that (see Sect. 7.2.4 in [2] in Chap. 5)

$$R_{sd} = \dfrac{3}{2}\,\dfrac{1}{{g_m }}$$

We can now express g m and R sd as functions of the FET capacitance C t and insert this into the expression for the ENC above

$$ENC^2 = \dfrac{8}{3}kT \dfrac{{\left( {C_d + C_t } \right)^2 }}{{C_t }} t_{\rm transit} \dfrac{{a_2 }}{\tau }$$

This noise is minimised if we choose the transconductance of the FET such that C t = C d. This can be done by taking an FET with a strip of the correct length.
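The minimisation can be made explicit by treating C t as the free parameter in the expression above:

```latex
\frac{d}{dC_t}\left[\frac{(C_d + C_t)^2}{C_t}\right]
  = \frac{d}{dC_t}\left[\frac{C_d^2}{C_t} + 2C_d + C_t\right]
  = 1 - \frac{C_d^2}{C_t^2} = 0
  \quad\Longrightarrow\quad C_t = C_d
```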

For such an optimised FET the noise can hence be written as

$$ENC^2 = \dfrac{8}{3}\,4kT C_d\,a_2\,\dfrac{{t_{\rm transit} }}{\tau }$$
((8.10))

The above equation represents the noise due to the front-end FET transistor in the amplifier. This equation only holds if this transistor is matched to the capacitance of the detector. In this equation τ represents the rise time of the pulse and t transit the transit time of the charges through the FET channel. This transit time is of the order of 0.25 ns in typical modern FET transistors. We see from the above equation that for an optimised amplifier design the noise is proportional to the square root of the detector capacitance. While the noise caused by real resistances in the system can, in principle, be made arbitrarily small, the noise caused by the FET itself is unavoidable. Equation (8.10), therefore, represents a true lowest possible value for the noise.

A numerical example is instructive: assume a detector with a capacitance of 10 pF, a rise time of the signal of 250 ns and a FET with a transit time of 0.25 ns. From the equation above we have ENC FET = 126 electrons. This is a true lower limit on the noise that can be reached. Carefully designed amplifiers can reach a noise level that is close to this theoretical lower limit, but more often the noise will be several times larger. This means that a signal should be at least several thousand electrons in order to be visible.
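As a quick numerical check of Eq. (8.10) for these example values (T = 300 K is assumed here, which is why the result comes out a few per cent above the quoted 126 electrons):

```python
import math

# Minimum ENC from a matched input FET, Eq. (8.10), for the example values.
# T = 300 K is an assumption; the exact result depends on the temperature used.
k = 1.380649e-23    # Boltzmann constant [J/K]
T = 300.0           # assumed temperature [K]
e = 1.602e-19       # electron charge [C]
C_d = 10e-12        # detector capacitance [F]
tau = 250e-9        # rise time of the signal [s]
t_transit = 0.25e-9 # FET channel transit time [s]
a2 = 1.0

enc2 = (8.0 / 3.0) * 4.0 * k * T * C_d * a2 * t_transit / tau   # [C^2]
enc = math.sqrt(enc2) / e
print(round(enc))   # ~130 electrons
```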

The noise expression above only holds if, for each value of the detector capacitance, one uses an input FET that is matched to this capacitance. For a given amplifier, the noise is independent of the detector capacitance as long as this capacitance is smaller than the FET capacitance, and increases proportionally with the detector capacitance once it is larger than the transistor capacitance.

8.6 Shot Noise

An electrical current is a flow of discrete electric charges and not a smooth flow. Therefore any current is unavoidably affected by fluctuations associated with the random arrival of these charges. This noise is called shot noise.

For the sake of definiteness, let us consider a simple vacuum photodiode. It consists of an evacuated glass tube with a photocathode on one side of the glass and an electrode facing the photocathode on the other side. A potential difference between the photocathode and the electrode ensures collection of the photoelectrons by the electrode (see Fig. 8.30(a)). If a light source illuminates the photocathode, we will measure a photocurrent. Because of the nature of this set-up, the electrons are emitted at random times, and the number emitted in any fixed time interval follows a Poisson distribution.

Fig. 8.30
figure 30

A vacuum photodiode (a) and its photocurrent (b)

We furthermore assume that the signal produced by each electron is a square pulse of duration Δt and amplitude i = Q/Δt, where Q is the charge of the electron. If a weak light source illuminates the photocathode, the photocurrent signal will look like the signal in Fig. 8.30(b). If at any instant in time we measure the current, we will observe a current equal to n i (Q/Δt), where n i is an integer. In other words, we always see an integer number of electrons. This integer equals the number of electrons that arrived within a given time window of duration Δt. This n i is a random variable with average value <n i>. At any instant in time we have

$$I = n_i \dfrac{Q}{{\Delta t}}$$

The time averaged values of I and n are therefore related by

$$\langle I\rangle = \langle n_i \rangle \dfrac{Q}{{\Delta t}}$$

And the r.m.s. dispersions on the current I and the number of electrons n i are related by

$$ \left\langle {\left( {I - \left\langle I \right\rangle } \right)^2 } \right\rangle = \left\langle {\left( {n_i - \left\langle {n_i } \right\rangle } \right)^2 } \right\rangle \left( {\dfrac{Q}{{\Delta t}}} \right)^2 $$

The random variable n i has a Poisson distribution, hence the r.m.s. dispersion of this variable equals the square root of its average value

$$ \left\langle {\left( {I - \left\langle I \right\rangle } \right)^2 } \right\rangle = \left\langle {n_i } \right\rangle \left( {\dfrac{Q}{{\Delta t}}} \right)^2 = \dfrac{{\left\langle I \right\rangle \Delta t}}{Q}\, \left( {\dfrac{Q}{{\Delta t}}} \right)^2 = \dfrac{{\left\langle I \right\rangle Q}}{{\Delta t}} $$

This can be seen as a noise current I noise with average value zero, superimposed on steady current <I>. The r.m.s. noise current due to shot noise is therefore given by

$$\left\langle {I_{\rm noise}^2 } \right\rangle = \dfrac{{\left\langle I \right\rangle Q}}{{\Delta t}}$$
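The result ⟨I²noise⟩ = ⟨I⟩Q/Δt can be verified with a small Monte Carlo simulation; the mean count per window below is an arbitrary illustrative value:

```python
import numpy as np

# Monte Carlo check of the shot-noise formula <I_noise^2> = <I> Q / dt:
# Poisson-distributed electron counts in windows of duration dt.
rng = np.random.default_rng(0)
Q = 1.602e-19          # electron charge [C]
dt = 1e-8              # window duration Delta t [s] (assumed)
lam = 1e4              # mean number of electrons per window (assumed)

n = rng.poisson(lam, size=1_000_000)   # electrons counted in each window
I = n * (Q / dt)                       # instantaneous current samples [A]

ratio = I.var() / (I.mean() * Q / dt)  # should be ~1 if the formula holds
print(ratio)
```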

We have derived an expression for the noise current in the special case where the pulse corresponding to one electron is a square pulse. To generalise this result to an arbitrary impulse response h(t) we will use a theorem about random sums of random variables. It can be found in a good textbook on statistics. See also Exercise 3.

Theorem: Let x i be independent random variables, all with the same probability distribution. Consider the random variable \(S = \sum\limits_{i = 1}^{n} {x_i } \), where the integer n is itself a random variable with a Poisson distribution with average value λ. The following relations hold:

$$\begin{array}{l} \left\langle S \right\rangle = \lambda \left\langle {x_i } \right\rangle \\ \sigma ^2 \{ S\} = \left\langle {\left( {S - \left\langle S \right\rangle } \right)^2 } \right\rangle = \lambda \left\langle {x_i^2 } \right\rangle \\ \end{array}$$
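A quick numerical check of this theorem (the exponential distribution chosen for the x i is an arbitrary illustrative choice):

```python
import numpy as np

# Monte Carlo check of the random-sum theorem: S = x_1 + ... + x_n with
# n ~ Poisson(lam) should have <S> = lam <x> and variance lam <x^2>.
# The x_i are taken exponential with mean 2, so <x> = 2 and <x^2> = 8.
rng = np.random.default_rng(1)
lam, n_trials = 50.0, 100_000

n = rng.poisson(lam, n_trials)             # number of terms in each sum
x = rng.exponential(2.0, n.sum())          # all the x_i, drawn at once
ids = np.repeat(np.arange(n_trials), n)    # which sum each x_i belongs to
S = np.bincount(ids, weights=x, minlength=n_trials)

print(S.mean())   # ~ lam * <x>   = 100
print(S.var())    # ~ lam * <x^2> = 400
```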
Fig. 8.31
figure 31

Response of the system to one electron

Consider a time interval T long compared to the shaping time of the pulse. Assume that there is one, and only one, charge in this time interval, as illustrated in Fig. 8.31. The impulse response corresponding to the arrival of this charge is \(Q\,h(t)\), with \(\int {h(t)\,dt = 1}\).

We have for the average current <I> and for the average square current <I 2> the following relations:

$$\begin{array}{l} \left\langle I \right\rangle = \dfrac{Q}{T}\displaystyle\int {h(t) dt = \dfrac{Q}{T}} \\ \\ \left\langle {I^2 } \right\rangle = \dfrac{{Q^2 }}{T}\displaystyle\int {h^2 (t) dt} \\ \end{array}$$

If there are on average λ charges in the time interval T and if the number of charges has a Poisson distribution, we thus have according to the above theorem:

$$\begin{array}{l} \left\langle I \right\rangle = \lambda \dfrac{Q}{T} \\ \\ \left\langle {(I - \left\langle I \right\rangle)^2 } \right\rangle = \lambda \dfrac{{Q^2 }}{T}\displaystyle\int {h^2 (t) dt} \\ \\ = \left\langle I \right\rangle Q \int {h^2 (t)\,dt} \\ \end{array}$$

Therefore the apparent noise current is given by

$$\left\langle {I_{\rm noise}^2 } \right\rangle = \left\langle {(I - \left\langle I \right\rangle )^2 } \right\rangle = \left\langle I \right\rangle Q \int {h^2 (t)\,dt}$$

As before, this can be written as an integral over the transfer function:

$$\left\langle {I_{\rm noise}^2 } \right\rangle = 2 \left\langle I \right\rangle Q \int\limits_0^\infty {|H(\omega )|^2\,d\omega }$$

Shot noise is also white noise! One verifies that the above expression reduces to the result derived previously in the particular case of a square shaping function.

Following the same arguments as before, we conclude that the ENC for shot noise is given by

$$\begin{array}{l} ENC^2 = 2 I Q \dfrac{1}{{2h_{\max }^2 }}\displaystyle\int\limits_{ - \infty }^{ + \infty } {h^2 (t)} \,dt \\ ENC^2 = 2 I Q \tau a_1\end{array} $$
((8.11))

where a 1 is given by Eq. (8.7).
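A sketch of Eq. (8.11) in numbers, for an assumed dark current of 1 nA and a shaping time of 100 ns:

```python
import math

# ENC from shot noise, Eq. (8.11); the dark current and shaping time
# are assumed, illustrative values.
e = 1.602e-19   # electron charge [C]
I = 1e-9        # dark current [A] (assumed)
tau = 100e-9    # shaping time [s] (assumed)
a1 = 1.0

enc = math.sqrt(2.0 * I * e * tau * a1) / e
print(round(enc))   # ~35 electrons
```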

Equation (8.11) does not apply to the resistive current in a normal resistor. The derivation assumes that the shaping time is large compared to the physical formation time of the pulse and for a normal resistor this assumption is usually not satisfied. Indeed, the formation time of the pulse due to one electron is the time it takes for the electron to travel from one electrode to the other and this time is very long in a normal resistor. The speed of electrons in a resistor is of the order of metres per hour! In addition, the assumption that the number of individual electrons has a Poisson distribution is not valid.

Any detector contributes to the noise in two ways: through the shot noise associated with the current through the detector and through the thermal noise of its resistance. The shot noise increases with the current through the detector; therefore, as the voltage over the detector increases, at some point the shot noise will exceed the thermal noise. This happens when the shot noise associated with the current through the detector (Eq. 8.11) exceeds the thermal noise associated with the resistance of the detector (first term in Eq. 8.7), i.e. when

$$\begin{array}{l} 2IQ\,\tau \,a_1 \, > \,\dfrac{{4kT}}{R}\,\tau a_1\\IR\, > \,\dfrac{{2kT}}{{Q}} \approx 50\,\textrm{mV} \\\end{array}$$

The quantity IR is the voltage over the detector. Therefore the shot noise will be larger than the thermal noise if the voltage over the detector is larger than 50 mV.

The voltage over nuclear detectors is always much larger than 50 mV. Therefore, the dominant noise contribution is the shot noise caused by the dark current, rather than the thermal noise due to the resistance of the detector.
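The 50 mV crossover follows directly from the constants (T = 300 K assumed):

```python
# Crossover voltage 2kT/Q below which thermal noise dominates over shot noise.
# T = 300 K is an assumption; at lower temperatures the crossover is lower.
k = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0          # assumed temperature [K]
e = 1.602e-19      # electron charge [C]

v_cross = 2.0 * k * T / e
print(v_cross)     # ~0.052 V, i.e. about 50 mV
```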

In the derivation above we have assumed that the charge quanta were equal to one electron charge. Often the detector has an internal multiplication mechanism and the charges arrive in multiples of the electron charge. In that case the charge Q to be used in the expression for the shot noise is not the electron charge but the electron charge multiplied by the multiplication factor. If all primary events have the same multiplication factor, all one needs to do is to use the correct charge Q in the above formula. However, often the multiplication factor varies from one event to the next. This gives rise to an additional noise called the excess noise, and is represented by a factor that is usually denoted by F. The concept of excess noise factor was introduced in Sect. 6.4. In this case the expression for the ENC becomes:

$$ENC^2 = 2 I Q \tau F a_1$$

Shot noise is usually regarded as something undesirable. However, shot noise can also be used to measure physical quantities; this technique is referred to in the literature as the Campbell measuring mode, after the person who developed it. It can, for example, be used when measuring the neutron flux with a proportional tube in the presence of a strong gamma ray background. In this case, the detector signal consists of small pulses of amplitude ‘q’ caused by gamma rays and of large pulses of amplitude ‘Q’ caused by neutrons, with Q >> q. If the pulse rate is not too high, one can reject the gamma ray-induced pulses using a discriminator with a sufficiently high threshold. If the event rate is very high this is no longer possible. This situation is often encountered when measuring neutron fluxes at nuclear reactors.

The total current I total of the detector is the sum of the current induced by the neutrons I n and the current induced by the gamma rays I γ. Therefore

$$I_n = I_{\rm total} - I_\gamma$$

The measured total current has to be corrected for a poorly known gamma ray-induced current. This gamma ray-induced current is sometimes much larger than the neutron-induced current, making the measurement essentially impossible.

If we measure the current noise rather than the current itself we are much less sensitive to the gamma ray-induced current. Indeed, this current noise is given by

$$\left\langle {I_{\rm noise}^2 } \right\rangle = (I_\gamma q + I_n Q) 2 \int\limits_0^{ + \infty } {|\hat H(\omega )|^2\,d\omega }$$

From this we can derive the following expression for the neutron-induced current:

$$I_n = \dfrac{{\langle I_{\rm noise}^2 \rangle }}{{2Q\displaystyle\int\limits_0^{ + \infty } {\left| {\hat H(\omega )} \right|^2 d\omega } }}\,\,\, - \,\,\left(I_\gamma \dfrac{q}{Q}\right)$$

We still need to correct for the gamma-induced current I γ, but this correction is now suppressed by the small factor q/Q, so we are much less sensitive to any uncertainty in this gamma ray-induced current.
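The suppression can be illustrated numerically; all values below are assumed for the sake of the example:

```python
# Illustrative comparison (all numbers assumed): how an uncertainty on the
# gamma-induced current propagates into the neutron current estimate, for
# direct current subtraction versus the Campbell (noise) mode.
I_n_true = 10e-9          # neutron-induced current [A]
I_gamma = 100e-9          # gamma-induced current [A]
dI_gamma = 0.5 * I_gamma  # assume the gamma current is known to only 50%
q_over_Q = 1e-3           # gamma pulse charge / neutron pulse charge

# Direct mode: I_n = I_total - I_gamma, the full gamma uncertainty leaks in.
err_direct = dI_gamma
# Campbell mode: the gamma correction enters suppressed by q/Q.
err_campbell = dI_gamma * q_over_Q

print(err_direct / I_n_true)    # 5.0   -> a 500% error on I_n
print(err_campbell / I_n_true)  # 0.005 -> a 0.5% error on I_n
```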

8.7 Summary and Conclusions

In a carefully optimised detector system the two largest noise sources are the shot noise due to the dark current of the detector and the thermal noise associated with the first transistor in the amplifier. Considering only those two terms the ENC can be written as

$$ENC^2 \ge a_2 \dfrac{8}{3}4kTC_d\dfrac{{t_{\rm transit} }}{\tau }\,+\,a_1\,2eI_d \tau$$

There is always some additional noise due to other imperfections in the amplifier or other parts of the electronics, therefore this equation is written as an inequality.

The shot noise is due to the dark current of the detector itself. It increases proportionally to the square root of the shaping time. The noise associated with the first transistor decreases inversely proportionally to the square root of the shaping time and increases proportionally to the square root of the detector capacitance. At a high rate one needs a short shaping time and this last term dominates the noise.
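Writing the two terms as ENC² = K₁/τ + K₂τ, the sum is minimised at τ_opt = √(K₁/K₂). A sketch with assumed detector values:

```python
import math

# Optimal shaping time from the two-term noise expression above.
# All detector values are illustrative assumptions.
k = 1.380649e-23    # Boltzmann constant [J/K]
T = 300.0           # assumed temperature [K]
e = 1.602e-19       # electron charge [C]
C_d = 10e-12        # detector capacitance [F] (assumed)
t_transit = 0.25e-9 # FET channel transit time [s]
I_dark = 1e-9       # detector dark current [A] (assumed)
a1 = a2 = 1.0

K1 = a2 * (8.0 / 3.0) * 4.0 * k * T * C_d * t_transit  # transistor term [C^2 s]
K2 = a1 * 2.0 * e * I_dark                             # shot-noise term [C^2 / s]
tau_opt = math.sqrt(K1 / K2)                           # minimises K1/tau + K2*tau
enc_opt = math.sqrt(2.0 * math.sqrt(K1 * K2)) / e      # ENC at the optimum

print(tau_opt)         # ~0.6 microseconds
print(round(enc_opt))  # ~120 electrons
```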

Figure 8.32 gives an overview of the different noise contributions in a detector as a function of the shaping time. The solid lines in this figure give the noise contribution from the first transistor in the amplifier for different values of the detector capacitance. The detector capacitance is, of course, related to the size of the detector. For detectors in nuclear and particle physics the capacitance ranges from well below 1 pF in some pixel detectors to several μF for calorimeters used in high-energy physics experiments.

Fig. 8.32
figure 32

Typical values for the different contributions to the noise in an amplifier for particle detection. The solid lines represent the noise caused by the first transistor for different values of the capacitance of the detector. A number of other contributions to the noise are also shown. Figure from [1]

Figure 8.32 also shows the shot noise of a hypothetical detector with a dark current of 1 nA. The shot noise associated with the FET leakage current is also shown. This leakage current has the same shot noise effect as the dark current in the detector. We also show the noise caused by the base current of the bipolar transistor, in case one uses a bipolar transistor instead of a FET. The plot makes it clear why it is usually preferable to use a FET. The shot noise associated with an FET is orders of magnitude smaller than the shot noise of a bipolar transistor. However, if the detector capacitance is very large, or the shaping time very short, other noise contributions become dominant and it is preferable to use a bipolar transistor because this is simpler and has a number of other advantages.

If the shaping time is of the order of 1 ms or longer, other noise sources become important. One important type of noise is the 1/f or flicker noise. The term ‘1/f’ stands for ‘one over the frequency’, and this type of noise can have many causes. In an amplifier, both the capacitors and the resistors contribute to the 1/f noise.

A capacitor without a dielectric is noiseless, but practical capacitors have a dielectric medium between the plates. The polarisation in this dielectric is not a perfectly smooth process, and the effect is the induction of some 1/f noise. Resistors also suffer from fluctuations in their resistance, generating an additional noise proportional to the current flowing through them. The magnitude of this noise depends on details of the construction of the resistors, but it always has a 1/f frequency spectrum. Typical values, in μV per decade of frequency and per volt across the resistor, are:

  • carbon composition 0.01 – 3.0

  • carbon film 0.05 – 0.3

  • metal film 0.02 – 0.2

  • wire wound 0.01 – 0.2

In nuclear and particle physics one is rarely interested in a shaping time that is longer than 1 ms, hence these other noise sources are of little concern.

8.8 Exercises

  1.

    Derive property 6 of the transfer function in Sect. 8.2.

  2.

    Calculate the Fourier transform of Eq. (8.4).

  3.

    Prove the following theorem.

    Let x i be independent random variables, all with the same probability distribution. Consider the random variable

    $$R = \sum\limits_{i = 1}^{n} {x_i },$$

    where the integer n is itself a random variable with a Poisson distribution with average value λ. The following relations hold:

    $$\begin{array}{l} \left\langle R \right\rangle = \lambda \left\langle {x_i } \right\rangle \\ \sigma ^2 \{ R\} = \left\langle {\left( {R - \left\langle R \right\rangle } \right)^2 } \right\rangle = \lambda \left\langle {x_i^2 } \right\rangle \\ \end{array}$$
  4.

    Assume that you are measuring the noise voltage of a resistor using a digital oscilloscope with an input impedance of 10 MΩ and a bandwidth of 400 MHz. For what value of the resistance will you measure the largest noise? How large will this maximum noise be, in mV?

  5.

    Consider a silicon strip detector where each strip has a capacitance of 20 pF and a dark current of 20 nA. The rise time of the pulse is 30 ns. Give a lower limit for the noise. Take the shape coefficients a 1 = a 2 = 1.

  6.

    Prove that the stationary solutions for a transmission line of length D are given by the following equations

    $$\begin{array}{*{20}c} {\left\{ \begin{array}{l} I_n (x,\,t) = I_n \sin \left( {\dfrac{{n\pi x}}{D}} \right)\sin \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right) \\ V_n (x,\,t) = I_n Z_0 \cos \left( {\dfrac{{n\pi x}}{D}} \right)\cos \left( {\dfrac{{n\pi v_0 t}}{D} + \varphi _n } \right)\\ \\ \end{array} \right.} \hfill & {n = 1,..\infty } \hfill \\\end{array}$$