The triumph of ionic refraction facilitated an even subtler epistemic transformation in radio propagation studies. While the old atmospheric reflection had built predictions and explanations on the geometry and very few material parameters of a featureless, homogeneous Kennelly-Heaviside layer, ionic refraction resorted to a much more structured, heterogeneous, and hence interesting atmosphere. The skip zone and other short-wave irregularities resulted from the upper layer’s height, thickness, and electron-density profile or from the geomagnetic rotations of radio waves within it. This assertion had a flip side: the physical characteristics of the ionized atmosphere could account for various wave propagation phenomena, and those phenomena revealed the structure of the upper layer, too. The work of short-wave researchers in the early 1920s prepared the ground for a change of focus from the behaviour of radio waves using the upper atmosphere as an explanatory tool to the properties of the ionized atmospheric layer using radio waves as a probing instrument. Propagation studies were beginning to evolve into atmospheric physics. Yeang (2012, p. 146).

Pawsey’s initial foray into research was concerned with an area of applied science: investigations into how “atmospherics” (electrical disturbances in the atmosphere) and ionospheric turbulence affected radio communications. In this chapter we explore the general intellectual background to this science. We return to our account of Pawsey’s development in Chap. 6.

This chapter draws from research in the history and philosophy of science, and in particular a recent history of early ionospheric research by Chen-Pang Yeang (2012). We note in footnotes where analogies and connections exist to later events in radio astronomy. In this summary, we discuss the interplay between “pure” and “applied” science, and we explore what a history of science might look like if it paid particular attention to the instruments that scientists used. We are interested in where scientific ideas come from, and in how some ideas might occur simply because a scientist was familiar with certain sorts of instruments and not others, or because a scientist was primarily interested in improving an aspect of instrument design.

This story is relevant to Pawsey, who, in 1930, was beginning research that could lead to a career in industry as easily as in basic science. It is a story of how immediate, practical problem solving (such as how to obtain clearer reception of radio signals) generated broad, conceptual questions, such as how to understand radio wave propagation in a turbulent ionosphere. Conversely, an investigation of the structure or dynamics of the ionosphere, or studies of radio wave propagation, could and did unexpectedly address practical issues in radio communications.Footnote 1

The Beginnings of Radio

Radio communications research began in the mid-1890s: for context, this was not long after the very first Professors of Physics (at Sydney in 1887) and “Natural Philosophy” (at Melbourne in 1889) had arrived in Australia to set up their new departments, and just 14 years before Pawsey’s birth. The pace of change in science can be gauged by considering the difference of a single generation, from Guglielmo Marconi (creator of radio communications) to Joe Pawsey. In this generation, the world moved from a time when “science” was still very much the domain of wealthy “amateurs”, particularly in Australia (Moyal, 1986), to one where science was conducted by professional scientists in companies and universities.Footnote 2

Was being able to take more risks an advantage of the amateur era? If so, the origins of radio are a case in point. In 1894, Guglielmo Marconi (1874–1937), the 20-year-old son of minor Italian nobility, educated at home as was still common then, spent hours in his room trying to create “wireless telegraphy”; that is, trying to send telegraph messages without wires, using the recently discovered “Hertzian” (radio) waves. At that time, natural philosophers (physicists) considered that Hertzian waves would behave essentially like light waves, and the physicist Oliver Lodge (1851–1940) (Gregory & Ferguson, 1941; Wilson, 1971) had predicted a maximum transmission distance of half a mile. But by using a recently invented device, a coherer, whose resistance changed when exposed to radio waves, Marconi was able to build a wireless storm alarm: a device that received the radio waves generated by lightning and then transmitted a signal across his attic room to ring a small bell. Later (in 1895–6), working outdoors with a grounded transmitter and receiver and a taller monopole antenna, he transmitted radio waves over two miles, including over hills. From there it did not take long for Marconi to begin shipboard experiments (wired telegraphy was of course entirely useless for moving ships at sea) and to pursue, not research in physics, but a global radio communications company. In 1901 he famously transmitted a radio signal from Poldhu, Cornwall, in Great Britain, to Newfoundland, Canada (Fleming, 1937).

What mechanism could explain how a signal transmitted in Cornwall could be received in North America despite the curvature of the earth, which should have blocked a radio wave travelling in a straight line? Between 1902 and 1919, two different concepts were advanced to explain how transatlantic radio wave propagation could occur: surface diffraction, and atmospheric reflection (later, refraction). The first explained radio wave propagation around the earth as the result of multiple diffractions of radio waves over the edges of cliffs and other terrestrial features. The second explained transatlantic radio wave propagation as the result of reflection from a hypothesised layer in the upper atmosphere, i.e., what would later be called the ionosphere. The first (surface diffraction) remained the dominant focus of academic radio research for almost two decades, even though the second (atmospheric reflection) was intuitively accepted by most radio engineers from the very early years.

It is intriguing to consider why this difference in view between academic physicists and radio engineers persisted for so long. Following Yeang, we suggest that the reasons included the constraints of particular instruments, different preferred research styles, and the influence of different people (physicists and engineers) and institutions. These factors would arise again later in radar research and radio astronomy.

1902–1925: Surface Diffraction—A Productive Research Program Based on an Incorrect Premise

Why did surface diffraction remain of interest, when it could not explain radio propagation phenomena well known to annoyed radio operators, such as fading, static and diurnal variations in signal? That surface diffraction persisted as a research program reflected, in part, the dominance of mathematical physics among the researchers who worked on it in Cambridge (and elsewhere in the UK), France and Germany.Footnote 3 They prized theory, suggests Yeang, not for the breadth of empirical information it could account for, but instead for its “elegance”, that is, its logical consistency and accessibility in form.

Researchers in these centres tended to investigate physical problems that could be formally represented, usually in terms of differential equations with boundary conditions that captured the physical circumstances of the problem (in this case transatlantic propagation and antenna directivity). They would then develop various mathematical techniques to solve them.Footnote 4 Thus, the focus of research soon became a mathematical question: “what was an accurate approximation of the diffracting field’s intensity above a large conducting sphere?” (That is, these researchers became less focused on directly answering the question of how long-distance radio wave propagation could be explained.)

The research program that resulted sought proper approximations of the diffracting field’s analytical form and debated the legitimacy of these approximations. Due to the lack of available instruments and infrastructure, the surface diffraction theorists had virtually no data against which to test their theories for more than a decade. But contrary to expectation, when empirical data became available and a formula was developed to express it (we tell this story below), the formula did not resolve the debates over their approximating theories, because the empirical regularity’s dependence on wavelength matched none of them. Mathematical research in surface diffraction continued anyway: new mathematical tools for dealing with approximations of diffraction series or integrals were being developed, of interest for their own sakeFootnote 5 (Yeang, 2012, p. 106).

1910–1919: The Austin-Cohen Formula: Discarding Anomalous Data

In the early 1910s, the US Navy was able to finance tall transmitting towers and receivers and to equip its ships with radio communications equipment. The Navy then began conducting propagation experiments to test how well the equipment worked. As a result, the first empirical data that could be used to test ideas about radio wave propagation became available. Two engineers, Louis Austin and Louis Cohen, developed a formula in 1910–1911 that could serve as a useful approximation to the measured values recorded in these experiments. They measured a well-defined characteristic of transmitters (antenna current) and aimed to represent it through a simple mathematical formula that fitted within the framework of surface diffraction, the dominant theory of the time. In the process, Austin had to decide what to do with the nighttime data. Since radio signals often behaved differently between day and night (for instance, travelling much farther at night), the nighttime measurements were too variable to fit his calculations. He simply discarded them. He faced a similar question when the formula consistently produced values that were too high for distances of more than 200 km. Austin’s decision was again to discard the anomalous data, invoking the assumption that the discrepancy resulted from the atmospheric absorption of energy, an assumption that fitted with simple absorption laws elsewhere in physics.

These kinds of judgements, made in order to resolve apparent anomalies, look incorrect in hindsight; the anomalies were shortly to be explained by the features of the ionosphere.Footnote 6 But at the time the formula, and the models it was designed to fit, were convincing, because they were coherent with the knowledge of the day, and they continued to produce useful research. By discarding anomalous data, Austin and Cohen were able to develop a formula that related transmitting-antenna current, the heights of the transmitting and receiving antennas, distance, and wavelength, building on a previous long-distance transmission formula. It seemed to “work”. The Austin-Cohen formula was enormously useful to scientists: it provided the only quantitative, empirical basis for understanding long-distance propagation at that time. From then on, researchers focused on whether their predicted numerical results fitted the formula, rather than on whether their theories fitted physical intuition, or wireless engineers’ knowledge of how their instruments functioned.

The Austin-Cohen formula was compelling to wireless engineers, too, since, as Yeang remarks, what physicists saw as a law for testing mathematical theory, engineers viewed as a reliable design rule: it stipulated the quantitative dependence of incoming signal strength on antenna height, distance, transmitter power, and wavelength, meaning that engineers could design antennas to guarantee a minimum signal level over a given distance. As a result, the Austin-Cohen formula had huge engineering consequences. Because it predicted longer propagation distances at longer wavelengths, the builders of long-range wireless stations lowered their operating frequencies as much as possible and erected giant antenna towers to make their signals reach wider areas.
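For the reader who wants the shape of the rule, one commonly quoted form of the Austin-Cohen formula (our rendering; the numerical constants vary slightly between sources) gives the received antenna current $I_R$ in terms of the sent current $I_S$, the antenna heights $h_S$ and $h_R$, the distance $d$ and the wavelength $\lambda$, with heights, distance and wavelength in kilometres:

$$ I_R \;=\; 4.25\,\frac{I_S\, h_S\, h_R}{\lambda\, d}\; e^{-0.0015\, d/\sqrt{\lambda}} $$

The exponential term is Austin’s absorption correction. Because its exponent falls off as $1/\sqrt{\lambda}$, longer wavelengths suffer less attenuation, which is exactly why station builders pushed their operating frequencies down.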

The irony was that this paradigmFootnote 7 became obsolete as soon as it consolidated. The principal reason was that by the end of WWI, radio amateurs and engineers had found that short waves (i.e., 1.7–30 MHz, or 10–180 m) could also propagate over very long distances, and with only moderate transmitting power.

Hypothesising an “Ionosphere”

From the early twentieth century, wireless operators were preoccupied with several phenomena that significantly affected radio communications. One was the fact that the maximum distance a signal could travel varied between night and day, signals travelling much farther at night. Another was “static” (clicks, grinding sounds, hissing noises), which often interfered with incoming transmissions. Static was a more serious problem at night, during summer, and at low latitudes. Radio operators also observed a strong but unexplained association between static and storms and other meteorological events (Yeang, 2012, p. 85), an issue that was to become the topic of Pawsey’s Masters research.Footnote 8 A third issue was that Marconi’s antennas needed to be tilted to achieve optimal transmission.

Surface diffraction explained none of these facts. But all of them, together with the phenomenon of long distance radio wave propagation around a curved earth itself, could be explained intuitively by the idea that radio waves were reflected back to earth from the atmosphere. As a result, the concept of a reflecting “layer” gained wide acceptance among wireless engineers early in the twentieth century, even as mathematical physicists were developing theory for surface diffraction.

The Idea of Atmospheric Reflection, 1902

The history of research into this layer, which came to be termed the “ionosphere” after 1930, repeatedly illustrates how “pure” scientific investigation of its structure and characteristics shaped, and was shaped by, practical concerns with improving electrical and communications technologies. Even the very concept of the ionosphere had its origins in technological development. The concept of a radio-wave-reflecting layer in the atmosphere was first published in 1902, separately, by the electrical engineer Arthur Kennelly (1861–1939), then working in the United States, and the British former telegraph operator Oliver Heaviside (1850–1925). Heaviside was a practical man who invented and patented the coaxial cable. He also taught himself James Clerk Maxwell’s 20 equations, and then, to make them available for practical use, simplified them down to the four commonly used today (Buchwald, 1985). He used these equations to predict the existence of an ionised layer in Earth’s upper atmosphere (Nahin, 1987). In his model, the earth and atmosphere were conceived rather like a large-scale coaxial cable: a conductor with concentric boundaries formed by (1) the earth and (2) a hypothesised upper-atmosphere layer. Radio waves might propagate over long distances by reflecting from these boundariesFootnote 9 (Yeang, 2012, p. 88).

Of course much radio wave behaviour did not fit this simple model. In 1912 William Eccles (1875–1966) hypothesised that the layer formed when sunlight (radiation) broke apart molecules in the upper atmosphere and produced free ions and massive neutral particles, and that radio waves were refracted by these ions rather than simply reflected from a layer (Ratcliffe, 1971). This provided a plausible explanation for diurnal and seasonal variations in signal transmission, and for why optimal transmission was achieved only with tilted aerials.Footnote 10

Investigations into radio wave propagation involving the ionosphere, which became investigations into the characteristics of the ionosphere itself, formed the context for Pawsey’s initial research. It is useful to note that the potential for using radio-wave interference to find the direction to sources of radio emission was obvious to many who had been involved in ionospheric research.

Direction-Finding Equipment and the Existence of the Ionosphere

The first empirical evidence for the hypothesised layer was found by a man who was both a Cambridge-trained theoretical physicist and a London-trained engineer, T.L. Eckersley (1886–1959). Pawsey’s research style would similarly combine engineering skill with theoretical insight. In order to give the reader a sense of the equipment then in use, of the difficulties that scientists and engineers were only just beginning to understand, and of how ideas emerged from tinkering with equipment, we now briefly recount what Eckersley did.

A popular early wireless direction-finder was the rotating loop antenna, or “frame aerial” (the Bellini-Tosi system), which determined a radio wave’s direction of propagation by rotating the vertical receiving loop around the vertical axis until the detected signal strength was at a minimum. However, this and other early direction-finding systems had many problems, including direction errors of up to 40° and readings that became erratic at sunset and fluctuated through the night. These night difficulties persisted despite the rapid improvements in antenna loops, goniometers (devices for the precise measurement of angles), rotating mechanisms, and tube amplifiers generated by WWI. In the latter part of the war (1916–17), Eckersley, then stationed in the Mediterranean, set out to improve the equipment by demonstrating that the observed errors were not generated internally, but by interference from waves returning (reflected, refracted or diffracted) from the sky.Footnote 11 He designed three experiments to disentangle the mixed polarisation of “ground” and “sky” waves, basing them all on designs for direction-finders. When the results of the three experiments were compared, they showed that sky waves were present and were the cause of the observed errors; by corollary, they indicated, but did not prove, that the hypothesised Kennelly-Heaviside layer must be real and could be studied by measuring polarisation. This “pure” research also indicated how direction-finding devices could be improved: by designing them to cope with “sky waves”.
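A brief gloss on why the minimum, rather than the maximum, was used (standard antenna physics, our summary rather than Yeang’s): the voltage induced in a vertical loop by a vertically polarised ground wave varies with the angle $\theta$ between the plane of the loop and the direction of arrival as

$$ V(\theta) \propto \cos\theta, $$

which changes most rapidly near the null at $\theta = 90^\circ$ (since $|dV/d\theta| \propto \sin\theta$ is largest there), so a bearing taken on the null is far sharper than one taken on the broad maximum. A downcoming sky wave of abnormal polarisation induces a voltage that does not share this null; it “fills in” the null and drags the apparent bearing away from the true one, which is just the kind of error Eckersley traced to the sky.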

Thinking with Equipment: Adapting Direction-Finders to Investigate “Sky Waves”

If Eckersley’s experience had demonstrated that searching for sky waves could improve direction-finders, National Physical Laboratory (UK) engineer Reginald Smith-Rose (1894–1980), with his assistant R. Barfield, found that looking for improvements to direction-finders could also produce evidence for sky waves (Oatley, 2004). In 1925 (after Appleton’s famous confirming experiment, below) they experimented with the Adcock direction-finding system, which determined bearings using phase detection instead of signal strength; Pawsey would later use this instrument as well.Footnote 12 The new design kept direction errors within 1°, a previously unthinkable improvement in direction-finding accuracy compared with Eckersley’s wartime work 8 years earlier. It also confirmed that the major source of errors in loop-type direction-finders was interference from sky waves, since these were eliminated in their adapted Adcock system (Yeang, 2012, pp. 212–214).

While this provided additional evidence for the existence of sky waves, it indicated nothing about the characteristics of the Kennelly-Heaviside layer, whose existence had been confirmed the year before.

Sir Edward Appleton, the Frequency-Change Method and the Magneto-Ionic Theory of the “Ionosphere”, 1924

Conclusive evidence for the existence of an ionosphere was famously provided in December 1924 by E.V. (later Sir Edward) Appleton (1892–1965) and his colleague Miles Barnett (1901–1979, originally from New Zealand) (Gabites, 2000). Appleton was eventually awarded a Nobel Prize for this work. This research emerged from, and was made possible by, the start and rapid expansion of commercial radio broadcasting in 1922, which made powerful continuous transmitters available for the first time.Footnote 13 Appleton was sponsored by the (newly formed) British Radio Research Board to investigate fading. By this time the idea that fading resulted from interference between ground and sky waves was widely accepted. But there was still no direct experimental evidence for the existence of the Kennelly-Heaviside layer, for its hypothesised cause (ionisation from solar radiation), or for its characteristics (for example, that it would show variations in height and ion density, which would in turn cause variations in radio wave propagation).

This direct evidence was supplied by Appleton and Barnett through an innovative method of artificial fading. The BBC allowed them to use its Bournemouth transmitter. “The method adopted has been to vary the frequency of the transmitter continuously through a small range and attempt to detect the interference phenomena so produced between the two rays” (Appleton & Barnett, 1925). This method was known thereafter as the “swept frequency” or “Appleton frequency-change” method. As the frequency is scanned, the difference in distance travelled by the “ground” and “sky” waves, measured in numbers of wavelengths, changes, so the combined signal cycles through periods of cancellation (fading) and reinforcement. When all the frequencies used are combined, there is only one delay for which they all reinforce.
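In modern notation (our reconstruction, not Appleton and Barnett’s own algebra): if the sky wave travels an extra path $D$ relative to the ground wave, the two rays reinforce whenever that path difference is a whole number of wavelengths,

$$ \frac{f\,D}{c} = n, \qquad n = 0, 1, 2, \ldots $$

so sweeping the frequency through a small range $\delta f$ carries the combined signal through $N = D\,\delta f/c$ maxima (“fringes”). Counting fringes therefore measures the path difference directly: $D = N c/\delta f$.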

This frequency-scanning interferometer provided direct evidence for the existence of the ionosphere, and additionally a direct measurement of its height. As Appleton and Barnett wrote in 1925:

These effects may be explained in a general way if an atmospheric reflecting layer is postulated which is comparatively ineffective for the waves of this frequency during the daytime but bends them down very markedly at night. According to this view two rays arrived at the receiver at night, one nearly along the ground, which may be called the direct ray and the other returned from the atmosphere, and called the indirect ray … If we assume the simplest interpretation of these interference phenomena and regard them as analogous to those of a Lloyd’s mirror fringe system, [our emphasis] the effects may be viewed as follows … The experimental observations … indicate a path difference of order 80 kilometres, which is consistent with a reflecting layer of about 85 kilometres …

Thus the 1925 Appleton and Barnett set-up can be viewed as a precursor of the sea-cliff interferometer of 1946 (Chap. 13).Footnote 14

Ratcliffe (1974a, p. 2095) described Appleton’s scheme, with its slowly scanned monochromatic signal, as follows:

The first experiments [of Appleton & Barnett, 1925] were designed to be as simple as possible. A BBC [CW] transmitter, whose frequency could be slowly varied, was used after the end of the normal transmissions at midnight. Reception was at a distance where the ground- and sky-waves were expected to be roughly equal, the receiving apparatus was of the simplest type, and the signal variations were observed on an ordinary table galvanometer. The expected “fringes” were obtained and were counted to give a measure of the virtual height of reflection.

The height was determined using a simple equation based on two or more adjacent frequencies that produced maxima (or minima) in the fringe pattern; the two frequencies (or wavelengths) then yielded the virtual height of the reflecting layer.
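A hedged reconstruction of that equation: if adjacent fringe maxima occur at frequencies $f_n$ and $f_{n+1}$, then $D f_n/c = n$ and $D f_{n+1}/c = n + 1$, so the sky-minus-ground path difference is

$$ D = \frac{c}{f_{n+1} - f_n}. $$

With the transmitter at a known ground distance $d$, simple mirror geometry (a flat-earth approximation) then converts this path difference into the virtual height of the reflecting layer:

$$ h' = \frac{1}{2}\sqrt{(d + D)^2 - d^2}. $$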

Ratcliffe (1974a, p. 2096) went on to describe the pioneering 1925 results as interferometry, summarising the Appleton-Barnett frequency-change method in terms instantly recognisable to radio astronomers:

… [T]he strength of the wave received at a distance of about 100 km from a CW transmitter was observed while the frequency was slowly changed. The observed signal fluctuated between maxima and minima as the phase difference between the sky- and ground-waves altered, and, by analogy with similar optical phenomena, the fluctuations were called “fringes”. If the “amplitude” of the fringes was to be large, the sky-wave should be roughly equal to the ground-wave.
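As an illustration only, here is a minimal numerical sketch of the fringe mechanism Ratcliffe describes; the path difference, amplitudes and sweep range below are values we have assumed for the example, not the 1925 figures:

```python
import numpy as np

# Assumed, illustrative values (not Appleton and Barnett's actual figures)
c = 3.0e8                              # speed of light, m/s
D = 80e3                               # sky-minus-ground path difference, m (~80 km, cf. the quote above)
f = np.linspace(750e3, 770e3, 2001)    # slow frequency sweep near a broadcast band, Hz

# Ground and sky waves of roughly equal amplitude, as Ratcliffe notes
A_ground, A_sky = 1.0, 0.9
phase = 2 * np.pi * f * D / c                             # extra phase accumulated by the sky wave
combined = np.abs(A_ground + A_sky * np.exp(1j * phase))  # amplitude of the two rays added together

# The fringe spacing in frequency is c/D, so counting fringes over the sweep recovers D
print(f"fringe spacing: {c / D / 1e3:.2f} kHz")                # 3.75 kHz for D = 80 km
print(f"fringes in this sweep: {(f[-1] - f[0]) * D / c:.1f}")  # about 5.3
print(f"combined amplitude swings between {combined.min():.2f} and {combined.max():.2f}")
```

Counting roughly five maxima across this 20 kHz sweep and inverting $D = Nc/\delta f$ recovers the assumed 80 km path difference; this is, in miniature, what Appleton and Barnett read off their galvanometer.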

Appleton and Barnett’s empirical verification of the existence of the Kennelly-Heaviside layer (soon termed the “ionosphere”, a word invented by Sir Robert Watson-Watt in 1926 and widely taken up from the early 1930s; Gillmor, 1976) was of course quickly followed by a deepening understanding of its formation and properties, and consequently of radio wave propagation phenomena.Footnote 15 Its composition of course fluctuated diurnally, and its structure and characteristics, governed by the motions and collisions of charged particles and affected by the earth’s (fluctuating) magnetic field, quickly turned out to be much more complex than previously supposed. The ionosphere proved to be both layered (Appleton could distinguish not one but several “layers”) and turbulent. The “magneto-ionic” model of the ionosphere was dominant by the mid-1920s, and the “frequency change” method was established as an active experimental method for understanding it better.

Connections to Cambridge and London: How the Magneto-Ionic Paradigm Generated a Research Program in Australia, 1929–1939

Appleton’s research provided the context for Pawsey’s Masters and PhD projects. It was Appleton who found early empirical evidence connecting atmospherics with thunderstorms and other electrically excited weather processes, by working with a Cavendish Laboratory researcher who had been conducting cloud-chamber experiments to mimic thunderstorms, an example of the cross-fertilisation made possible by the Cavendish’s size and breadth. And of course the research of Appleton and his students and colleagues, who included the young J.A. “Jack” Ratcliffe (1902–1987), Pawsey’s PhD supervisor, created a new research program in ionospheric studies. Ionospheric physics grew exponentially from 1926 to 1938 with a doubling time of 3.2 years.Footnote 16

This research program strongly influenced the research program at the Radio Research Board in Australia, as discussed in Chap. 4. Twenty-four percent of Australian ionospheric researchers started their careers in the UK. For instance, Radio Research Board scientist A.L. Green had worked with Appleton and Ratcliffe on the polarisation of downcoming radio waves and found them to be elliptically polarised in the left-handed sense. Since magneto-ionic theory predicted that similar measurements made in the southern hemisphere would show right-handed polarisation, Green set up equipment at Jervis Bay shortly after his arrival in Australia to verify this prediction experimentally. In 1930 he was able to announce by telegram that (in his own words): “sky waves received from 2BL are approximately circularly polarised, as was the case in England, but that the sense of rotation is on the contrary right-handed … and it forms the final link in the chain of proof of the Eccles-Larmor-Appleton magneto-ionic theory” (Evans, 1973, p. 170).

We have already mentioned, in Chap. 4, how greatly David Martyn’s research during the 1930s was shaped by, and contributed to, the techniques and discoveries of this early period of ionospheric research, including identifying a layer between the E and F layers of the ionosphere (with Cherry), perfecting an adapted “pulse-phase” technique (see below) with Munro and Piddington, and providing the theory of the Luxembourg effect (with Bailey). In the period 1925–1960 Australia was ranked fourth in the world for research output in ionospheric physics, an achievement due significantly to Martyn.

An American Contribution: The “Pulse-Echo” Method for Ionospheric “Sounding”, 1925

In 1925, not long after Appleton and Barnett’s frequency-change method was devised,Footnote 17 the Americans Gregory Breit (1899–1981) and Merle Tuve (1901–1982) developed a mathematically equivalent “pulse-echo” technique. In this method, the researcher sent a train of pulsed waves skyward and used the time difference between the transmitted pulse and its reflected “echo” both to demonstrate the existence of the ionosphere and to measure its height.Footnote 18 The technique turned out to be superior in some respects, because all the frequencies in the short-duration pulse are present at the same time: no errors can be introduced by changes in the ionosphere during a frequency sweep, and multiple layers simply appear as multiple pulse echoes. Whereas the frequency-change method required a day for a trained expert to measure the height from an oscillogram, the pulse-echo device provided a compelling, immediate, visual reading of the height.Footnote 19 When Breit and Tuve found that the observed ionospheric “height” varied, and that what they measured was a “virtual” rather than a real height, new questions about the dynamics and structure of the ionosphere came into view.
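The arithmetic of the vertical-incidence case is immediate (our summary of the standard relation): an echo returning after a delay $\tau$ implies a virtual height

$$ h' = \frac{c\,\tau}{2}, $$

where $c$ is the free-space speed of light. Because the pulse actually travels more slowly than $c$ within the ionised region, $h'$ overestimates the true reflection height, which is precisely why the measured quantity is called a “virtual” height.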

We note here that despite the enormous success and wide uptake of the pulse-echo method for ionospheric sounding, with its simple, real-time visual display of signals and a new round of improvements to instrumentation aimed at automation (Yeang, 2012, calls this “mechanised objectivity”), no significant ionospheric research program developed in the USA after Breit and Tuve turned their attention to nuclear physics in the late 1920s. But that was not to be the end of the story. Merle Tuve would later take up research in radio astronomy at the Carnegie Institution of Washington, playing a role in the USA very similar to Pawsey’s in Australia. Indeed, Tuve became a very important colleague, friend and supporter of Pawsey’s, as we will see.

The storyFootnote 20 of early radio and ionospheric research lets us understand more about how scientists come to think about and approach research questions: they are influenced by their workplaces, by the norms governing how research questions are framed, by their experience with different kinds of instruments, and by their exposure to researchers in other areas. This offers us more insight into the intellectual context of ionospheric and radio research that Pawsey was about to enter, and in which he would form his own views on the nature of science.