
The quality of a country’s school system is a crucial determinant of its economic performance and hence of its capacity to enhance social welfare. A well-educated workforce enables a country to produce goods and services requiring highly skilled labor, and if the school system manages to greatly enhance the knowledge level of the most talented and motivated students, this enhancement is likely to promote innovation and technical change, which have been shown to be by far the most important components of economic growth.Footnote 1 However, a reasonably equal distribution of the resulting increases in real income cannot be achieved unless the school system also imparts a great deal of knowledge to students who are not academically gifted or who come from underprivileged circumstances.

The Swedish school system from the late nineteenth century to the early 1960s, the “silver age” of Swedish education, did very well on both counts. As suggested in the introductory chapter, this was the major reason why the country progressed from being one of the poorest in Europe in the mid-1800s to being one of the richest in the world, with the most equal income distribution of any country. We therefore see the quality of the educational system as the most important factor in the development of Sweden’s social welfare, both historically and in the future. Further support for this view is provided later in this chapter.

This view does not imply that we deny the crucial role played in economic growth by the quality of institutions such as the rule of law, stable property rights, and a high level of generalized trust, which is widely considered to be the prime explanation for cross-country differences in economic performance.Footnote 2 However, good institutions favoring innovation and development cannot simply be imposed on a country. Instead, they evolve as a result of a myriad of economic, political, and noneconomic private interactions among people and are more likely to evolve if the population is well-educated.Footnote 3 In Western-type democracies, the kind of institutions favored by the electorate will of course largely reflect the knowledge level of voters. Hence, in the longer term, if the knowledge level of voters deteriorates, it likely becomes more difficult to develop and maintain high-quality institutions.

Against this background, it is important to confront and analyze the decline in knowledge among Swedish students, as measured primarily by results in internationally comparable tests. We do so in this chapter, which is divided into three sections. We first account for why these tests are important and should be taken seriously. Second, we present the known facts about the deterioration of knowledge in Sweden’s educational system. Third, we draw on other researchers’ estimated effects of results in internationally comparable tests on economic performance to assess how the declining results have affected and will affect the Swedish economy.

We cannot report all aspects. Our main focus is the steep decline in knowledge levels during the first decade of the twenty-first century, but we also discuss the slight increase in results in the most recent international assessments and its likely causes. We focus on older students; the results for the youngest students are mentioned only in passing. The present chapter has a more technical character than some other parts of the book; however, it is crucial that we detail the problems in academic performance before we begin to discuss their causes. The lay reader may jump ahead to the chapter’s concluding section, which contains a summary of the most central aspects addressed here.

Test Results and National Economic Performance

What explains economic growth and, hence, social welfare at the country level? In the 1950s and 1960s, it was widely believed that the growth rate was determined largely by capital investment as a share of GDP. This view was so entrenched that arguably even the most influential economist in the entire postwar period, Paul A. Samuelson, predicted that the Soviet Union would overtake the U.S. in terms of GDP per capita.Footnote 4 However, over time, it became increasingly clear that the share of GDP used for capital investment could explain only a small share of per capita growth, perhaps as little as 10–20 percent. In the 1980s, the theoretical breakthrough in what became known as endogenous growth theory brought human capital to the fore.Footnote 5 Human capital is the stock of habits, knowledge, and social and personality attributes (including creativity) embodied in the ability to perform labor to produce economic value. In other words, it refers to the skills and personal traits that enhance a person’s ability to produce valuable goods and services.

A problem arose when policy- and opinion-makers began to equate human capital with formal schooling. As a result, governments and international organizations emphasized formal schooling as the means to achieve higher growth and raise real incomes, especially among those with the lowest relative incomes and the weakest labor market positions. Sweden was no exception. For example, in 1999, the Social Democratic government set a goal that “at least half of every age group should have begun studies at institutions of higher learning by 25 years of age.”Footnote 6 A few years earlier, in 1994, the previous distinction between vocational and college-preparatory programs in secondary school had been eliminated; all programs from then on made a student eligible for college or university entrance even if he or she was studying to be a car mechanic, electrician, or nursing assistant.Footnote 7

Large cross-country differences in the average scores in internationally comparable tests (see Box 3.1 for a presentation of the different tests)Footnote 8 among students with the same number of school years provide compelling evidence that it is, in fact, the quality of the educational system that is important. For example, in the 2011 TIMSS mathematics test for 8th graders, the average score was 611 in Singapore and 425 in Tunisia. This result implies that only one out of every 27 students in Tunisia performed above the Singaporean average.
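As a rough check on the scale of that gap, one can treat test scores as approximately normal. The sketch below is illustrative only: it assumes the scale's nominal standard deviation of 100 for Tunisia, whereas the one-in-27 figure in the text reflects the actual score distribution, so this back-of-envelope result differs slightly.

```python
from math import erf, sqrt

def share_above(threshold, mean, sd):
    """Share of a normal(mean, sd) score distribution lying above threshold."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail probability, 1 - Phi(z)

# Singapore's average (611) as the threshold; Tunisia's scores approximated
# as normal with mean 425 and the scale's nominal SD of 100 (an assumption:
# Tunisia's actual SD differs, which is why the text's figure is 1 in 27).
share = share_above(611, mean=425, sd=100)
print(f"tail share: {share:.3f}, i.e. roughly 1 in {1 / share:.0f}")
```

With these assumptions the tail share comes out near three percent, i.e., on the order of one student in 30, the same order of magnitude as the figure in the text.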

Box 3.1: Internationally Comparable Tests—An Overview

International assessments of knowledge and skills in certain subjects began to be developed in the early 1960s, with the aim of facilitating comparisons between countries as well as over time. Since the mid-1990s, there have been comparable tests for a large number of countries in mathematics and science, and, since the turn of the millennium, also tests to measure levels and trends in students’ reading comprehension.

There are currently two different organizations that largely test the same kinds of competencies. The International Association for the Evaluation of Educational Achievement (IEA) conducts the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS), and the International Civic and Citizenship Education Study (ICCS). The Organization for Economic Co-operation and Development (OECD) conducts the Programme for International Student Assessment (PISA) of mathematics, science, and reading, and since 2003 also tests problem solving.

TIMSS has been repeated every four years since 1995, PIRLS every five years since 2001, and PISA every three years since 2000. ICCS has so far been carried out only twice (2009 and 2016). TIMSS is conducted in grades 4 and 8, as well as at the secondary level (for a select group), PIRLS in grade 4, and ICCS in grade 8. PISA is conducted at age 15. In 2012, the OECD also started measuring the literacy and numeracy skills of the adult population in the Programme for the International Assessment of Adult Competencies (PIAAC).

This observation is in line with the empirical evidence. Studies focusing on the effect of the average number of years of formal schooling across countries have not been able to establish any robust effect on economic growth.Footnote 9 An effect, if any, can be found only among developing countries, but it may just as likely be due to reverse causality, i.e., that economic growth triggers more schooling.Footnote 10

In contrast, empirical research shows a strong positive relationship between the results of internationally comparable tests and economic growth. This is especially true for developed countries. Indeed, when the results from such tests are included in analyses aiming to explain economic growth, the estimated effect of the number of years of schooling invariably becomes nonsignificant.

Moreover, the estimated effect is large. The economists Eric Hanushek and Ludger Woessmann conducted a number of studies estimating the effect and its robustness that are synthesized in their 2015 book The Knowledge Capital of Nations. In their analysis of 50 countries during the period 1960–2000, they find that an increase of one standard deviation in the average test results in mathematics and science, i.e., 100 points, “is associated with approximately two percentage points higher annual growth in GDP per capita.”Footnote 11 The estimated effect for just OECD countries is only slightly smaller, 1.8–1.9 percentage points.Footnote 12
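To see what a two-percentage-point growth differential means over time, consider a simple compounding exercise. The baseline rates below (1.5 versus 3.5 percent annual growth) are our own illustrative assumptions, not figures from the study; only the two-point gap between them comes from the estimate quoted above.

```python
# Compound two hypothetical growth paths over 40 years. The 1.5% and 3.5%
# baselines are illustrative assumptions, chosen so that the gap between
# them equals the two-percentage-point effect quoted in the text.
years = 40
low = 1.015 ** years    # GDP per capita multiple at 1.5% annual growth
high = 1.035 ** years   # GDP per capita multiple at 3.5% annual growth
print(f"low: {low:.2f}x, high: {high:.2f}x, ratio: {high / low:.2f}")
```

Over four decades, the higher-growth path ends up with real income per capita more than twice that of the lower path, which is why seemingly modest coefficient estimates carry large welfare implications.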

One possible reason for the large estimated effect could be that results on cognitive tests are strongly correlated with institutional quality (e.g., rule of law, individual rights, and the quality of government) or even that institutional quality determines the quality of the educational system as measured in international tests. In that case, average test scores function merely as proxies for institutional quality, which, as already mentioned, has been found to be instrumental to long-term growth.Footnote 13 However, the effect remains when Hanushek and Woessmann control for the quality of institutions, although the estimated effect is reduced to approximately 1.3 percentage points.

Hence, the quality of the educational system, or, more precisely, its ability to impart knowledge to students, rather than the number of years of schooling per se is crucial for economic growth.

We should also point out that the function of a high-quality school system is not just to impart a great deal of knowledge to students but also to instill noncognitive skills, such as self-discipline, perseverance, reliability, and emotional maturity. Indeed, noncognitive skills have been shown to be important not only for productivity and other social outcomes at the individual levelFootnote 14 but also for economic growth.Footnote 15 International tests also capture such skills, suggesting a double-dividend effect of a high-quality school system: A school system that imparts a great deal of valuable knowledge to students cannot succeed in doing so without at the same time teaching them noncognitive skills, such as the ability to focus, diligence, and perseverance.

The Swedish economists Gabriel Heller Sahlgren and Henrik Jordahl updated Hanushek and Woessmann’s studies, adding results from the PISA and TIMSS tests in mathematics and science through 2015 to explain average GDP growth per capita in the 50 countries for the period 1960–2016.Footnote 16 Thus, this analysis includes the years after the financial crisis of 2007–8. Their results are presented in Fig. 3.1, which shows a strong positive relationship between the test results and economic growth when average years of schooling and initial GDP per capita are controlled for. The estimated effect is substantial: one standard deviation higher average test results—corresponding to 100 PISA or TIMSS points—is associated with an increase in the GDP growth rate of 1.3 percentage points. In contrast, the corresponding calculation between number of years of schooling (controlling for the effect of initial GDP per capita and test results) shows no effect, which is in line with earlier results.

Fig. 3.1

(Note: Added-variable plot showing the relationship between average growth in GDP per capita and average test score after removing the estimated effect of initial GDP per capita and average years of schooling. The values on the x- and y-axes thus indicate the difference between the actual results and what is projected by the two control variables. Source: Heller Sahlgren and Jordahl [2021])

The relationship between international test results and growth in GDP per capita, 1960–2016

Having documented the crucial importance of average test results for long-term economic performance, it is worth exploring whether a large share of high achievers matters more, or whether it makes more sense to push as large a share of students as possible above a minimum threshold. We follow Hanushek and Woessmann in defining the share of students who attain a basic minimum level as those who score at least 400 points—i.e., one standard deviation below the OECD average—in the PISA and TIMSS tests. The share of high-performing students is defined as those who attain at least 600 points—i.e., one standard deviation above the OECD average.

Based on the updated data by Heller Sahlgren and Jordahl (2021) used in Fig. 3.1, this analysis shows that both shares are important for economic growth, but the share of high-performing students is substantially more important. While an increase of 10 percentage points in the share of students who achieve basic skills is associated with an increase in the GDP growth rate of 0.18 percentage points, an equal increase in the share of high-performing students is associated with an increase of 0.87 percentage points. The latter effect is displayed graphically in Fig. 3.2.Footnote 17

Fig. 3.2

(Note: Added-variable plot showing the relationship between average growth in GDP per capita and the share of high-performing students after removing the estimated effect of initial GDP per capita, average years of schooling, and the share of students attaining a result of at least one standard deviation below the OECD mean (400 points). The values on the x- and y-axes thus indicate the difference between the actual results and what is projected by the three control variables. Source: Heller Sahlgren and Jordahl [2021])

The relationship between the share of high-performing students and growth in GDP per capita, 1960–2016
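The relative weight of the two margins can be made concrete with a little arithmetic on the coefficients quoted above (0.18 and 0.87 percentage points of annual growth per 10-percentage-point increase in the respective share); the 40-year compounding horizon is our own illustrative assumption.

```python
# Coefficients quoted in the text: effect on annual GDP per capita growth
# (in percentage points) of a 10-percentage-point increase in each share.
basic_skills_effect = 0.18
high_performer_effect = 0.87

ratio = high_performer_effect / basic_skills_effect  # ~4.8x

# Compounded over an illustrative 40-year horizon (our assumption), the
# high-performer effect alone raises the level of GDP per capita by ~40%.
level_gain = (1 + high_performer_effect / 100) ** 40

print(f"ratio: {ratio:.1f}, 40-year level gain: {level_gain:.2f}x")
```

On these numbers, a given increase in the high-performer share is nearly five times as growth-relevant as the same increase in the basic-skills share, and over a generation it compounds into a sizable difference in income levels.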

There are several reasons why the share of high-performing students is especially important to economic development and social welfare. High scorers are more likely to support growth-friendly policies and institutions and to hold politicians accountable for abuse and malfeasance.Footnote 18 A highly educated population is generally more likely to resolve disputes through negotiations and informed democratic decision-making than through violent conflict and coercion.Footnote 19 Other reasons why high scorers are particularly important are that they tend to save more, be more cooperative, be more innovative and more successful at using highly productive team-based technologies (e.g., open-heart surgery and corporate finance), and be more prone to imitate and adopt productive behaviors and solutions used by others.Footnote 20

In summary, there are significant cross-country differences in the test results, and those differences are associated with large variations in long-term growth rates in GDP per capita. The question of what factors drive such differences is one that we return to in later chapters; in a sense, it is the topic of the entire book. Next, we closely examine Sweden’s results.

Sweden’s Downward Trajectory

Since the 1960s, there has been intense debate about the quality of Swedish education. Views on how well the educational system works and whether the trend is upward or downward have differed widely. One reason for these diverging assessments is differing beliefs concerning the mission of school as an institution and, as a result, the type of results that should be measured. Another reason is that no metrics have ever been developed that enable educational performance in Sweden to be measured over time,Footnote 21 either before or after the transition to the nine-year unity school system in the early 1960s. As noted by Erik Lidström, a researcher on educational reform, “precious few measures were obtained when the old system was wound down.”Footnote 22 It was not until the international knowledge assessments started in the run-up to the 2000s that Sweden was able to properly measure its own students’ performance and skills. Comparisons with other countries could then also be made, and as the same tests were repeated several times, it became possible to compare trends over time.

As the following sections show, Sweden initially performed well in these tests. However, a significant decline in results then followed. Here, we highlight key aspects of the TIMSS and PISA results and present results from three other tests that shed further light on the development: the PISA assessment of creative problem solving in 2012; OECD’s PIAAC study, which measures the literacy and numeracy skills of the adult population; and the results of diagnostic tests in mathematics taken by new students starting engineering programs at Chalmers University of Technology in Gothenburg.

TIMSS

As shown in Fig. 3.3, in the first year that Sweden participated in the TIMSS (1995), Swedish 8th graders performed far above both the international average and the EU/OECD average in both mathematics and science. However, between 1995 and 2011, the Swedish average results deteriorated by 56 points, which was the largest decline among all participating countries.Footnote 23 In the 2015 TIMSS cycle, Sweden’s average result improved by 17 points. However, because the EU/OECD average also improved, Swedish 8th graders still performed well below that average. In 2019, the Swedish average improved by two points (not a statistically significant change). However, although the average was roughly unchanged, dispersion increased compared to 2015. Moreover, absenteeism increased by three percentage points from 2015 to 2019 (from 6 to 9 percent), and the exclusion rate increased from 5.4 to 6.3 percent.

Fig. 3.3

(Source: Mullis et al. [2004a, 2008a, 2012a, 2020])

Sweden’s and the EU/OECD’s average score in TIMSS Mathematics from 1995 to 2019 for students in grade 8

Figure 3.4 shows the corresponding development in the science assessment. In 1995, Swedish students distinguished themselves even more in science than in mathematics, with an average result exceeding the EU/OECD average by as much as 37 points. In three subsequent assessments, the results fell continuously, although they remained above the corresponding results in mathematics. The 2015 increase of 13 points in the average score once again made the Swedish average somewhat higher than the EU/OECD average. The average remained unchanged in 2019, while dispersion increased; i.e., the results improved at the top of the distribution and declined among the weakest students.

Fig. 3.4

(Note: Sweden did not participate in the TIMSS 1999 cycle. Source: Mullis et al. [2004b, 2008b, 2012b, 2016b, 2020] and Martin et al. [2016])

Sweden’s and the EU/OECD’s average score in TIMSS Science from 1995 to 2019 for students in grade 8

The TIMSS study also shows the percentage of students who achieve various proficiency levels in the tests. To reach a given proficiency level, students must both score a minimum number of points and solve tasks specifically designed to measure their understanding of mathematics. The TIMSS defines four proficiency levels: low (400–474 points), intermediate (475–549 points), high (550–624 points), and advanced (≥625 points).
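The benchmark cut-offs can be summarized as a small lookup. The helper below is a convenience sketch of ours rather than anything published by the IEA; only the labels and point ranges come from the text.

```python
# Map a TIMSS score to the proficiency bands listed in the text.
# The function itself is our own helper; only the cut-offs are from TIMSS.
def timss_level(score: float) -> str:
    if score >= 625:
        return "advanced"
    if score >= 550:
        return "high"
    if score >= 475:
        return "intermediate"
    if score >= 400:
        return "low"
    return "below low"  # did not reach the lowest benchmark

print(timss_level(624))  # just one point below the advanced cut-off
```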

Additional insight into Sweden’s performance can, therefore, be gained by reviewing how many Swedish students achieve a certain level and how these percentages have changed over time. In 1995, the percentage of students who attained the advanced level was 12 percent. This share fell sharply to three percent in 2003; in 2011, only one in 100 students attained the advanced level, and the percentage improved somewhat in 2015, to one student in 35. The share of students who did not reach even the lowest level (<400 points) more than doubled from four to ten percent between 1995 and 2019.

To obtain a clearer picture of the extent to which Sweden has fallen behind, we now compare the results of Swedish students with the results of the top societies, as measured by the share of students who achieve the advanced proficiency level. In TIMSS Mathematics, these societies are Taiwan, Singapore, South Korea, Hong Kong, and Japan. Figure 3.5 shows how the share of students who achieve the advanced proficiency level has changed in these societies and their performance in relation to Sweden. It is particularly noteworthy that the figures are of an entirely different order of magnitude and that it is not possible to identify any downward trends in any of the other societies except for a slight dip for Taiwan and South Korea in 2015.

Fig. 3.5

(Note: Sweden did not participate in the TIMSS 1999 cycle. Source: Mullis et al. [2012a, 2016a, 2020] and Swedish National Agency for Education [2016b])

Percentage of students in Sweden and the top-five societies at the advanced proficiency level in 8th grade in TIMSS Mathematics, 1995–2019

Sweden and the U.S. are relatively similar in terms of culture, level of education, and economic development. However, because of the well-known weaknesses in the U.S. education systemFootnote 24 and the existence of privately funded schools, we should expect not only a considerable variation in results among U.S. students but also that the weakest students will perform particularly poorly. In fact, Table 3.1 shows that the weakest U.S. students (defined as the fifth percentile in the distribution) performed significantly better than the weakest Swedish students on the TIMSS Mathematics in 2011. In 2015, the weakest U.S. and Swedish students performed identically. In all other percentiles, U.S. students outperformed Swedish students, and the difference widened as one moved upward in the distribution. In contrast, Swedish students outperformed their U.S. peers across the entire distribution in 1995, and the Swedish advantage was larger in the lower half of the distribution. However, this pattern changed in the 2019 assessment. Although the U.S. average of 517 was significantly higher than the Swedish average, the weakest U.S. students once again did considerably worse than the weakest Swedish students. For 2019, the U.S. results consistently exceeded the Swedish results from roughly the 27th percentile and onward. At the 95th percentile, the gap increased to as much as 45 points in favor of the U.S.

Table 3.1 Comparison between the U.S. and Sweden on the TIMSS Mathematics in 1995, 2011, 2015, and 2019, disaggregated by percentile points

TIMSS Advanced

The International Association for the Evaluation of Educational Achievement also conducts comparative studies, called the TIMSS Advanced, of educational achievement at the secondary school level (Mullis et al., 2016b). In this study, final-year secondary school students are tested in mathematics and physics. Sweden has participated in three assessment rounds––in 1995, 2008, and 2015––and it is possible to make fair comparisons between them.

The tested groups consist of students who have completed secondary school courses with a particular focus on mathematics and physics. Indeed, participants in the TIMSS Advanced belong to a highly select group of students who attend the most demanding secondary school programs. Sixteen countries participated in the two specialist studies in 1995. However, some of the participating countries did not meet the criteria deemed necessary to allow their national results to be reliably compared to other countries’ results. Ten countries met the comparability criteria in mathematics, and 11 countries met them in physics.

Swedish students performed well in the 1995 specialist mathematics and physics studies. France and Switzerland were the only two countries that performed significantly better than Sweden in specialist skills in mathematics. Sweden’s average score was 512 (compared to the international average of 500). In physics, Sweden performed at the top, along with Norway.Footnote 25 Sweden’s average score was 573 (compared to the international average of 500).

The TIMSS Advanced was conducted again in 2008.Footnote 26 Only ten countries participated, compared with 16 in the 1995 cycle. Sweden, together with Russia, Slovenia, and Italy (in mathematics) and Russia, Slovenia, and Norway (in physics), was one of the few countries that participated in both cycles. However, according to a report by the Swedish National Agency for Education, the percentage of final-year students participating in the study decreased sharply in Sweden between 1995 and 2008. In other words, there were far fewer Swedish students studying advanced mathematics and physics courses at secondary school.

The difference in Sweden’s scores between the two assessment rounds was dramatic, both in physics and in mathematics. The average score in mathematics fell by 90 points. This was by far the largest decline among all countries (Italy was second, with a decline of 34 points). The Swedish decline in physics, 81 points, was almost as large. After Sweden, Norway suffered the greatest deterioration in physics, with a decline of 47 points. The Swedish score in mathematics recovered somewhat in 2015 (although the score was second from the bottom), while it continued to fall in physics. Figure 3.6 shows how the performance of Swedish students changed between 1995 and 2015. The dark column shows average mathematics scores, while the light column shows average physics scores.

Fig. 3.6

(Note: TIMSS Advanced was not conducted in the TIMSS 2019 cycle. Source: Swedish National Agency for Education [2009, 2016a] and Mullis et al. [2016b])

Sweden’s average scores in mathematics and physics in TIMSS Advanced in 1995, 2008, and 2015

Hidden behind this decline in advanced knowledge is a wide dispersion in terms of the share of students at different proficiency levels. The levels are defined in exactly the same way as in the TIMSS studies for 8th grade students, with the only difference being that the lowest level is excluded. The proficiency levels are, therefore, defined as follows: below average (≤ 474 points), intermediate (475–549 points), high (550–624 points), and advanced (≥ 625 points). Table 3.2 shows how these percentages changed in mathematics and physics between 1995 and 2015.

Table 3.2 Percentage of Swedish students achieving various proficiency levels in TIMSS Advanced in mathematics and physics in 1995, 2008, and 2015

The table shows a dramatic drop in the percentage of students who achieved both the highest and the second-highest levels. Only one in 100 students attained the advanced level in mathematics in 2008, and this number increased to one in 50 in 2015. The percentage of students who attained the advanced or high levels in mathematics fell from 30 to eight percent from 1995 to 2008, i.e., a decrease of almost three-quarters. In physics, the share of students who achieved the two highest proficiency levels was as high as 66 percent in 1995 and was more than halved, to 30 percent, in 2008. The percentage who failed to achieve the intermediate level in mathematics doubled between 1995 and 2008, and despite a small uptick in 2015, two-thirds of students in their final year of secondary school, taking courses with a science and technology focus, still did not attain the intermediate level in mathematics. In physics, 54 percent, i.e., more than half of the students, failed to achieve the intermediate level—a sevenfold increase compared to 1995.

PISA

Swedish 9th graders have participated in all PISA assessments since the tests began in 2000. We begin by presenting the period of decline in Sweden’s PISA results and then present the results of the two most recent assessments. We do so for two reasons: (i) our main interest here is the period of decline in knowledge results in the early 2000s, and (ii) there is reason to doubt the comparability of the results of the 2015 and 2018 assessments with those of earlier assessments.

Mirroring the developments observed for the TIMSS and the TIMSS Advanced, Swedish students performed above the international average in the first PISA cycle, but after that, and as shown in Fig. 3.7, Sweden’s results steadily deteriorated in all three PISA areas––reading, mathematics, and science––until a low point was reached in the 2012 survey. The Swedish average score was well below the OECD average, and in each area, only three OECD countries performed worse than Sweden.

Fig. 3.7

(Source: Swedish National Agency for Education [2001, 2004a, 2007, 2010, 2013a] and OECD [2014b])

Sweden’s average scores in PISA in each respective subject area, 2000–2012

As Fig. 3.8 shows, performance fell across the entire distribution compared to 2000, when Swedish students outperformed the OECD average across the entire distribution. The decline in mathematics was most significant for high-performing students, while the decline in science and reading was largest for low-performing students. To gain a sense of the magnitude of the decline among low-performing Swedish students, we note that as late as 2006, this group scored 17 points above the OECD average in reading, while six years later, it scored 35 points below. Sweden’s overall decline in science and reading relative to the OECD during the 2000–2012 period can thus be attributed mainly to the low-performing group, while for mathematics, the fall is disproportionately explained by high-performing students doing worse.

Fig. 3.8

(Source: Swedish National Agency for Education [2013a])

Sweden’s results as percentiles compared with the corresponding percentiles for the OECD average, 2000–2012 (score difference relative to the OECD)

The decline in science was also significant for high-performing students. The score of students in the 95th percentile went from eight points above the OECD average in 2000 to six points below it in 2012.

With respect to reading, students across the entire distribution greatly outperformed the corresponding OECD average in 2000, and more so at the lower end of the distribution. The change in the ensuing years was dramatic. The students in the 5th percentile went from scoring 20 points above the OECD average to scoring 35 points below it in 2012, while the students at the top end declined by only six points relative to the OECD average. Thus, while students at the top end still performed above the OECD average, students below the 90th percentile of the distribution performed below the OECD average, and the weaker the students were, the more they lagged behind the corresponding OECD average.

To gain further insights into students' proficiency, both PISA and the TIMSS study the percentage of students who attain certain scores; e.g., all students who score at least 625 points are considered to have attained the advanced level. The Swedish National Agency for Education compared the percentage of students who scored above certain predefined levels (out of a possible six) in 2003 and 2012. This comparison is presented in Table 3.3. It shows that the share of students performing at the two highest levels was halved, from 16 to 8 percent. The percentages attaining Levels 3 and 4 also decreased. The share of students who did not achieve Level 2 increased by ten percentage points, from 18 percent in 2003 to 28 percent in 2012. A large proportion of those students did not even achieve the lowest level; the share that did not achieve even Level 1 almost doubled, from 6 to 10 percent.

Table 3.3 Percentage of Swedish students achieving different proficiency levels in PISA mathematics in 2003 and 2012

As discussed at the beginning of this chapter, research shows that performance at the top of the distribution is of central importance to economic growth. Table 3.4 compares the scores of the five percent highest-performing students in 2003, when top Swedish students performed best in mathematics and science, with their scores in 2012. The absolute decline was greatest in mathematics and science, at 35 PISA points each, while it was limited to eight points in reading. Relative to the OECD average, the decline was considerably larger in mathematics (35 points) than in science (13 points), because the OECD average in science also fell, while it remained unchanged in mathematics. In 2012, the top Swedish students still performed slightly better than the OECD average in reading, but since the OECD average was the same in both assessments, the scores of Sweden's highest-performing students in reading also declined relative to the OECD average.

Table 3.4 Averages for Sweden and the OECD for the top five percent of students in PISA 2003 and 2012, and changes in Sweden’s scores relative to the OECD’s average between the two years

How this group fares in Sweden can be further illuminated by comparing the average results of the five percent best-performing students to the results for the corresponding group in countries where the top five percent of students, on average, obtained 700 points or more in each subject.Footnote 27 Table 3.5 shows the results of this comparison. The gap between Sweden and the best countries was greatest in mathematics, where there was a difference of 85 PISA points against South Korea, which was at the very top. Even in European countries such as Belgium and Poland, the best students performed extremely well compared to their Swedish counterparts. In science, Finland and Japan shared first place, with a gap of 49 points between them and Sweden. In reading, there were only a few countries where the top 5 percent scored an average of 700 points or more, and the gap between Sweden and the top-performing countries was clearly smaller than in mathematics and science.

Table 3.5 Countries with an average of at least 700 PISA points among the top five percent in each respective subject in 2012

The standard deviation of the scores among the top 5 percent of Swedish students was almost exactly 30, which means, for example, that only 16 percent of the top 5 percent of Swedish students scored better than the OECD average for the top 5 percent in mathematics in 2012. The average score for the top 5 percent was almost three standard deviations higher for South Korean than for Swedish students. Assuming that the results are normally distributed, this means that only about one in 400 of the top Swedish students reached that level. Similarly, only one in twenty of the top Swedish students scored better in science than the average for the corresponding top students in Finland.
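Under the normality assumption stated above, these tail shares can be reproduced with the standard library. The sketch below takes the 30-point standard deviation from this paragraph, the 85-point mathematics gap to South Korea and the 49-point science gap to Finland from Table 3.5's discussion, and treats the gap to the OECD top-5-percent average as approximately one standard deviation:

```python
from statistics import NormalDist

SD = 30.0  # approximate standard deviation among Sweden's top 5 percent

def share_above(gap_points: float) -> float:
    """Share of Sweden's top-5-percent group expected to score above a
    mean lying gap_points higher, assuming normally distributed scores."""
    return 1 - NormalDist(mu=0.0, sigma=SD).cdf(gap_points)

p_oecd = share_above(30)     # ~1 SD gap to the OECD top-5 mean -> about 16 percent
p_korea = share_above(85)    # gap to South Korea in mathematics -> roughly 1 in 400
p_finland = share_above(49)  # gap to Finland in science -> roughly 1 in 20

print(f"{p_oecd:.0%}, 1 in {1 / p_korea:.0f}, 1 in {1 / p_finland:.0f}")
```

The "one in 400" and "one in twenty" figures in the text thus follow directly from the reported gaps divided by the standard deviation.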

Later Results and Comparability to Previous PISA Assessments

As Table 3.6 shows, Sweden's average results improved in all three areas in PISA 2015, almost exactly reverting to the levels of PISA 2009. At the same time, the OECD average fell slightly, which further improved Sweden's results relative to the OECD. The results improved somewhat again in 2018, which meant that, except in science, the overall results returned to the 2006 level.

Table 3.6 Sweden’s and the OECD’s average PISA score in 2012, 2015, and 2018

From 2012 to 2018, the share of students at the two lowest performance levels decreased from 27 to 18 percent in mathematics, from 23 to 18 percent in reading, and from 22 to 19 percent in science. The share performing at either of the two highest levels increased from roughly eight to 13 percent in reading and from eight to nine percent in mathematics.Footnote 28

An examination of the results of Swedish students across the entire performance distribution, relative to the corresponding OECD percentiles, reveals the same pattern as that shown in Fig. 3.8. Except in mathematics, where the curve is relatively flat with just a minor dip for the weakest students, the curves slope steeply upward. In other words, Swedish students now perform slightly better than the OECD average, but this improvement is driven by the above-average performance of students at the higher end of the distribution, while students at the lower end continue to lag.

Thus, ostensibly, it appears that Sweden has recovered a large part of the decline that lasted until 2012. There are, however, some serious caveats. The comparability of the 2015 and 2018 results with those from earlier years is impaired by the fact that, beginning in 2015, the PISA tests were taken on computers rather than with paper and pencil. This difference may have affected the results, a possibility suggested by research on country results in mathematics in PISA 2012.Footnote 29 The results often differed markedly at the country level when the regular test was compared to a computer-based test that was tried on an experimental basis for the first time. For example, the Polish average fell by 28 points, while the Swedish average improved by 12 points, among students who took the test on a computer compared to those who used paper and pencil. The largest difference was found for China-Shanghai, where the results were 50 points lower on the computer-based test. The reading test was also administered on an experimental basis on computers, and the Swedish results improved by 15 points.

Countries that use computers a great deal in the classroom appear to have been favored by the change. Strikingly, several high-performing societies dropped sharply in PISA 2015. Hong Kong dropped 32 points in science, while Taiwan's and Japan's reading results declined by 26 and 22 points, respectively. Moreover, South Korea dropped 30 points in mathematics.Footnote 30 There were also large declines in several European countries in some areas. For example, the results in science fell by 15–25 points for Ireland, Poland, and Germany. These changes are extreme and correspond to between 50 and 100 percent of what an average student learns in one school year.Footnote 31 Thus, it is highly unlikely that these large drops reflect an actual deterioration in students' knowledge and skills. These declines are of the same order of magnitude as Sweden's decline during the twelve-year period from 2000 to 2012.Footnote 32 The fact that there was no corresponding decline in the TIMSS further supports the presumption that the change to computer-based testing affected the results and that the decline was not genuine.

The Swedish National Agency for Education has acknowledged that the use of computers may have affected Sweden’s results in PISA 2015, writing that “Swedish students performed relatively better on the digitally administered tests both in 2009 and 2012 relative to the usual PISA tests.”Footnote 33 Andreas Schleicher, who organizes the PISA tests, has also admitted that the results of the 2015 cycle may not be fully comparable with those of previous assessments.Footnote 34

These indications of bias have not been contradicted by more recent evidence. On the contrary, in his analysis of the OECD’s field study in which students were randomly allocated to either a computer- or a paper-based (identical) PISA test, John Jerrim, an educational researcher at University College London, showed that the use of computers did affect the validity of cross-country comparisons.Footnote 35 Students in Ireland, Sweden, and Germany performed considerably worse when they took the tests on a computer. The differences were greatest in Germany, followed by Ireland and Sweden. When Jerrim used the OECD’s official method to control for differences between the two test methods, the differences remained to some extent, but countries were affected differently: While the results for students in Ireland and Germany were still 11 and 19 points lower, respectively, the difference was eliminated for Sweden. This finding indicates that Sweden may have benefited in relative terms, at least compared to Ireland and Germany, but to draw a more definitive conclusion, analyses of a larger number of countries are called for.

Another factor that may have played a part in Sweden’s improvement in PISA 2015 is external pressure on the students to perform well on this particular test. The dramatic decline in the previous round of PISA in 2012 produced a political and cultural shock in Sweden. Improved results in 2015 were, therefore, widely perceived as crucial to the future of the country’s educational system, and it is unlikely that students were unaware of the significance of PISA. This view is also supported by the fact that students’ motivation to do well on the PISA test increased considerably after the 2012 test.Footnote 36

In the 2018 cycle of PISA, yet another change was made to the test. The OECD introduced a (crude) form of computer-adaptive testing, meaning that students did not take the same test.Footnote 37 Students who answered the first questions correctly received harder questions as the test moved forward, and weaker students received easier questions. This makes comparability with the old pen-and-paper tests even more difficult. Indeed, in a blog post published on the eve of the release of the 2018 results, John Jerrim urged caution and noted that “we should be keeping a close eye on how this change to the test design has been handled (and if comparison to previous PISA results is really possible).”Footnote 38

Against this background, it appears that the OECD is not emphasizing comparability over time to the extent that it once did. The purpose of the PISA test also seems to be changing. According to a news report, Andreas Schleicher and the OECD are making sure that PISA “reflect[s] changes in what is considered to be important in education,” which means “moving away from traditional knowledge testing.”Footnote 39 In the view of Schleicher, “the modern world doesn’t reward us for what we know, we can get that from Google.”

Most importantly, Sweden excluded one-ninth of its population of 15-year-olds (11.1 percent) from the PISA 2018 test,Footnote 40 which was the highest exclusion rate of all OECD countries and double the rate in PISA 2015 (5.7 percent). The average exclusion rate in the OECD in 2018 was 4 percent. This increase in the exclusion rate is believed to have been due to a large increase in immigration. However, the exclusion rate is likely approximately five percentage points too high; in Germany, which also had a large influx of immigrants, the exclusion rate was a mere 1.9 percent. Moreover, absenteeism among those selected to participate was extremely high, at 13.5 percent.Footnote 41 As a result, almost one-fourth of the population of 15-year-olds were either excluded or absent. There are strong indications that those who were erroneously excluded or absent would, on average, have performed poorly.Footnote 42 Even fairly modest assumptions regarding the likely average results of those students are sufficient to erase the improvement in average results from 2015 to 2018.Footnote 43
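The "almost one-fourth" figure follows from combining the two rates; a minimal sketch, assuming absentees are drawn only from the non-excluded part of the population:

```python
excluded = 0.111  # share of 15-year-olds excluded from PISA 2018
absent = 0.135    # absenteeism among those selected to participate

# Absentees come from the 88.9 percent of the cohort who were not excluded.
missing = excluded + absent * (1 - excluded)
print(f"Excluded or absent: {missing:.1%}")
```

This yields roughly 23 percent of the population of 15-year-olds, i.e., close to one-fourth.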

Thus, in summary, we do not believe that the PISA tests in 2015 and 2018 are fully comparable with the tests conducted between 2000 and 2012. Therefore, they may not reflect an actual substantive change in students’ academic achievement.

The PISA Assessment of Creative Problem Solving

As the objective of PISA is to assess how well national education systems prepare students for higher education and future working life, it is also appropriate to test their creative problem-solving skills. This can be seen, moreover, as a test of students’ ability to practically apply the formal knowledge that is measured in the tests in mathematics, science, and reading. In 2003, for the first time, a separate section was included in the PISA study that was designed to test students’ problem-solving skills. It was omitted again in the assessments conducted in 2006 and 2009 but reintroduced in PISA 2012. The test questions are based on everyday problems that are not directly linked to school subjects but indirectly require good academic knowledge. In 2012, the test was computer based, while in 2003, it was taken using paper and pencil. The results for these two years are, therefore, not strictly comparable. We choose to focus on the 2012 study, as it tested students’ creative skills more extensively.Footnote 44

The problem-solving skills of Swedish students are interesting and relevant in the context of Sweden’s decline in academic knowledge. In the past, poor levels of substantive knowledge among Swedish students have often been excused by arguing that this problem was offset by students’ strong performance in other important aspects, such as creativity. For example, the chairman and the vice president of one of the largest corporate school groups in Sweden claimed that PISA 2012 did not show the whole picture of students’ strengths because it did not measure creativity.Footnote 45 However, this claim is contradicted by the aforementioned assessment of problem-solving skills. Such skills are indeed measured by the OECD, and the results are not encouraging.

As we are unable to compare Swedish students’ problem-solving skills over time, we instead analyze average scores in comparison with the OECD average. The average OECD score in the problem-solving section is 500 points; Sweden’s score was 491, i.e., below average. In fact, Sweden was ranked 20th out of 28 participating countries. Table 3.7 shows the countries that performed better. In addition to the 19 OECD countries, the list includes Singapore, Taiwan, and China. It is noteworthy that countries that excel in academic knowledge are clearly ranked at the top in this test as well.

Table 3.7 Countries with better scores than Sweden in PISA’s computer-based problem-solving test in 2012

The test distinguishes between static and interactive tasks. In static tasks, students are given all the information needed to solve the problem from the outset. In interactive tasks, however, students are required “to uncover useful information by exploring the problem situation” in order to be able to solve the problem.Footnote 46 When both parts are taken into account, the total percentage of correct answers for Swedish students is 43.8 percent, which is lower than the OECD average of 45 percent. When the percentage of correct answers is divided between static and interactive tasks, the distribution of results changes: Swedish students complete static (and less demanding) tasks better than interactive tasks, performing 0.6 percentage points higher than average on static tasks but 2.2 percentage points lower than average on interactive tasks.Footnote 47 A greater level of self-control and critical and inventive thinking is needed to complete the interactive tasks than to complete the static tasks. Sweden’s score in this test, therefore, does not support the claim that Swedish students’ poor results in tests of pure knowledge could be offset by their superior creative or critical thinking skills.

Finally, we examine how well OECD countries’ scores correlate across the three subjects and problem-solving skills. Table 3.8 shows the correlation coefficients for the countries’ rankings in each subject. Rankings in the different subjects show a strong positive correlation.

Table 3.8 The correlation between the different OECD countries’ PISA rankings in the four subjects that were tested in 2012

Other Relevant Comparisons

In this section, we complement our analysis of Swedish elementary and secondary school students’ competencies by reviewing trends in the competencies of the adult population. This enables us to analyze how performance in childhood and youth “translates” into performance in adulthood.

With the help of the international PIAAC comparison, we present evidence strongly suggesting that the knowledge and skills of the adult population have deteriorated over time. We also show that deficiencies in students’ knowledge at the compulsory level during the 2000s have a direct negative impact on the knowledge and skills that students have when they reach adulthood.

Alongside the PIAAC, we present a summary of how university students’ prior knowledge of mathematics has changed. Diagnostic tests conducted with students starting engineering courses at one of the country’s two largest technical universities, Chalmers University of Technology in Gothenburg, show that the level of knowledge in mathematics in this group began to fall steeply in the early 1990s.

PIAAC

The Programme for the International Assessment of Adult Competencies (PIAAC) was developed to provide decision-makers with information about the skills and abilities of the adult population.Footnote 48 PIAAC tests individuals between the ages of 16 and 65 in several areas: literacy, numeracy, and problem solving in technology-rich environments.Footnote 49 The results are presented by age group. Sweden is one of 23 countries participating in the test.

As PIAAC has been carried out only once, in 2012, we cannot compare how the skills of the adult population have changed over time. However, it is possible to see how the students who participated in various PISA assessments progressed as they grew older. The groups of students who participated in the four PISA tests can be identified by the following age groups in PIAAC:

  • PISA 2000: 26–28 years old in PIAAC

  • PISA 2003: 23–25 years old in PIAAC

  • PISA 2006: 20–22 years old in PIAAC

  • PISA 2009: 17–19 years old in PIAAC
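Since PIAAC was conducted in 2012 and PISA tests 15-year-olds, the age-group mapping above follows directly:

```python
PIAAC_YEAR = 2012
PISA_AGE = 15  # age of students when they sit the PISA test

for pisa_year in (2000, 2003, 2006, 2009):
    age = PISA_AGE + (PIAAC_YEAR - pisa_year)  # approximate age at PIAAC
    print(f"PISA {pisa_year}: {age - 1}-{age + 1} years old in PIAAC")
```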

As PIAAC assesses numeracy and literacy, we can compare these two areas with PISA. Figure 3.9 shows comparisons of the average results of the different age groups. The results appear to mirror Sweden's trend in the PISA assessments almost perfectly. Swedish students who participated in PISA 2000 performed well in both reading and numeracy in PIAAC twelve years later. Swedes who took PISA in 2003 achieved the best results overall. For the 20–22 and 17–19 age groups, i.e., those who took the PISA tests in 2006 and 2009, the average scores were lower, indicating that they were not able to compensate for deficiencies in knowledge at the elementary level through increased learning at a later stage. (This finding applies to both literacy and numeracy.) Indeed, as noted in the Long-Term Survey of the Swedish Economy, poor results at age fifteen "remain unchanged at least twelve years after elementary education."Footnote 50

Fig. 3.9 Average scores for Sweden in reading and mathematics and PIAAC averages for the age groups 26–28, 23–25, 20–22, and 17–19

(Source: OECD [2013b])

The students who participated in PISA 2012, i.e., those with the worst PISA scores both absolutely and relative to other countries, did not participate in the PIAAC study. However, given the strong correlation between PISA scores and PIAAC scores, there is ample reason to believe that this cohort of students will perform weakly in a future PIAAC study.

Sweden’s scores for different age groups in PIAAC can also be compared to the corresponding averages for all 23 participating countries.Footnote 51 Proficiency in numeracy is relatively higher in Sweden for older people. In literacy, the positive differential relative to the average for all countries appears between the youngest group and the 35–44 age group. After that, to Sweden’s advantage, the differential remains constant. This finding suggests that the decline in mathematics, relative to other countries, began earlier than the decline in literacy. Those who attended lower and upper secondary school in the 1970s (the 45–54 age group) were already relatively weaker than those who went to elementary and secondary school in the 1960s.

The Diagnostic Test in Mathematics at Chalmers University of Technology

One indicator of the quality of Swedish elementary and secondary education is the prior knowledge of new students starting at university. Comparable tests in mathematics have been conducted every year since 1973 with students starting engineering courses at Chalmers University of Technology. This provides an excellent opportunity to study proficiency levels further back in time than the 1990s.

The test was designed and administered for almost 40 years by Senior Lecturer Rolf Pettersson and is now administered by Associate Professor Jana Madjarova. The test consists of nine standard questions taken from a bank of 30 questions. We have direct access to the results for the whole 1973–2019 period for two parts of the test: logarithms and quadratic equations.Footnote 52 The latter are taught in elementary school and have also featured in PISA tests. Mathematics courses including logarithms are taught in secondary school and are a prerequisite for admittance to Chalmers’ engineering program.

Figure 3.10 shows the percentage of correct answers to questions involving logarithms and quadratic equations. Between 1973 and 1991, the percentage of correct answers for logarithms hovered around 50 percent. In 1994, the scores began to fall steeply, and the decline continued for the rest of the 1990s. From 2001 on, the scores flattened out and stabilized at around 20 percent. A slight increase of approximately three percentage points can be discerned in the last five years, though it remains within the range of normal variation observed after 2000. For quadratic equations (elementary school knowledge), the percentage of correct answers lay at around 75 percent until 1993. Then, it followed the same steep decline as for logarithms. The series hit its lowest point in 2004, at 45 percent. A rise of a few percentage points can be identified in the following years, although with fluctuations. The fact remains that the scores need to improve by approximately 50 percent to return to the levels of the early 1990s.
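The "approximately 50 percent" claim can be checked against the approximate levels cited above; a sketch assuming a recent quadratic-equations level of about 50 percent correct against roughly 75 percent before 1993:

```python
early_1990s_level = 75.0  # percent correct on quadratic equations before 1993
recent_level = 50.0       # assumed recent level after the modest recovery

# Relative improvement needed to get back to the early-1990s level.
required = (early_1990s_level - recent_level) / recent_level
print(f"Required relative improvement: {required:.0%}")
```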

Fig. 3.10 Percentage of correct answers in the sections on logarithms and quadratic equations in Chalmers' diagnostic tests in mathematics, 1973–2019

(Source: Pettersson [2015], and updates received directly from Associate Professor Jana Madjarova, who succeeded Rolf Pettersson as the person responsible for Chalmers' diagnostic tests in mathematics)

An analysis of all subsections shows that, except for quadratic equations and polynomial division, the percentage of students who answered correctly was reduced by at least half from the 1970s and 1980s to the early 2010s.Footnote 53 The average percentage of correct answers in 2015 was 25 percent.

An average score may hide a significant dispersion in results among the new students. Table 3.9 shows the percentage of students about to begin studying in the Biotechnology, Chemical Engineering, or Chemical Engineering with Physics program who achieved a certain number of correct answers in the 2013 test. The table shows that a large percentage of the students had very poor scores. Almost one in five students did not solve a single problem correctly, and almost exactly half of the new students had a maximum of 1.5 correct answers out of a possible nine. A very small percentage had very good results. Only one in nine students (11.3 percent) solved at least half of the problems; only one student out of 186 solved more than seven of the nine problems; and a mere 6.5 percent of the students earned a score that exceeded the average from 1973 to 1993. Not a single student solved all of the problems correctly.

Table 3.9 Percentage of students obtaining a certain number of correct answers in Chalmers' test for new students studying Biotechnology, Chemical Engineering, or Chemical Engineering with Physics in 2013

To be eligible for a place in one of the engineering programs, students must have completed a science- and technology-based program in secondary school and taken the course Mathematics Level D (at least). To be admitted on the basis of their school-leaving grades, as most students were, the following admission points were required for these programs in the fall semester of 2013: 19.95, 18.20, and 18.80, respectively.Footnote 54 With few exceptions, places were granted only to those with the highest grade in all subjects in secondary school. Despite the fact that the students had high formal qualifications, their scores on the diagnostic tests were poor. This suggests that the relationship between a student’s grade and his or her actual level of knowledge is tenuous, an issue that we explore further in the next chapter.

The Macroeconomic Effects of the Knowledge Decline

In the middle of the nineteenth century, Sweden was among the poorest countries in Europe. Approximately 80 percent of the population was engaged in the agricultural sector. An improvement began in the 1850s, and in the early 1870s, industrialization provided a base for sustained economic growth that continued largely uninterrupted for one hundred years. Swedish productivity growth was exceptional in the period 1870–1950 compared to that in other rich countries.Footnote 55 It is fair to say that this would not have been possible without a high-quality education system that resulted in rapid human capital formation.

In the first section of this chapter, we presented evidence of a strong positive effect on economic growth of school quality as measured in internationally comparable tests. Based on the most up-to-date data, a one-standard-deviation improvement in results (i.e., 100 points in TIMSS and PISA) in mathematics and science is associated with an increase in the growth rate of GDP per capita of 1.3 percentage points.

Based on this estimate, how much can we expect the recent decline in the results of Swedish students to affect economic growth? To arrive at an approximate answer to this question, we focus on the declining results in mathematics and science in TIMSS since 1995. As shown in Table 3.10, the results declined sharply from 1995 to 2011, recovered somewhat in 2015, and remained at that level in 2019. The average decline in mathematics and science from 1995 to 2015 was 34.5 points, or 0.345 standard deviation. Thus, based on the estimated effect, this decline can be expected to reduce the average growth in GDP per capita by 0.45 percentage points (0.345 × 1.3), which is a substantial effect.
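The arithmetic behind the 0.45-percentage-point figure is simply the estimated effect scaled by the size of the decline, using the numbers quoted above:

```python
EFFECT_PER_SD = 1.3    # pp of GDP-per-capita growth per standard deviation (100 points)
DECLINE_POINTS = 34.5  # average decline in TIMSS mathematics and science, 1995-2015

growth_loss = (DECLINE_POINTS / 100) * EFFECT_PER_SD
print(f"Expected reduction in annual growth: {growth_loss:.2f} pp")
```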

Table 3.10 Average results in TIMSS Mathematics and Science in Sweden in 1995, 2011, and 2015

We also documented that the estimated growth effect was substantially larger if the share of high-performing students increased, relative to an equal increase in the share attaining a basic minimum knowledge level. We can use those estimates to calculate the expected growth effect if Swedish students attained the same proportions at the basic and advanced levels as Singaporean students. In PISA 2015, 80 percent of Swedish students attained the basic level and 10 percent attained the advanced level. The corresponding shares for Singapore were 91 and 26 percent, respectively. If the share attaining the basic level increased to 91 percent, the growth rate would be expected to increase by 0.2 (1.1 × 0.18) percentage points, while an increase from 10 to 26 percent in the share attaining the advanced level would be associated with an increase in the growth rate of GDP per capita of a whopping 1.39 (1.6 × 0.87) percentage points. Even if the estimate should be taken with a grain of salt, it clearly shows that there is much greater growth potential in increasing the share of top-performing students.
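The two products quoted in this paragraph can be read as (share increase, in tens of percentage points) × (growth effect per 10-percentage-point increase in that share). Under that reading, which is our interpretation of the coefficients rather than a statement from the underlying study, the Singapore comparison works out as:

```python
EFFECT_BASIC = 0.18     # pp growth per 10 pp more students at the basic level
EFFECT_ADVANCED = 0.87  # pp growth per 10 pp more students at the advanced level

basic_gain = (91 - 80) / 10     # Sweden 80% -> Singapore 91%, in tens of pp
advanced_gain = (26 - 10) / 10  # Sweden 10% -> Singapore 26%, in tens of pp

growth_basic = basic_gain * EFFECT_BASIC
growth_advanced = advanced_gain * EFFECT_ADVANCED
print(f"Basic level: +{growth_basic:.1f} pp; advanced level: +{growth_advanced:.2f} pp")
```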

In Sum

It may come as a surprise to international readers that, before the assessments that began in the mid-1990s, no metrics comparable internationally and over time had ever been developed to measure educational performance in Sweden. Sweden performed relatively well in the earliest of these assessments, conducted around the turn of the century: Swedish secondary school students then performed significantly better than the average in both PISA and the TIMSS.

In the 2000s, Swedish elementary and secondary school students’ scores began to fall in every subject and assessment but one, which suggests both a long-term and substantial weakening of the Swedish school system.

This downward trend in attainment is a result of deteriorating scores across the board, from the highest-performing students to those who obtain the lowest scores. The decline in reading literacy and science in PISA is particularly great among those who perform the least well.

At the same time, it is important to emphasize that a significant part of the decline has been driven by a deterioration in the performance of the very best students. In PISA Mathematics for 15-year-olds, for example, the relative decline in performance is greatest for the highest percentile group. It is also evident that a smaller percentage of Swedish students are achieving higher proficiency levels and that the scores of the top 5 percent of students are declining. The decline among the top five percent was particularly large in mathematics; in 2012, the Swedish scores were a full standard deviation below the average of the top five percent in the OECD, and only a tiny proportion of Swedish students achieved the average obtained by the top five percent of the best-performing countries.

The TIMSS measures trends in mathematics and science achievement in the fourth and eighth grades. In TIMSS 2011 and 2015, even U.S. students, both at the top and at the bottom, performed better than Swedish students in mathematics, in contrast to eighth grade scores in 1995, when Swedish students performed better than U.S. students across the whole distribution.

Poor scores in the cognitive tests in PISA and the TIMSS are not offset by good results in the computer-based PISA assessment of creative problem-solving skills in 2012. Sweden lies below the OECD average and is in 20th place out of the 28 participating countries. Singapore, Taiwan, and China also perform a great deal better than Sweden. These three countries are at the top, together with Japan and South Korea.

The indication that there has been a large decline in knowledge among the top students is reinforced by the TIMSS Advanced, a study that measures the level of knowledge in mathematics and physics of final-year upper secondary students specializing in technology and science. Sweden was the top-performing country in the first comparison in 1995. In the 2008 study, Swedish scores in mathematics and physics fell sharply, both absolutely and relative to other countries. Only one student in one hundred achieved an advanced level in mathematics, and 71 percent did not attain the intermediate level.

The level of knowledge that students have at age 15 significantly affects the level they attain when the same group is tested again as adults. This effect is demonstrated in the differences in scores between various age groups in the OECD’s PIAAC survey of adult skills and in a comparison of PISA and PIAAC scores for the age groups that took both tests.

Diagnostic tests in mathematics taken by new students at Chalmers University of Technology provide an opportunity to gain an understanding of how performance in mathematics has evolved since the early 1970s. The results of these tests show large differences in students’ levels of prior knowledge depending on when they attended elementary and secondary school.

A large proportion of new students at Chalmers University of Technology obtain extraordinarily poor scores in the diagnostic test in mathematics despite high school-leaving grades. First, this suggests that final-year grades are not a good measure of the level of knowledge students have attained. Second, it shows that it is possible to spend 12 years in Swedish schools, the last three of which are spent specializing in mathematics and the natural sciences in secondary school, obtain high grades, and still have limited mathematics proficiency.

Although there is a considerable lag, the decline in knowledge among Swedish students is likely to have strong effects on future economic growth. A rough calculation based on updated cross-country estimates suggests that the Swedish growth rate per capita may fall by 0.4–0.5 percentage points. Furthermore, the positive growth effect is expected to be substantially larger if the share of students attaining the advanced level is increased compared to an equally large increase (in terms of percentage points) in the share who attain the basic minimum level.