3.1 Overview of the Approach Taken in This Study

Chapter 2 demonstrated that we are not looking at a completely unknown phenomenon—much knowledge exists about the challenges of very large public projects. We do not need to go out in the field and document phenomena that have never been seen before, proving that they have systemic causes and are not just idiosyncratic anecdotes. The existing work suggests that very large projects are complex social systems whose success drivers and challenges are roughly known but which are very difficult to manage because the specific instances of these drivers interact and change over time. Moreover, not all the drivers are always relevant, so it is important to understand which are critical in specific situations. In other words, we are trying to identify the most important issues that go wrong in the specific Nigerian public sector context and how one might correct these issues.

A good method to test existing theoretical (causal) knowledge would be the careful statistical comparison of project characteristics from archival databases. If we compare thousands of projects with respect to success and the absence or presence of challenges and success drivers, we can use statistical methods to finely distinguish which success drivers make a difference and which do not. However, we have already pointed out that large-scale project data is simply not available in Nigeria, from either government sources or accessible journalistic sources.

Therefore, we need to create our own database of projects. One good way of doing this is a survey—asking people who are involved in large projects to answer questions about the known success drivers (Creswell, 2009). Comparing the responses across projects enables us to test whether the identified success drivers actually make a difference. Indeed, this is one method that we have used: we asked 3 different respondents from each of 20 completed and 20 abandoned projects to respond to a questionnaire (and we obtained answers from all 3 respondents for 38 of the 40 targeted projects). We describe the way in which we carried this out in the next section of this chapter.

Questionnaires have limitations—even if each respondent fills out the questions with someone sitting across the table helping them (thereby helping to ensure sufficient effort and a common interpretation of the questions), predefined questions only capture certain types of information, possibly missing additional issues that do not fit the assumed structure of the problem. Therefore, we added a second method by writing detailed case studies, “telling the causal stories” of what actually happened for 11 of the 38 surveyed projects. Ten of the cases form five pairs, each comprising a completed and an abandoned project in the same sector. The eleventh case is the only steel plant in the sample, Ajaokuta, which has cost the country a phenomenal amount of money ($5B and counting) without ever having produced a single ton of steel, and on which a previous case study already exists, which we shall revisit. We describe the way that we conducted the case studies, using a combination of interviews complemented by independent desk research from public sources, in the last section of this chapter.

3.2 Construction and Execution of the Survey

Questionnaires represent a useful method to test existing knowledge (or theories). They offer a number of advantages. Below, we discuss these advantages, as well as the disadvantages of questionnaires and how our design limits those disadvantages (Popper, 1959; Rattray & Jones, 2007; Taylor & Bogdan, 1998; Grant & Wall, 2009). The strengths of the questionnaire method are as follows:

  • The quantitative data generated can be used to test existing knowledge and theories and their hypotheses (this is called the “positivist view”, which holds that data can be “objectively” described and quantified).

  • Questionnaires are practical; they can collect large amounts of information from a large number of people in a short period of time and in a relatively cost-effective way.

  • Once the questionnaire has been designed, data collection can be carried out by a group of people without compromising validity and reliability, provided the instrument is not “subjective” but well-grounded in existing knowledge or theory.

  • The results of the questionnaires can be quickly and easily quantified (“coded”) by the researchers with the help of software packages.

  • The resulting quantified data can be analysed more “scientifically” and objectively than qualitative research, and the results can be compared and contrasted with those from other research (here, the qualitative case studies).

  • Questionnaires can assure anonymity and thus allow respondents to be open. This was particularly important in this context, where people felt exposed by the size and visibility of the projects and were willing to speak only if it was guaranteed that their identities would be protected.

The disadvantages of questionnaires are as follows (we outline how our design attempts to limit the disadvantages):

  • Phenomenologists assert that questionnaires (and quantitative research more generally) are artificial creations by the researcher, asking for limited information without explanation (as opposed to qualitative research, which asks for the “full richness” of participants’ experiences—this is the opposite of the positivist view). Thus, the argument goes, questionnaires lack validity. Our response is that asking for the “full richness” of experience naturally carries its own biases (where are the interviewees being “led”?), and if existing explanatory theory is available, the “full richness” is wasteful because it will contain so many irrelevant details that the relevant core issues may be lost in the noise. If the questionnaire is carefully designed based on existing professional knowledge (as described below), it is not artificial, and it has validity.

  • There is no way to tell how truthful a respondent is being or how much thought a respondent has put in. We addressed these dangers by (a) asking three respondents from each project to fill out the questionnaire, that is, three people representing different parties in the project, which goes at least part of the way towards preventing partial views and partisan information distortion and moving towards objectivity; and (b) having an associate sit down with each respondent, lead them through the questionnaire, answer questions about interpretation and make sure that nothing was glossed over.

  • The respondent may be forgetful or not thinking within the full context of the situation. This is true, but this holds for all personal (non-archival) forms of data collection, and it is again at least partially addressed by the multi-respondent strategy.

  • When developing the questionnaire, researchers make their own decisions and assumptions about what is, and is not, important. Therefore, they may miss something that is important; also, some forms of information may not fit the theoretical lens of the questionnaire (such as emotions or tribal customs) and may thus be overlooked by the pre-specified questions. This is again true, and it is the reason why we chose a mixed method combining the questionnaire with detailed case studies.

Here, we describe how the questionnaire was designed and executed. We started with the extended project management framework that concludes Chap. 2; this framework summarizes the success drivers that 40 years of previous work have identified as professional knowledge about very large projects. We went through the following steps:

  1. We decided to forgo quasi-“archival” numerical measures, for instance, “the number of stakeholder complaints successfully negotiated”. Such measures, when not routinely available as standard content from IT systems, take inordinate amounts of effort to obtain or estimate (if they can be obtained at all). In order to keep the effort for the respondents within acceptable limits, we decided to use “Likert scale” questions of the type “To what extent do you agree with the following statement (1 = not at all, 4 = neutral, 7 = strongly)?” Likert scale answers are quantifiable and can be (and routinely are) used as quantitative answers, and they can be answered by respondents on the spot, using their knowledge of the context. They are less precise than IT-based archival numbers, and they may invite respondents to give biased answers. However, we addressed this worry by asking three respondents from each project.

  2. We translated each of the 48 constructs in the project management framework into possible “measures” that one would be able to request in a questionnaire (Hinkin, 1998; Ghiselli et al., 1981); for example, the “clear vision” construct was expressed with measures such as the extent to which “the goals of the project were clearly understood, the goals were clearly measurable, the prioritization among the top three goals was clear” (this shows how several constructs required multiple measures). In doing so, the authors did not simply invent measures but looked in previous literature across several disciplines (such as IT and engineering) to see how such constructs had been translated into measures before (Benaroch & Chernobai, 2017; Chua et al., 2012; Constantinides & Barrett, 2015; Dawson et al., 2016; Gopal & Gosain, 2010; Huber et al., 2017; Langer et al., 2014; Mani et al., 2014; Moeini & Rivard, 2019; Oliveira & Lumineau, 2017; Sabherwal et al., 2019; Tallon et al., 2013; Tian et al., 2015; Tiwana & Kim, 2015; Tiwana & Konsynski, 2010; Wu et al., 2015; Young Bong et al., 2017). As a result, the measures that we identified were not arbitrary inventions but had been tested and validated previously. This step resulted in 90 validated measures (including outcome measures).

  3. It is still not feasible for senior participants to respond to 90 measures (and thus 90 questions) in a questionnaire within an acceptable time frame. Therefore, we condensed the questions by identifying measures with significant overlap and reduced them to 41, corresponding to 7 pages, which was judged acceptable through a prototype test with volunteer respondents. In addition, the questionnaire included some information about the role of the respondent in the respective project and about the size and outcomes of the project. The complete questionnaire is shown in the Appendix.

  4. Each questionnaire was sent to three respondents from each project: a project owner (a senior civil servant from the agency that owned the project and who was responsible for its goals), a project supervisor (a mid-level civil servant who was part of the organization that supervised and worked with the contractors that executed the project) and a project manager (an employee of the main contractor). Thus, three different perspectives on the project were represented: the strategic perspective of the owner, the execution perspective from the government side and the execution perspective from the contractor side.

  5. Each respondent was approached by means of a personal letter from the lead author, in many cases followed up by a phone call. All respondents were guaranteed anonymity. For 38 of the targeted 40 projects, all 3 respondents agreed to participate. Each respondent was then visited by a research assistant, who sat down with them, explained the questionnaire, was immediately available to clarify questions and interpretations, and ensured that the questionnaire was completed in full.

  6. The completed questionnaires were coded in Cambridge by a separate research assistant and then analysed by the authors.

The result of this process was a data set of 114 questionnaires (3 from each of the 38 projects), with project outcome information and 41 measures of success drivers that had been validated by theory and by previously used measures in wider project management research. This data set formed the basis of the analyses reported in Chap. 5.
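To make the structure of this data set concrete, the sketch below shows how the coded questionnaires might be organised and collapsed into project-level scores. It is a minimal illustration only, not the analysis pipeline of Chap. 5; the file name, the column names and the simple averaging of the three respondents per project are assumptions made purely for the example.

```python
import pandas as pd

# Minimal sketch (hypothetical file and column names): each row is one of the
# 114 completed questionnaires, i.e. 3 respondents (owner, supervisor,
# contractor project manager) for each of the 38 projects, with 41 Likert
# items (1-7) and a project outcome label.
responses = pd.read_csv("survey_responses.csv")       # hypothetical file
likert_items = [f"q{i:02d}" for i in range(1, 42)]    # q01 ... q41

# Sanity checks on the expected structure: 38 projects x 3 respondents = 114.
assert responses["project_id"].nunique() == 38
assert responses.groupby("project_id").size().eq(3).all()

# Average the three respondents' answers for each project. Taking the mean of
# three perspectives dampens the bias or forgetfulness of any single respondent.
project_scores = (
    responses
    .groupby(["project_id", "outcome"])[likert_items]  # outcome: "completed"/"abandoned"
    .mean()
    .reset_index()
)

# A first look: mean driver scores of completed versus abandoned projects.
print(project_scores.groupby("outcome")[likert_items].mean().round(2))
```

One could equally keep the three responses separate and first check inter-respondent agreement before averaging; the point of the sketch is only to show how the 114 questionnaires reduce to 38 project-level observations that can be compared on the 41 success-driver measures.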

3.3 Construction of the Sample of Projects

Constructing a database of large government projects that enables a systematic comparison of successes and failures is difficult. In the absence of systematic data (the reader may remember that the commission that found a 63% abandonment rate of large government projects did not publish a list!), the projects had to be identified and paired for comparison, and the representatives of the abandoned projects had to be convinced to provide responses.

This took significant effort, time and investment of social capital. Business schools all over the world (including in Nigeria) are drowning in case studies of companies that have succeeded. Companies (and government agencies) love to talk about successes, and they use case studies as marketing tools to showcase to students how great they are. But take a look at how many failures are discussed in public, and you will find that there are very few. Organizations (even more than individuals) loathe speaking about their failures because they fear damaging their external image. Add to this the pressure on large government projects in Nigeria from the press and the public, and the reader may understand why no one has yet constructed this kind of data—not because no one cared but because it is difficult to do.

Table 3.1 presents the sample that the authors were able to construct. It contains 19 completed and 19 abandoned projects (of the targeted 40). Because of the abovementioned challenges, this sample is, to some degree, “opportunistic”: Which projects could we find that were completed versus abandoned, and which ones had senior managers who were willing to respond to a questionnaire? The sample is not arbitrary but consists of matched pairs—a pair of projects belongs to the same sector, has a similar budget size and, if possible, was carried out by the same contractor (the latter was possible only in around a third of the cases).

Table 3.1 The sample of projects in this study

The matching reassures us that the outcome differences were not caused by large differences in context, complexity (the sector) or budget size, or by the abandoned projects somehow having worked with less competent contractors. The matching increases our confidence that the variables measured in the questionnaire indeed captured the differences between the paired projects. Collectively, this sample covers key sectors of government investment—roads, airports, power stations, ports, housing, ICT systems, waste management, hospitals, education and social projects. This increases our confidence that our findings do not just describe one specific sector but really do capture systematic elements of how the Nigerian government manages its large investment projects. Each project is presented in more detail in Chap. 4.
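To illustrate why the matching matters analytically, the following sketch compares each completed project with its abandoned counterpart on project-level driver scores; because both projects in a pair share sector and approximate budget size, these within-pair differences are largely free of those confounds. This is an illustrative paired comparison under assumed names (a hypothetical project_level_scores.csv carrying a pair identifier, the outcome and the driver scores), not the econometric specification reported in Chap. 5.

```python
import pandas as pd
from scipy import stats

# Minimal, self-contained sketch under assumed names: one row per project
# (38 rows), with its pair_id, its outcome ("completed" or "abandoned") and
# 41 project-level driver scores q01..q41 (e.g. respondent averages).
scores = pd.read_csv("project_level_scores.csv")      # hypothetical file
likert_items = [f"q{i:02d}" for i in range(1, 42)]

# Reshape so that each row is one matched pair, with the completed and the
# abandoned project's scores side by side; sector and budget size are held
# roughly constant within a pair by construction of the sample.
wide = scores.pivot(index="pair_id", columns="outcome", values=likert_items)
paired_diff = (wide.xs("completed", axis=1, level="outcome")
               - wide.xs("abandoned", axis=1, level="outcome"))

# A simple paired t-test per driver: does the score differ systematically
# between completed and abandoned projects across the 19 pairs?
for item in likert_items:
    diffs = paired_diff[item].dropna()
    result = stats.ttest_1samp(diffs, 0.0)
    if result.pvalue < 0.05:
        print(f"{item}: mean difference = {diffs.mean():+.2f}, p = {result.pvalue:.3f}")
```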

3.4 Construction of the Case Studies

Earlier, we discussed the limitations of surveys: although the quantitative analysis can demonstrate that there are systematic differences between the management practices of completed and abandoned projects, the variables are stylized. Therefore, the econometric analysis in Chap. 5 remains conceptual; it does not bring to life what the project problems looked like; it does not illustrate the causality of how the success drivers “drive” success; and because the questions represent the theoretical lens of our framework from previous professional knowledge, they may overlook “other” things that happened, which may offer “other” explanations. We have therefore chosen 11 of the projects in the sample for more detailed case studies that “bring the story to life”.

Ten of the 11 projects again form matched pairs of 1 completed and 1 abandoned project: 2 education projects (Abuja National Library and Obasanjo Presidential Library), 2 bridges (Third Mainland Bridge and Second Niger Bridge), 2 roads (Lagos-Ibadan Express Road and Lagos-Badagry Express Road) and 4 power plants (Egbin versus Calabar Power Stations, and Zungeru Hydropower Plant versus Delta State Power Plant); the eleventh is the 1 steel project in the sample, the Ajaokuta Steel Project, chosen for its size and prominence.

To write these case studies, the authors visited the sites and interviewed people on location, as well as in the ministries where decisions had been made. The interviews lasted 1–2 hours each (some covered more than one case), and site visits lasted at least half a day each. The interviews are listed in Table 3.2. As is recommended by case study method experts (Yin, 2014), interview and site visit notes were written up on the same day that the interviews took place. Later, the accounts from the interviews were complemented by desk research that cross-checked the accounts and filled in the gaps that the interviewees had not covered.

Table 3.2 List of respondents interviewed across organizations

It turned out that the case studies did not reveal additional phenomena that had not been included, in principle, in the identified professional knowledge on very large projects. However, the case studies did show how the success drivers worked and how they interacted with one another (e.g. if the project does not have stable funding, then contractors are tempted to play games in order to secure payment), as our narratives demonstrate in Chaps. 6, 7, 8, 9, 10, and 11. Moreover, the case studies reinforced the observation from the econometric analysis (Chap. 5) that there were consistent themes, across projects and sectors, regarding how the Nigerian government managed its large infrastructure projects in ways that turned out to be self-damaging.