Abstract
We examine the main challenges that currently arise in the assessment of European Union (EU) funds devoted to three thematic objectives (TOs): Research and Innovation (R&I); Low-carbon economy (LCE); and Information and Communication Technologies (ICT). In this regard, a literature review on the European Regional Development Fund (ERDF) initiatives is performed, with a special focus on the Portuguese (PT) case, also addressing their assessment and reporting practices. Data systematization is coupled with the European Commission (EC)’s main guidelines and with the guidance recommendations issued by the Managing Authorities (MA) for the 2014–2020 period. A bibliometric analysis is conducted to further understand the current research interest in the evaluation of EU funds and the type of assessment methods and reporting practices employed. Most of the approaches rely on cost–benefit analysis and pay less attention to data availability, variable selection, and monitoring/assessment options. The selection and application of the framework indicators, whether related to financial execution or to achievements, are regarded as critical factors in the monitoring, reporting, and assessment processes. Our findings emphasize the need for harmonization and simplification of the reporting techniques, also highlighting the sparse data availability and some reporting conflicts.
1 Introduction
Despite the prolific number of studies investigating the effects of EU funds, empirical evidence shows mixed and sometimes contradictory results (Gramillano et al., 2018a, b). While some authors found a positive effect of EU funds on the economic growth of regions (Puigcerver-Penalver et al., 2007), others highlighted the existence of a maximum desirable level of funds transfer, beyond which the funds may increase regional gaps within countries (Kyriacou & Roca-Sagales, 2012). Also, some contributions acknowledged no statistically significant impact of Cohesion Funds (CF) on convergence, underlining how disparities persist in the EU (Dall’Erba & Le Gallo, 2008).
As Scotti et al. (2022) emphasized, although the EU Cohesion Policy has progressively diversified the sectors targeted for funding, with possible heterogeneous impacts on local growth, the literature is still largely oriented to the analysis of aggregate impacts. Thus, these authors proposed a granular investigation of the sectoral impacts of Structural Funds and CF on European NUTS 2 regions for the period 2007–2014 and concluded that expenditures in the energy, Research and Development (R&D), and transportation sectors stimulated a higher Gross Domestic Product (GDP) per capita, by reducing production costs and increasing accessibility and innovation in the beneficiary regions.
Moreover, the assessment of funded programs is still a widely discussed topic. In this regard, the EC identified two main assessment problems with the system of indicators (Nigohosyan & Vutsova, 2018): difficulties in establishing the cause-and-effect relationships between actions, results, and impacts, due to the influence of external factors; and difficulties in measuring the impacts, because they are usually the cumulative effect of many actions, affect diverse populations, and take time to show their actual effects. For the 2014–2020 period, the EC tried to overcome these difficulties in the ERDF context by discarding the concept of impact indicators and introducing a new intervention logic. Nevertheless, Nigohosyan & Vutsova (2018) claimed that despite the changes, some of the identified problems with the indicators were not solved. Besides, these authors suggested that ongoing evaluations should include an in-depth analysis of the relevance of the selected output indicators for each activity. All in all, Nigohosyan & Vutsova (2018) consider that the introduced changes are well-intended but still need to be perfected. The system of common output indicators should be re-examined, while the requirements for the result indicators should not be contradictory. Current practices show that even solid and logical principles, such as the fact that EU policies should reflect economic, social, political, and institutional differences to maximize both the local and the aggregate potential for economic growth (Barca & McCann, 2012), might not be sufficient when the different countries select indicators.
Furthermore, in the current programming period (2021–2027), the lessons learned from other programming experiences within and beyond the European Structural and Investment Fund (ESIF) framework can make a difference, since ESIF operates in very different local contexts and handles very heterogeneous economic and social regional environments. In this framework, a previous study by Bachtler & Wren (2006) reported that even if Cohesion Policy had a unified regulatory framework, it should address different national and regional circumstances embedded in a variety of institutional arrangements, bearing in mind that its operations comprise a multiplicity of measures and a diversity of national, regional, and local rules and systems.
Additionally, programs consist of a range of interventions (physical and economic infrastructures, business and technological developments, human resources, innovation, and environmental improvement) based on a mix of financial instruments for many types of beneficiaries. This variety of targets and contextual conditions is per se a challenge for any evaluation exercise (Henriques & Viseu, 2022a, b).
In this vein, Nigohosyan & Vutsova (2018) emphasized that the EC should try, in the next programming period (i.e., 2021–2027), to unify as much as possible the understanding of indicators under the different EU funds. An easy step would be the inclusion of all indicators into a single guidance document. Their study assessed the possibility of expanding the current list of common output indicators and the feasibility of developing a list of common direct result indicators for post-2020 ERDF and CF interventions.
Regarding the allocation of the EU budget, Gramillano et al. (2018a, b) suggested that spatial and sectoral effects can contribute to the design of a more effective distribution of EU funds. Also, due to the wide variety of projects financed by CF, the EU Regional Cohesion Policy has been defined as a “do it all policy”. Hence, policymakers are currently focused on the economic impact of investments across different sectors, since heterogeneous levels of local development may be achieved depending on the economic activity to which the EU transfers are allocated (Cortuk & Guler, 2015). Indeed, investments in certain sectors might have immediate positive effects, while other types of investments might generate a significant impact only in the long term (Scandizzo et al., 2020).
Moreover, the magnitude of economic multipliers might be different across sectors and dependent on the level of diversification and complementarity of expenditures (Auerbach & Gorodnichenko, 2012; Duranton & Venables, 2021).
With the foregoing in mind, we investigate the current major issues inherent to the assessment of ESIF committed to three TOs: R&I, LCE, and ICT. A literature review on ERDF implementations is conducted, with a specific focus on the PT situation, as well as its evaluation and reporting practices.
The research questions that we want to address are the following:
RQ1: “What are the main areas of concern of the studies devoted to the assessment of ERDF?”
RQ2. “What are the main challenges inherent to the selection of the variables/indicators in the assessment of ERDF?”
RQ3. “What are the best practices found in the assessment of ERDF?”
RQ4. “What are the main gaps found regarding the methodologies used to assess ERDF?”
This paper is structured as follows. In Sect. 2 a systematized literature review is conducted. Section 3 goes through the Portuguese case. Section 4 describes the main results found for the Portuguese assessment and reporting systems. Finally, some conclusions are presented, and future work developments are indicated.
2 Literature Review
A systematic literature review was conducted on the Web of Knowledge database, using a set of keywords combined with Boolean operators; it covers a wide range of data sources (i.e., scientific journals, books, proceedings papers, etc.). Data collected from EU reporting sources were also included in the analysis. The bibliographic results were saved as data files comprising the full publication content returned by the search. These text files were then processed with VOSviewer (i.e., https://www.vosviewer.com/) to map the bibliographic content—see Figs. 1 and 2.
Using the keyword “EU funds”, the search returned 28,809 references, which were filtered by type of publication and then coupled with our focal interest, the “ERDF”. The latter generated 1,200 references and, when combined conjunctively (i.e., using the “AND” operator) with the “EU funds” keyword, returned 532 references. These were extracted from the database to be mapped by VOSviewer. Besides, another collection of references was obtained by adding “Portugal” and “Funds” to the “ERDF” keyword.
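The conjunctive narrowing described above can be sketched in code. The following is a minimal illustration (not the actual pipeline, which ran on the Web of Knowledge search interface): keyword filters are applied to an exported reference table and combined with a Boolean AND. The field names and records are hypothetical.

```python
# Hedged sketch: conjunctive ("AND") combination of keyword filters over an
# exported bibliographic table. Column names and records are illustrative.
import pandas as pd

records = pd.DataFrame({
    "title": ["A", "B", "C", "D"],
    "topics": [
        "EU funds; cohesion policy",
        "ERDF; regional development",
        "EU funds; ERDF; structural funds",
        "innovation; smart specialization",
    ],
})

def matches(keyword: str, field: pd.Series) -> pd.Series:
    """Case-insensitive containment test, one Boolean per record."""
    return field.str.contains(keyword, case=False, regex=False)

# "EU funds" AND "ERDF": only references matching both keywords survive.
subset = records[matches("EU funds", records["topics"])
                 & matches("ERDF", records["topics"])]
print(len(subset))
```

The surviving subset is what would then be exported for mapping in a tool such as VOSviewer.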
As can be observed in Fig. 1a, the research published so far on EU funds mostly focuses on cohesion policy and, at a finer level of detail, “structural funds” and “regional development” appear as commonly referenced keywords, as mapped in Fig. 2a.
When the timeline is introduced into the data collection (Fig. 1b), the global cluster results show that most references date up to 2016 (green). Also, more recent publications show a greater concern with the following themes: “sustainability”, “smart specialization”, “higher education”, and “regional disparities” (yellow). These topics are quite prominent in today’s EU political agenda and are in line with the latest worldwide concerns.
Finally, regarding Portugal (Fig. 2b), most of the studies address “regional development”, but “competitiveness”, “higher education”, “innovation” or “rural development” are also part of the connecting arcs.
Subsequently, the literature review was systematized by grouping the references into four categories: Data and variables selection; Indicators and monitoring; Best Practices; and Methods.
2.1 Data and Variables Selection
The lack of data and the heterogeneous definitions of relevant indicators further complicate the analysis of the Operational Programs (OPs) funded by ERDF (Henriques et al., 2022a, b). Both policy and economic performance/outcome indicators can be measured/proxied by different variables, and the choice of these proxies may have important implications for the results of the various analyses (Pastor et al., 2010). In most cases, the policy variables under study are ‘payments’ or ‘commitments’, while ‘GDP Growth Rate Per Capita’ or ‘Employment Rate’ are used as proxies for economic performance. Therefore, the choice of the policy variables can be a determinant factor for the design and results of empirical analyses. For example, the use of actual policy ‘expenditure’ data instead of ‘commitments’ means having to consider the duration of the entire OP (Crescenzi & Giua, 2016).
Additionally, the data available to perform the assessments varies considerably with the type of TO under scrutiny. In the case of R&I (TO1), policymakers face additional challenges in the assessment of R&I policies, notably because of the scarcity of suitable data (Henriques et al., 2022a; Ortiz & Fernandez, 2022). The sparsity of regional-level research and firm-level data on ICT (TO2) has also been highlighted by Ruiz-Rodríguez et al. (2018), Reggi and Gil-Garcia (2021), and Henriques and Viseu (2022a, b). For example, despite the ample set of data available in the latest European Innovation Scoreboard (Hollanders, 2021), with some indicators on ICT (e.g., Digital skills and business sector ICT specialists), Henriques & Viseu (2022a, b) only managed to consider three indicators for assessing the OPs related to boosting ICT adoption in SMEs (i.e., operations supported, eligible costs decided, and eligible spending). An additional difficulty refers to the identification of EU ICT-targeted investments (Sörvik & Kleibrink, 2016). Aside from being an economic sector in its own right, ICT is also an essential component of many other activity-related sectors (for example, e-Health) and a tool that supports other activities. Since ESIF actions might have multiple aims, it can be hard to pinpoint the ICT-related activities within the designated categories when the OPs are planned. The OPs’ financial metrics are organized into intervention categories, TOs, and priority domains. Moreover, although the EU guidelines advocate that planned ICT initiatives should be classified primarily under TO2, these obtain funds under distinct TOs, and they are also integrated into various smart specialization strategies.
For example, to consider the ICT SMEs support, there are only two dimensions of intervention that can be considered (Henriques & Viseu, 2022a, b; Reggi & Gil-Garcia, 2021; Sörvik & Kleibrink, 2016) under codes 4 and 82, that correspond to €1.7 billion and €304 million of planned investments, respectively (Sörvik & Kleibrink, 2016). These totals are available under multi-TO (€810 million), TO2 (€790 million), and TO3 (€349 million) and to a smaller level under TO1 and TO8 (Sörvik & Kleibrink, 2016). As a direct consequence, national and regional policymakers should use additional specific criteria that account for ICT results; tag expenditure that falls under other TO (rather than TO2) but has an ICT component; enhance the quality and completeness of ICT performance data at the regional and SME levels; and unify different data from diverse data sources (Henriques & Viseu, 2022a, b).
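The tagging exercise described above, isolating ICT-related spending that sits outside TO2, can be illustrated with a short sketch. This is a hypothetical example, not the official categorisation data: the figures, column names, and records are invented, and only the two SME-related intervention codes (4 and 82) come from the text.

```python
# Hedged sketch: aggregating planned investment by intervention code and TO,
# to isolate ICT-tagged spending that falls outside TO2. All figures and
# field names are illustrative, not official categorisation data.
import pandas as pd

plans = pd.DataFrame({
    "intervention_code": [4, 82, 4, 82, 67],
    "thematic_objective": ["TO2", "TO3", "multi-TO", "TO2", "TO3"],
    "planned_eur_m": [500, 200, 400, 150, 300],
})

ICT_CODES = {4, 82}  # the two SME-related ICT dimensions of intervention
ict = plans[plans["intervention_code"].isin(ICT_CODES)]

# Total ICT-tagged investment per TO, and the share sitting outside TO2.
by_to = ict.groupby("thematic_objective")["planned_eur_m"].sum()
outside_to2 = by_to.drop("TO2", errors="ignore").sum()
print(by_to.to_dict(), outside_to2)
```

A cross-tabulation like this is exactly what becomes impossible when, as noted above, ICT expenditure under other TOs is not tagged with an ICT component.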
Furthermore, as noted by Henriques et al. (2022a, b), despite the performance framework providing a set of implementation indicators, the data provided are frequently incomplete, and as a result, assessments end up considering a limited number of indicators and OPs. Finally, it is not possible to reach a complete match between the data obtained for the OPs’ achievements and their financial implementation. This is especially true for the investment priority dedicated to SMEs (investment priority 4b), under TO4 (i.e., LCE), which is meant to increase energy efficiency and renewable energies and includes statistics for the accomplishment metrics but not for their financial implementation.
2.2 Indicators and Monitoring
Performance appraisal techniques, as well as the selection of indicators, provide critical insights into supported programs. The different types of indicators are based on the following aspects (European Commission, 2013): relationship with the variables (complete, partial, and complex indicators); information processing (elementary, derived, and compound indicators); information comparability (specific and common indicators); and information scope (context and program indicators). The primary categories of indicators (input, output, and result) are provided in all ESIF monitoring systems; however, there are significant differences. Apart from the division of indicators into quantitative and qualitative, the guidance documents for the ERDF and the European Social Fund (ESF) agree that indicators should be linked to the specific objectives and kept as close as possible to the activities. Regarding the result indicators, the major difference is that ERDF should not be limited to the supported entities since it addresses different local contexts and very heterogeneous economic and social regional realities, whereas ESF result indicators “capture the expected effects on participants or entities brought about by an operation” (European Commission, 2015). Therefore, the ERDF result indicators should capture the change in the Member States (MS), regions, areas, or affected populations because of a particular program. This leads to two main problems: difficulties in establishing a program’s contribution to results and problems determining the target values.
Nonetheless, the results of Nigohosyan & Vutsova (2018) show that not all of the common indicators suggested by the EC follow its guidelines and the new intervention logic model could still lead to unclear logical links between activities, outputs, and results of the programs. These findings led to the recommendation for a mid-term review of the common ERDF indicator system.
Nigohosyan & Vutsova (2018) focused on the evolution of the ERDF indicators in the 2014–2020 programming period. The main question addressed was whether the new indicator system solved the problems of the past. These authors argued that despite the changes, the evolution of the ERDF indicators is incomplete, and it will likely be the reason for serious monitoring deficiencies and evaluation challenges.
A new understanding of ‘impact’ and the exclusion of impact indicators was put forward in the 2014–2020 guidance for evaluating the EU Cohesion Policy and the new ERDF-supported OPs (Nigohosyan & Vutsova, 2018).
Nigohosyan & Vutsova (2018) argued that despite the good justification for these changes, the 2014–2020 ERDF intervention logic and indicator system did not solve some of the well-known problems and brought new challenges to the evaluation of the ERDF-supported programs. The main challenges that remain to be solved are differing indicator concepts across the EU funds; inconsistency of the common output indicators; difficulties in establishing a program’s contribution to results; persistent problems in determining the target values for results; and broad result indicators with an indirect link to interventions.
The same idea was reported by the EC (European Commission, 2018), clarifying the core differences between the 2014–2020 period and the ongoing 2021–2027 period. Building on the key concept of simplification (i.e., more comparable data based on the use of fewer indicators), the 11 TOs were consolidated into 5 Policy Objectives, and the 3,573 Specific Objectives gave way to 21 Specific Objectives. Regarding results, the 5,082 program-specific indicators were replaced by 85 common indicators plus program-specific ones; on the output-reporting side, 46 common indicators (6,481 records) and 4,813 program-specific indicators were replaced by 85 common indicators plus program-specific ones.
Also, in the 2014–2020 period, roughly half of the common ERDF output indicators could be viewed as another type of indicator (inputs or results). Indeed, some indicators can be viewed as output indicators in one context, and result indicators in another (e.g., the indicator “Reduction of greenhouse gases”).
Throughout their research, Gramillano et al. (2018a, b) identified and emphasized the need for harmonization and simplification of the indicators used. More comparable data, based on the use of fewer indicators than those defined for the 2014–2020 period, are desirable. Apart from the provided framework of common indicators, a full match between the data gathered for the achievement indicators and the data from financial implementation is not feasible. Moreover, the reported data are often lacking (Henriques et al., 2022a, b). This is particularly true in the PT case, where the level of data detail and the reporting practices frequently overlook common standards, hampering program evaluations and the comparability of national OPs.
2.3 Best Practices
Many authors agree on the importance of “learning from the past”. To make this feasible, additional care must be placed on the chosen variables, the selected monitoring indicators, and the implemented methodologies, since these are relevant to enhancing that learning process.
Furthermore, to hasten the execution of the OPs, best practices from other MS should be examined, and administrative hurdles to obtaining funds should be reduced. MA should look for methods to boost project delivery by promoting the streamlining of payment request protocols and giving more guidance and support (Henriques et al., 2022b).
Frequently, assessment reports indicate that certain firms have revoked their subsidies, most likely owing to bank credit difficulties. In this respect, MA should be prepared to support businesses in securing additional financing options while also simplifying the conditions for attracting other institutional investors (Henriques et al., 2022b).
Gramillano et al. (2018a, b) presented a system of common indicators for the ERDF and CF interventions after 2020. The analysis covers the 11 TOs and is structured in two parts (I, for TOs 1, 3, 4, 5 and 6, and II for TOs 2, 7, 8, 9, 10 and 11). In this study, indicators (i.e., common output and direct result indicators) were evaluated according to their quality, assessed against the key RACER principles (i.e., Relevant, Accepted, Credible, Easy to monitor, and Robust) and other revised criteria (e.g., the CREAM matrix, which sets out five criteria: Clear; Relevant; Economic, i.e., available at a reasonable cost; Adequate, i.e., providing information useful to assess performance; and Monitorable).
Moreover, time-bound indicators are also critical since they provide dates for measurement over time, with monitoring based on annual reporting or at least taking place at the end of the project. Besides, a debated criterion should also be emphasized: the MA consultation collects information on whether common output indicators are used to conduct benchmarking analyses, or will at least be used in the future. The aim is to verify how much of the comparable information from the 2014–2020 common outputs has been exploited, since comparability is a major advantage of common indicators.
Globally, apart from the continuity of the best achievements from 2014–2020, the EC focused on: (i) the match of common outputs and results for interventions; (ii) simplification, harmonization, and data comparability; (iii) broader policy coverage; (iv) flexibility; (v) alignment with ESF; (vi) aggregation from project level; (vii) RACER—Financial Regulation criteria (Gramillano et al., 2018a).
Some clarification on the common ERDF and CF indicators was prompted by the EU (2018, Annex I and II), namely for the indicators to be selected in the programs, the data to be collected from projects via the monitoring systems, and the aggregated data reported by MS to the EC.
So, as suggested by Nigohosyan & Vutsova (2018), the EC should try in the next programming period to unify as much as possible the understanding of indicators under the different EU funds. An easy step would be the inclusion of all indicators into a single guidance document.
2.4 Methods
The literature review identified desk research, monitoring data/data analysis, interviews, focus groups/facilitated workshops, surveys, and case studies as the major techniques applied to analyze ERDF TOs. Notwithstanding the MS’ efforts to improve cohesion policy appraisal, only very few evaluations use more reliable methodologies, such as statistical methods or other mathematical techniques (Henriques et al., 2022a, b). Non-parametric approaches, like Data Envelopment Analysis (DEA), have become a noteworthy methodological alternative to the traditional approaches employed in similar contexts. The key benefit of this mathematical approach is the type of information it can provide to MA on the inefficiency of the OPs when compared to their counterparts.
The benchmarks of inefficient OPs are also determined by DEA, and significant information about the best practices to follow to reach efficiency may be obtained. Nonparametric approaches, such as DEA, may readily manage many assessment criteria. Furthermore, DEA can help identify the key reasons that hamper efficiency, supplying policymakers with relevant knowledge on how to solve them. For example, Gómez-García et al. (2012) evaluated the efficiency of the implementation of ESIF allocated to TO1 in this context. Furthermore, Gouveia et al. (2021) used the Value-Based DEA method to evaluate the implementation of an ESIF aimed at enhancing the competitiveness of SMEs throughout multiple OPs (national and regional). In addition, Henriques et al. (2022b) evaluated 102 OPs from 22 EU nations dedicated to supporting an LCE in SMEs using the output-oriented variant of the slack-based measure (SBM) paired with cluster analysis. Lastly, Henriques et al. (2022a) evaluated 53 OPs from 19 MS committed to boosting R&I in SMEs using the non-oriented form of the network SBM approach in conjunction with cluster analysis.
Furthermore, DEA models are easily adaptable to the evaluation of different TOs if the basic rule proposed by Golany et al. (1989) is followed, namely, that the number of DMUs (in this case, the OPs, EU funds, regions, countries, etc.) under evaluation should be at least twice the combined number of inputs and outputs (the indicators used in the evaluation). Though DEA offers undeniable benefits over other conventional methods (for example, microeconomic analyses that use control groups, and case study analysis), there is currently a dearth of academic interest in its application to ESIF efficiency appraisal.
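To make the method concrete, the following is a minimal, hedged sketch of a standard input-oriented CCR DEA model solved by linear programming, not the specific models of the cited studies. The six OPs, the single input (eligible costs decided), the single output (operations supported), and all figures are invented for illustration; the Golany et al. (1989) discrimination rule is checked first.

```python
# Hedged sketch: input-oriented CCR DEA via linear programming. Toy data
# (6 OPs, 1 input, 1 output); all figures are illustrative, not from the
# cited evaluations.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0], [80.0], [120.0], [60.0], [90.0], [110.0]])  # inputs
Y = np.array([[50.0], [48.0], [54.0], [40.0], [30.0], [44.0]])     # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]
# Golany et al. (1989) rule: DMUs >= 2 * (inputs + outputs).
assert n >= 2 * (m + s), "too few DMUs for the chosen indicators"

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR score of DMU o (1.0 = efficient)."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [round(ccr_efficiency(o), 3) for o in range(n)]
print(scores)
```

The optimal lambdas of an inefficient OP identify its benchmark peers, which is precisely the "best practices to follow" information referred to above.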
This form of analysis is especially important if the programs are still in progress since it allows MA to predict the influence that prospective changes in output/input levels would have on the levels of efficiency attained by the OPs.
Unlike other approaches and methodologies that are mainly applied for the ex-post or ex-ante evaluation of cohesion policies, the DEA approach allows us to assess the efficiency of the OPs’ deployment across the programmatic time horizon (thus allowing midterm/terminal assessments), so that the required initiatives can be implemented early enough to produce the appropriate changes within the remaining timeframe.
Due to the lack of more robust approaches during midterm/terminal assessments, the adoption of nonparametric methodologies can be particularly beneficial and suitable, mostly because the existing metrics for appraising the Cohesion Policy can be employed alongside other methodologies and contextual indicators. This can be done by combining this sort of analysis with Stochastic Frontier Analysis (SFA), for example, thus allowing us to understand whether the inefficient results obtained are mainly related to managerial failures, to the contextual environment, or to statistical noise (Henriques & Viseu, 2022a, b).
3 The Portuguese Case: Main Findings
For the PT OPs, different ERDF funding dimensions were exploited and characterized. Apart from data collection, data curation (i.e., identification of missing and null values, data consistency checks, etc.), data characterization (i.e., evaluation of the main statistics), and some data visualizations were used to analyze the PT public dataset.
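The curation steps just listed can be sketched as follows. This is an illustrative outline on a toy extract, not the actual P2020 dataset: the column names, OP labels, and figures are invented for the example.

```python
# Hedged sketch of the data-curation steps described above: missing/null
# detection, a consistency check, and summary statistics. Toy extract with
# illustrative column names and figures.
import numpy as np
import pandas as pd

ops = pd.DataFrame({
    "op": ["Norte2020", "Centro2020", "Alentejo2020", "Lisboa2020"],
    "eligible_cost_decided": [120.0, 95.0, np.nan, 60.0],
    "eligible_spending": [80.0, 97.0, 30.0, 55.0],
})

# 1. Identify missing/null values per column.
missing = ops.isna().sum()

# 2. Consistency check: declared spending should not exceed decided cost.
inconsistent = ops[ops["eligible_spending"] > ops["eligible_cost_decided"]]

# 3. Main statistics of the numeric variables (data characterization).
stats = ops[["eligible_cost_decided", "eligible_spending"]].describe()

print(missing["eligible_cost_decided"], list(inconsistent["op"]))
```

Records flagged at steps 1 and 2 are exactly those that, in practice, force assessments to drop indicators or OPs from the analysis.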
The ESIF comprises an allocation of €461 billion distributed across six funds—see Fig. 3a. In 2014, a partnership agreement (PA) between Portugal and the European Commission, called Portugal 2020 (P2020), was signed. Under that partnership, an ESIF budget of €25,860 million was assigned to Portugal, which represents about 5% of the ESIF for all MS and about €2,561 per capita. The P2020 programming and implementation were divided into four thematic domains (competitiveness and internationalization; social inclusion and employment; human capital; and sustainability and efficiency in the use of resources) and seven regional operational programs (OPs) (Vaquero et al., 2020). The distribution of the ESIF funds shows that approximately 79% of the total amount assigned to Portugal goes to the three funds included in the Cohesion Policy, that is, ERDF (45.3%), ESF (24.7%), and CF (9%) (European Commission, 2022).
In terms of eligibility for the ESIF (ERDF, CF, ESF, EAFRD, and EMFF), the seven Portuguese regions are divided into less developed regions (GDP per capita <75% of the EU average): Norte, Centro, Alentejo, and the Azores, with a co-financing rate of 85%; transition regions (GDP per capita between 75% and 90% of the EU average): Algarve, with a co-financing rate of 80%; and more developed regions (GDP per capita >90% of the EU average): Lisboa and Madeira, with co-financing rates of 50% for Lisboa and 85% for Madeira. Figure 3 depicts the execution of the EU funds in Portugal per type of region. As formerly stated, the programming and implementation defined by the PA for Portugal are based on four key thematic domains, also considering two transversal dimensions and seven OPs regarding integrated intervention at the territorial level (Vaquero et al., 2020).
The defined TOs are concentrated on a series of priorities, such as R&I (TO1), ICT (TO2), business competitiveness (TO3), and the LCE (TO4), that fulfill regulatory requirements (74% in less developed regions; 69% in the transition region of Algarve; 67% in the outermost regions of Madeira and the Azores; 73% in the more developed region of Lisbon). According to the data available on the P2020 website, about 95% of the budget allocated to Portugal has already been spent.
Regarding the evaluation of the PT programs, many difficulties were found in data collection, due, namely, to the usage of different reporting nomenclatures across the various PT regions and OPs (see, e.g., the reports available at https://portugal2020.pt/portugal-2020/o-que-e-o-portugal-2020/). The autonomous reporting of the lists of operations referring to P2020 can partially justify those differences (e.g., an investment priority ID named 1.1, 1.a, and 1a; similarly, CO01 and O.04.02.02.C for the same indicator in investment priority 4.b). These also generate some inconsistencies when comparing data at the regional level (i.e., Norte2020, Centro 2020, Alentejo2020, Lisboa2020, Algarve2020, and Madeira e Açores 2020). Another important issue regarding the reporting practices is related to the monitoring periods: the level of data detail is not uniform across the various programs or the monitored variables, making the development of comparative studies very hard or even impossible.
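A small sketch illustrates why such divergent IDs require harmonization before any cross-OP comparison. Separator and case differences can be folded away mechanically, but purely numeric spellings (such as "1.1" for priority 1a) need a hand-curated alias table; the mapping below is illustrative, not an official scheme.

```python
# Hedged sketch: normalising divergent investment-priority IDs ("1.1",
# "1.a", "1a") to one canonical form. The alias table is illustrative.
import re

# Hand-curated aliases for spellings that cannot be reconciled mechanically
# (e.g. the numeric form "1.1" used by some OPs for priority 1a).
ALIASES = {"11": "1a"}  # hypothetical mapping, not an official scheme

def normalise_priority_id(raw: str) -> str:
    """Fold case/separators, then apply the curated alias table."""
    folded = re.sub(r"[^0-9a-z]", "", raw.strip().lower())
    return ALIASES.get(folded, folded)

variants = ["1.1", "1.a", "1a", "4.b", "4b"]
print(sorted({normalise_priority_id(v) for v in variants}))
```

Without such a harmonization layer, the same priority is counted as two or three distinct categories when regional datasets are merged.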
Concerning the assessment indicators, the EU Observations Report on the PT-funded programs noticed a very positive evolution in Norte OP regarding the construction and definition of the indicator framework. That evolution resulted from (i) the interaction between the regional entities responsible for the programming process and the evaluation team; (ii) the process of harmonizing the bases of indicators at the national level; (iii) the interaction between national entities and the EC (European Commission, 2014a, b).
Following the main funding priorities for the PT programs, our study focuses on three TOs: LCE; R&I; and ICT. To perform the analysis, data were collected from public EU sources on ERDF, the dataset was afterward analyzed, and some preliminary results are illustrated in Fig. 3a and b.
As can be observed in Fig. 3a, for the “Less Developed Regions”, the programs showing higher values are “Competitiveness and Internationalization” (i.e., TO3), followed by the OPs of “Norte” and “Centro”, regarding both the “Total Eligible Costs Decided” and the “Total spending (Eligible Expenditure Declared)” (see the ESIF_2014-2020 categorization file available at https://cohesiondata.ec.europa.eu/funds/erdf).
In 2020, the more developed regions, “Lisboa” and “Madeira”, registered quite different behaviors, with Madeira having a “Total spending” of around half of the “Decided value”, while Lisboa registered a lower rate of execution. As “Outermost Regions”, the Azores and Madeira (i.e., funds of €17,740,636,416 and €11,207,915,405, respectively) both presented an even better execution rate in 2020—see Fig. 3b.
In the less developed PT regions (i.e., Norte, Centro, Alentejo, and the Azores), the highest “Decided values” were assigned to Multi Thematic Objectives, immediately followed ex aequo by “Educational and Vocational Training” and R&I, and subsequently by the “Competitiveness of Small and Medium Enterprises”. The LCE and ICT appear in the detached area of the graphic (i.e., split out for representing less than 10%), with percentages of around 1% or even lower—see Fig. 4.
Concerning the transition regions (i.e., Algarve), the highest percentage is allocated to “SMEs’ Competitiveness”, with 43% of the funds. R&I, LCE, and ICT have higher representativeness than in the less developed regions, but their decided values represent less than 10% of the total (see the detached area of the graph—Fig. 5).
In the more developed regions, the R&I TO again appears with a highly representative percentage, followed by SMEs’ competitiveness (TO3)—see Fig. 6. The TOs related to LCE and ICT again registered a lower percentage of the “Total decided”.
For the “Total eligible costs decided”, Multiple Thematic Objectives show the highest values, followed by R&I (TO1) and the Competitiveness of SMEs (TO3), whereas LCE (TO4) and ICT (TO2) obtain the lowest values—see Fig. 7a. Similar behavior is observed for the “Total spending”, except that “Educational and Vocational Training” registers the highest values—see Fig. 7b.
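To make the execution-rate comparison above concrete, the sketch below shows how such rates (declared spending over the decided amount) can be computed and aggregated per TO with pandas. The figures and column labels here are invented for illustration only; they are not values taken from the actual ESIF_2014-2020 categorization file.

```python
import pandas as pd

# Illustrative sample only: programme names are real PT OPs, but the
# figures and column labels are invented for this sketch.
df = pd.DataFrame({
    "programme": ["Norte", "Centro", "Lisboa", "Madeira"],
    "thematic_objective": ["TO1", "TO1", "TO4", "TO2"],
    "total_eligible_costs_decided": [100.0, 80.0, 40.0, 20.0],
    "eligible_expenditure_declared": [60.0, 48.0, 10.0, 10.0],
})

# Execution rate = declared (spent) amount over the decided amount
df["execution_rate"] = (
    df["eligible_expenditure_declared"] / df["total_eligible_costs_decided"]
)

# Aggregate per thematic objective, mirroring the per-TO comparison in the text
by_to = df.groupby("thematic_objective")["execution_rate"].mean()
print(df[["programme", "execution_rate"]])
print(by_to)
```

The same three-line pattern (ratio column plus a `groupby` aggregation) applies directly to the real categorization file once it is downloaded from the cohesiondata portal, with the column names adjusted to match that file.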
4 Conclusions
This study explores the key challenges currently associated with the evaluation of ERDF committed to three TOs: R&I, LCE, and ICT. A review of the literature on ERDF deployment is performed, with a particular emphasis on the PT case, as well as on its assessment and reporting processes.
The answers to our research questions are given below.
RQ1: “What are the main areas of concern of the studies devoted to the assessment of ERDF?”
The literature review allowed us to conclude that more recent publications dedicated to the assessment of ERDF show a greater concern with the following themes: “sustainability”, “smart specialization”, “higher education”, and “regional disparities”. In the case of Portugal, most studies focus on “regional development”, while also contemplating concerns with “competitiveness”, “higher education”, “innovation”, or “rural development”.
RQ2: “What are the main challenges inherent to the selection of the variables/indicators in the assessment of ERDF?”
Overall, the lack of data availability makes it difficult to assess all the OPs targeted for funding. Besides, there is no full match between the financial data and the corresponding achievements per TO and dimension of intervention.
We were also able to ascertain that the data available to perform the evaluations differ significantly with the type of TO under scrutiny. In particular, data availability on ICT (TO2) is scarce at both the regional and firm levels.
Regarding the PT case, the data analyses allowed us to conclude that there is a significant gap in some of the ERDF initiatives, particularly for TO2 and TO4, i.e., ICT and LCE. Besides, it was possible to identify a scarcity of publicly available data for the PT OPs in general, as well as various reporting conflicts. These issues hampered the possibility of performing deeper analyses involving, for example, an in-depth comparison of regions’ performance or a productivity analysis.
RQ3: “What are the best practices found in the assessment of ERDF?”
We concluded that best practices highlight the need for indicator harmonization and simplification. A possible way forward would be the inclusion of all indicators in a single guidance document. It would also be desirable to further enhance the quality and comprehensiveness of ICT performance data at both the regional and SME levels. Finally, it would be preferable to have more comparable statistics based on fewer metrics than those established for 2014–2020.
RQ4: “What are the main gaps found in the methodologies used to assess ESIF funds?”
Only a small number of assessments employ more consistent methodologies, such as statistical analyses or other mathematical tools. Non-parametric methods, such as Data Envelopment Analysis (DEA), have emerged as a significant quantitative alternative to the standard methodologies used in comparable circumstances. The primary advantage of employing this mathematical technique is the information it can supply to MAs on the inefficiency of the OPs when compared with their peers.
DEA also determines the benchmarks of inefficient OPs, so relevant information on the best procedures to follow to achieve efficiency can be gathered. Nonparametric techniques such as DEA can handle multiple evaluation criteria. Additionally, DEA can assist in identifying the primary causes of inefficiency, providing policymakers with helpful information on how to address them. Moreover, the DEA approach is easily adaptable to the assessment of various TOs. This type of analysis is particularly relevant when the programs are already in progress, because it enables MAs to foresee the impact of future changes in output/input levels on efficiency. Since more robust techniques are lacking during midterm/terminal evaluations, this nonparametric methodology can be especially advantageous and appropriate, as the current metrics for evaluating the Cohesion Policy can be combined with other methods and contextual factors. This may be accomplished, for example, by integrating this type of study with Stochastic Frontier Analysis (SFA), enabling us to determine whether the inefficient outcomes observed are mostly due to managerial failings, the contextual environment, or statistical noise.
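To make the DEA approach concrete, the sketch below implements the standard input-oriented CCR envelopment model (constant returns to scale) as a linear program solved with SciPy. It is a minimal illustration of the kind of model discussed, not the specific formulation used in the studies cited; the data and the input/output labels are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog


def dea_ccr_input(X, Y):
    """Input-oriented CCR (constant returns to scale) efficiency scores.

    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    For each DMU o, solve: min theta s.t.
      sum_j lambda_j * x_ij <= theta * x_io  (inputs),
      sum_j lambda_j * y_rj >= y_ro          (outputs),  lambda >= 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints: sum_j lambda_j x_ij - theta x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Output constraints: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(
            c,
            A_ub=np.vstack([A_in, A_out]),
            b_ub=np.r_[np.zeros(m), -Y[o]],
            bounds=[(None, None)] + [(0, None)] * n,
        )
        scores.append(res.x[0])
    return np.array(scores)


# Invented example: 4 OPs, one input (funds decided) and one output
# (an achievement indicator) -- labels are assumptions for illustration.
X = [[2.0], [4.0], [4.0], [8.0]]
Y = [[2.0], [4.0], [3.0], [4.0]]
scores = dea_ccr_input(X, Y)
print(scores)  # efficient units score 1; inefficient ones score below 1
```

An OP scoring, say, 0.75 could reduce its inputs to 75% of their current level and still produce its outputs if it operated like its efficient peers; the optimal lambda weights identify those peer benchmarks.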
All in all, our findings emphasize the need for harmonization and simplification of the usage of indicators to evaluate the funded OPs. Besides, an important effort should be placed on the reporting of results to allow for better assessments and to avoid poor outcomes. Finally, we have identified a trade-off between the required detail of the achievements reported and the number of indicators used to support their description, i.e., comprehensiveness versus simplicity.
References
Auerbach, A. J., & Gorodnichenko, Y. (2012). Measuring the output responses to fiscal policy. American Economic Journal: Economic Policy, 4(2), 1–27. https://doi.org/10.1257/pol.4.2.1
Bachtler, J., & Wren, C. (2006). Evaluation of European Union cohesion policy: Research questions and policy challenges. Regional Studies, 40(02), 143–153. https://doi.org/10.1080/00343400600600454
Barca, F., McCann, P., & Rodríguez-Pose, A. (2012). The case for regional development intervention: Place-based versus place-neutral approaches. Journal of Regional Science, 52(1), 134–152. https://doi.org/10.1111/j.1467-9787.2011.00756.x
Cortuk, O., & Guler, M. H. (2015). Disaggregated approach to government spending shocks: A theoretical analysis. Journal of Economic Policy Reform, 18(4), 267–292. https://doi.org/10.1080/17487870.2014.951046
Crescenzi, R., & Giua, M. (2016). The EU Cohesion Policy in context: Does a bottom-up approach work in all regions? Environment and Planning A: Economy and Space, 48(11), 2340–2357. https://doi.org/10.1177/0308518X16658291.
Dall’Erba, S., & Le Gallo, J. (2008). Regional convergence and the impact of European structural funds over 1989–1999: A spatial econometric analysis. Papers in Regional Science, 87(2), 219–244. https://doi.org/10.1111/j.1435-5957.2008.00184.x
Duranton, G., & Venables, A. J. (2021). Place-based policies: principles and developing country applications. In Handbook of regional science (pp. 1009–1030). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-60723-7_142.
European Commission. (2013). Results indicators 2014+: Report on pilot tests in 23 regions/OPs across 15 MS of the EU. Retrieved 30/05/2022. https://ec.europa.eu/regional_policy/sources/docoffic/2014/working/result_indicator_pilot_report.pdf.
European Commission. (2014a). Resumo do acordo de parceria para Portugal [Summary of the partnership agreement for Portugal], 2014–2020. Brussels. Retrieved 30/05/2022. https://ec.europa.eu/info/sites/default/files/partnership-agreement-portugal-summary-july2014_pt.pdf.
European Commission. (2014b). Commission observations—Partnership agreement Portugal. Brussels. Retrieved 30/05/2022. https://ec.europa.eu/info/sites/default/files/partnership-agreement-portugal-observations-may2014b_en.pdf.
European Commission. (2015). Guidance document on monitoring and evaluation—European regional development fund and Cohesion fund. Concepts and recommendations. Retrieved 30/05/2022, http://ec.europa.eu/regional_policy/sources/docoffic/2014/working/wd_2014_en.pdf.
European Commission. (2018). ERDF & Cohesion fund indicators 2021+, 29 November 2018. https://ec.europa.eu/regional_policy/sources/docgener/evaluation/doc/29112018/6_Indicators2021_evalnet.pdf.
European Commission. (2022). European structural and investment funds. Country Data for Portugal. Retrieved 01/08/2022. https://cohesiondata.ec.europa.eu/countries/PT.
Golany, B., & Roll, Y. (1989). An application procedure for DEA. Omega, 17(3), 237–250. https://doi.org/10.1016/0305-0483(89)90029-7
Gómez-García, J., Enguix, M. D. R. M., & Gómez-Gallego, J. C. (2012). Estimation of the efficiency of structural funds: a parametric and nonparametric approach. Applied Economics, 44(30), 3935–3954. https://doi.org/10.1080/00036846.2011.583224.
Gouveia, M. C., Henriques, C. O., & Costa, P. (2021). Evaluating the efficiency of structural funds: An application in the competitiveness of SMEs across different EU beneficiary regions. Omega, 101, 102265. https://doi.org/10.1016/j.omega.2020.102265
Gramillano, A., Familiari, G., Alessandrini, M., Čekajle, V., Vozab, J., Gassen, N. S., & European Commission. (2018a). Development of a system of common indicators for European Regional Development Fund and Cohesion Fund interventions after 2020—Part I—Thematic Objectives 1, 3, 4, 5, 6. t33 srl, SWECO consortium, 26 July 2018. Retrieved 30/05/2022. https://ec.europa.eu/regional_policy/sources/docgener/studies/pdf/indic_post2020/indic_post2020_p1_en.pdf.
Gramillano, A., Familiari, G., Alessandrini, M., Čekajle, V., Vozab, J., Gassen, N. S., & European Commission. (2018b). Development of a system of common indicators for European Regional Development Fund and Cohesion Fund interventions after 2020—Part II—Thematic Objectives 2, 7, 8, 9, 10, 11. t33 srl, SWECO consortium, 26 July 2018. Retrieved 30/05/2022. https://ec.europa.eu/regional_policy/sources/docgener/studies/pdf/indic_post2020/indic_post2020_p2_en.pdf.
Henriques, C., & Viseu, C. (2022a). Are ERDFs devoted to boosting ICTs in SMEs inefficient? Insights through different DEA models. In C. Henriques & C. Viseu (Eds.), EU Cohesion Policy Implementation—Evaluation Challenges and Opportunities. Proceedings of the 1st International Conference on Evaluating Challenges in the Implementation of EU Cohesion Policy (EvEUCoP 2022) (pp. 23–32). Springer.
Henriques, C., & Viseu, C. (2022b). Are ERDFs devoted to boosting ICTs in SMEs inefficient? Further insights through the joint use of DEA with SFA models. In C. Henriques & C. Viseu (Eds.), EU Cohesion Policy Implementation—Evaluation Challenges and Opportunities. Proceedings of the 1st International Conference on Evaluating Challenges in the Implementation of EU Cohesion Policy (EvEUCoP 2022) (pp. 33–43). Springer.
Henriques, C., Viseu, C., Neves, M., Amaro, A., Gouveia, M., & Trigo, A. (2022a). How efficiently does the EU support research and innovation in SMEs? Journal of Open Innovation: Technology, Market, and Complexity, 8(2), 92. https://doi.org/10.3390/joitmc8020092
Henriques, C., Viseu, C., Trigo, A., Gouveia, M., & Amaro, A. (2022b). How efficient is the cohesion policy in supporting small and mid-sized enterprises in the transition to a low-carbon economy? Sustainability, 14(9), 5317. https://doi.org/10.3390/su14095317
Hollanders, H. (2021). Regional innovation scoreboard 2021. Brussels, Belgium: European Commission. https://doi.org/10.2873/674111.
Kyriacou, A. P., & Roca-Sagalés, O. (2012). The impact of EU structural funds on regional disparities within member states. Environment and Planning C: Government and Policy 30(2), 267–281. https://doi.org/10.1068/c11140r
Nigohosyan, D., & Vutsova, A. (2018). The 2014–2020 European regional development fund indicators: The incomplete evolution. Social Indicators Research, 137(2), 559–577. https://doi.org/10.1007/s11205-017-1610-8
Ortiz, R., & Fernandez, V. (2022). Business perception of obstacles to innovate: Evidence from Chile with pseudo-panel data analysis. Research in International Business and Finance, 59, 101563. https://doi.org/10.1016/j.ribaf.2021.101563
Pastor, J., Pons, E., & Serrano, L. (2010). Regional inequality in Spain: Permanent income versus current income. The Annals of Regional Science, 44(1), 121–145. https://doi.org/10.1007/s00168-008-0236-9
Portugal. (2020a). Webpage P2020. Retrieved 23/07/2022. www.portugal2020.pt.
Portugal. (2020b). Portugal 2020—Lista de Operações Aprovadas [List of approved operations]. Retrieved 23/07/2022. https://www.portugal2020.pt/content/lista-de-operacoes-aprovadas.
Puigcerver-Peñalver, M. C. (2007). The impact of structural funds policy on European regions’ growth: A theoretical and empirical approach. The European Journal of Comparative Economics, 4(2), 179–208.
Reggi, L., & Gil-Garcia, J. R. (2021). Addressing territorial digital divides through ICT strategies: Are investment decisions consistent with local needs? Government Information Quarterly, 38(2), 101562. https://doi.org/10.1016/j.giq.2020.101562
Ruiz-Rodríguez, F., Lucendo-Monedero, A. L., & González-Relaño, R. (2018). Measurement and characterisation of the digital divide of Spanish regions at enterprise level. A comparative analysis with the European context. Telecommunications Policy 42(3), 187–211. https://doi.org/10.1016/j.telpol.2017.11.007
Scandizzo, P. L., & Pierleoni, M. R. (2020). Short and long-run effects of public investment: Theoretical premises and empirical evidence. Theoretical Economics Letters, 10(04), 834. https://doi.org/10.4236/tel.2020.104050
Scotti, F., Flori, A., & Pammolli, F. (2022). The economic impact of structural and Cohesion funds across sectors: Immediate, medium-to-long term effects and spillovers. Economic Modelling, 111, 105833. https://doi.org/10.1016/j.econmod.2022.105833
Sörvik, J., & Kleibrink, A. (2016). Mapping EU investments in ICT-description of an online tool and initial observations. Luxembourg: Publications Office of the European Union. Retrieved 30/05/2022. https://op.europa.eu/en/publication-detail/-/publication/6fd75d00-44d0-11e6-9c64-01aa75ed71a1/language-en.
Vaquero, P., Dias, M. F., & Madaleno, M. (2020). Portugal 2020: Improving energy efficiency of public infrastructures and the municipalities’ triple bottom line. Energy Reports, 6, 423–429. https://doi.org/10.1016/j.egyr.2020.11.195
Acknowledgements
This work has been funded by the European Regional Development Fund in the framework of Portugal 2020—Programa Operacional Assistência Técnica (POAT 2020), under project POAT-01-6177-FEDER-000044 ADEPT: Avaliação de Políticas de Intervenção Cofinanciadas em Empresas. INESC Coimbra and CeBER are supported by the Portuguese Foundation for Science and Technology through projects UID/MULTI/00308/2020 and UIDB/05037/2020, respectively.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Cite this paper
Amaro, A., Henriques, C., Viseu, C. (2023). EU Operational Programmes Reporting: From Basics to Practices. In: Henriques, C., Viseu, C. (eds) EU Cohesion Policy Implementation - Evaluation Challenges and Opportunities. EvEUCoP 2022. Springer Proceedings in Political Science and International Relations. Springer, Cham. https://doi.org/10.1007/978-3-031-18161-0_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18160-3
Online ISBN: 978-3-031-18161-0
eBook Packages: Political Science and International Studies (R0)