
1 Introduction

Trust goes hand in hand with the introduction of novel voting methods. This is particularly true when these new methods involve the extensive use of information and communications technology (ICT), as is the case with internet voting (i-voting). Creating and establishing trust is an important and challenging task, which should be undertaken before i-voting is even offered to voters, as trust is commonly seen as a conditio sine qua non for its adoption. Such a view is unsurprising: elections lie at the heart of democracy and are a joint exercise of a mutually unknown multitude of voters, aimed at transferring power to elected representatives, which in turn provides a basis for building trust in society in all other matters. Elections and voting are, therefore, trust exercises in themselves, and trust in the voting method used is imperative for it to fulfil its societal purpose.

In recent years, interest in studying trust in internet voting (i-voting) and the closely related, but distinct, electronic voting (e-voting) has increased, although the main body of research has been conducted from a technical perspective [15]. Surprisingly, research on trust repair and its related aspects in the context of i-voting (e.g., areas of potential trust violation, trust repair strategies and mechanisms, and preventive tactics) has not garnered much interest in the literature. This is not for lack of i-voting incidents that might be perceived as damaging voters' trust and necessitating subsequent action for trust repair; rather, those events and the responses to them have not yet been studied through the lens of trust repair.

This article seeks to address that research gap and to open a new direction in the research on trust in i-voting. First, it provides definitions of terms relevant to trust in i-voting and presents a review of existing research on trust repair in areas such as marketing, management, organisation, and civil society research. Second, the article incorporates these insights into existing concepts used in i-voting research. This results in a conceptual framework for voters' trust repair, with each element explained in turn. Third, the article presents findings relevant to trust repair in i-voting and provides a few notable real-world examples of trust violation and trust repair in i-voting. Practical implications and applicability for i-voting practitioners and researchers are discussed, highlighting the importance of timely detection and addressing of trust violations, and offering suggestions on how strategies for trust repair can be crafted. Finally, the article outlines avenues for future research on trust repair in i-voting, with open questions to be answered.

2 Theoretical Background and Literature Review

2.1 Trust in I-voting

Since i-voting became an object of scientific interest, trust has been one of the most frequently used terms, although definitions and understandings vary among scientific fields. Moreover, trust has not usually been addressed as a principal element but rather as an ancillary component of research. The lead in i-voting research has been taken by computer science, which has shaped the approach to and definition of trust. Computer scientists and social scientists, at first glance, appear to have had opposite objectives when it comes to trust – while computer scientists have viewed the need for trust as something 'bad' and have focused on minimising the parts of voting technologies that have to be trusted, social scientists (or, more precisely, psychologists) have tended to focus on maximising trust, viewing it as a desirable 'good' in and of itself [32]. This has resulted in two differing views on trust: bad trust, i.e., "something that people establish because they have to, not because the system is inherently trustworthy", and good trust, i.e., "something that people establish because they want to, owing to the system's trustworthiness" [32]. Without delving further into this differentiation, which may not even be fruitful, our understanding of trust in this article is in line with the social science understanding of trust as a desirable characteristic, "a mechanism that helps us to reduce social complexity" [32], thus serving as "the bond of society or a lubricant for social relations" [34].

Although trust might be characterised as "an immaterial bond, including subjective evaluations and social projections" [13], which is somewhat vague and amenable to an emotivist or sentimental reading, its deliverables are "hard and measurable results" [2]. The desirability of trust and its subsequent palpable effects initiate a debate about what constitutes trust in i-voting and how the trustor and trustee should be defined. Research on trust in i-voting relies on and borrows from adjacent scientific fields, particularly research on trust in technology, in which two sets of beliefs are inconsistently utilised in trust constructs, with a tension between the human-like and system-like attributes of technology [25]. Lankton, McKnight, and Tripp demonstrate that the two sets of trusting beliefs are compatible by pairing human-like beliefs (integrity, ability/competence, benevolence) with corresponding system-like trusting beliefs (reliability, functionality, and helpfulness) [25]. In the first pairing, "integrity", the belief that a trustee adheres to principles acceptable to the trustor, corresponds to "reliability", the belief in the consistent proper operation of the technology. The second pairing involves two human-like beliefs, "ability", i.e., the trustee's skills, competencies, and characteristics that influence a specific domain, and "competence", i.e., the belief that the trustee has the ability to do what the trustor needs done, which correspond to "functionality", the technology's capability, functions, or features to meet the trustor's needs. The third and final pairing is between "benevolence", the belief that the trustee will act in the trustor's interest beyond selfish motives, and "helpfulness", the belief that the technology provides adequate and responsive support to users. It is argued that the distinctive feature in choosing trusting belief constructs to apply to a specific technology is the level of humanness, i.e., "aspects of technologies and users' interactions with technologies that can make them seem more or less human-like and, thereby, exhibit different levels of 'humanness'" [25]. The authors conclude that a technology's humanness has to be addressed when considering technology trust constructs and that the type of trusting beliefs influences outcomes – human-like trust beliefs have a stronger influence when the technology is perceived by users as high in humanness, and, vice versa, system-like trust beliefs have a stronger influence when the technology is perceived as low in humanness. In the context of i-voting, Erb, Duenas-Cid, and Volkamer argue that "the trustee is no longer a moral agent but a technological artifact created by humans that has limited capabilities" while also noting that "the role played by those stakeholders having the capacity to provide trust or distrust of the system even if not directly related to its functioning" should be acknowledged [15]. The authors choose to focus mainly on system-like trusting beliefs when assessing trust in i-voting, while asserting that the nature of trust in this context is considerably more complex and intertwined and cannot simply be reduced to trust in the technological dimension of i-voting.
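To make this three-way correspondence easier to operationalise in empirical work, the sketch below (a minimal illustration, not part of the cited framework) records the pairings of human-like and system-like beliefs as a simple data structure; the class names, the 0.5 humanness threshold, and the helper function are our own illustrative assumptions.

```python
from enum import Enum


class HumanLikeBelief(Enum):
    INTEGRITY = "integrity"
    ABILITY_COMPETENCE = "ability/competence"
    BENEVOLENCE = "benevolence"


class SystemLikeBelief(Enum):
    RELIABILITY = "reliability"
    FUNCTIONALITY = "functionality"
    HELPFULNESS = "helpfulness"


# Pairings of human-like and system-like trusting beliefs described by
# Lankton, McKnight, and Tripp [25].
BELIEF_PAIRINGS = {
    HumanLikeBelief.INTEGRITY: SystemLikeBelief.RELIABILITY,
    HumanLikeBelief.ABILITY_COMPETENCE: SystemLikeBelief.FUNCTIONALITY,
    HumanLikeBelief.BENEVOLENCE: SystemLikeBelief.HELPFULNESS,
}


def dominant_belief_set(perceived_humanness: float) -> str:
    """Illustrative heuristic only: human-like beliefs weigh more when the
    technology is perceived as high in humanness, system-like beliefs when
    it is perceived as low; the 0.5 threshold is an assumption."""
    return "human-like" if perceived_humanness >= 0.5 else "system-like"
```

Such a structure could, for instance, be used to tag survey items or reported incidents with the belief pairing they concern.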

I-voting is commonly introduced as supplementary to traditional voting methods, primarily paper voting. As such, it is subject to high expectations and even more rigorous evaluations, benchmarked against the standards of these traditional methods. Its distinct characteristics, such as its technological basis and remoteness, pose unique complexities and challenges not present in traditional settings. Trust in i-voting is substantially conditioned by its underlying mechanics or operations, which are not as intuitive to a lay voter as traditional paper voting. Due to its technological complexity and sophistication, voters' beliefs are, to a certain extent, shaped by external stakeholders' views on a specific i-voting ecosystem, which combines human and technological dimensions. In the literature, this notion is recognised by Pieters [32], who borrows Luhmann's argument that one has to reduce complexity to "properly function in a complex social environment", and by Ehin and Solvak [14], who utilise 'the cue-taking approach' grounded in theories of bounded rationality, explaining how people are more prone to rely on cues from other trusted social actors under conditions of information scarcity, complexity, high uncertainty, limited time, and low information-processing ability. Similarly, Crane [9] uses 'trustworthiness cues', while Ferrin, Dirks, and Shah introduce the concept of 'trust transferability', according to which a third party contributes to providing trust-related information [17]. For these reasons, trust in i-voting can certainly be considered, at least in part, as intermediated trust.

2.2 Trust Repair

Research on trust can be grouped into six areas: 1) antecedents/preconditions for trust, 2) the process of trust-building, 3) contextual determinants of trust-building, 4) decision-making processes in trust, 5) implications and uses of trust, and 6) lack of trust, distrust, mistrust, and trust repair [28]. This last category is effectively an 'all-other-kinds' category, encompassing different aspects of trust and trust repair, even those that are not closely related. In i-voting research, trust repair has remained out of researchers' sight, which might be explained by the fact that i-voting has not been broadly adopted and, therefore, the dominant body of research focuses on trust building as a prerequisite for the adoption of i-voting. Although initial trust-building is more prevalent in real-world practice than trust repair, this does not imply that trust repair does not occur where i-voting is introduced or experimented with. It is already intuitively understood that establishing trust in i-voting is just the first step and that voters' perceptions should be governed after the initial rollout of an i-voting system as part of "continuous supervision of actors' perceptions regarding the Internet voting system" [36]. As trust repair has not been extensively studied in the i-voting domain, we turn to the 'usual suspects' from related research areas, primarily marketing, management, organisation, and civil society research. In those areas, various topics related to trust repair have captured scholarly attention: the causes and consequences of trust violations and their multilevel character; the severity, intentionality, and timing of trust violations and the affected trustworthiness dimensions; strategies and mechanisms for trust repair; as well as comparisons of pre- and post-repair levels of trust [26]. Two approaches are used to study trust repair: the variance approach (the "what") and the process approach (the "how"), with time as central to explaining how trust repair happens [3].

Trust repair "entails improvement in a trustor's trust after it was damaged by a trust violation" [3]. More elaborately, it is a process directed at restoring cooperation between the trustor and trustee and making the trustor willing to be vulnerable again by re-establishing their positive expectations [21]. Trust repair is a response to reductions in the perception of one or more dimensions of trustworthiness (cognition, affect, behaviour, and intended behaviour) [2], which decrease the existing level of trust. Since there is no unifying or umbrella term, different terms are used to describe a decrease in the existing level of trust, such as trust violation [10], transgression [6], erosion [2], breach [30], or damage [22]. We will use the term 'trust violation' in this article.

A shared assumption is that trust violations, if they cannot be avoided, should be "repaired" or "fixed" when they emerge. This assumption reflects the social sciences' understanding of trust as desirable, with benefits for both the trustor and trustee. Building trust is a lengthy process, while trust violations can occur unexpectedly and, within a short period, undo all previous efforts, overshadowing the established trust. In other words, "trust can take years to build but be lost in a day" [23]. When there are existing elements of distrust or reduced perceptions of trust, repairing trust may necessitate even more effort and time than initially building it [19]. The subjective perceptions of trust violations, which differ among stakeholders, coupled with the complexities of power relations and interests, make trust repair even more challenging [1].

What is more, trust violations are not confined to the particular transgressing organisation but often transcend its boundaries and spill over to other organisations in a sector, so even blameless organisations are affected and (have to) engage in trust repair and in differentiating themselves from organisations involved in the trust violation [4]. An interesting area of trust repair research is preventive tactics at individual, organisational, and sectoral levels, which have a twofold purpose - to prevent the occurrence of trust violations by influencing potential causes or, if a violation still occurs, to mitigate its impact by making the consequences less severe [6]. For instance, such tactics include training of staff, job-level checks and balances, and staff evaluations at the individual level; audits, governance practices, and internal controls at the organisational level; and regulation and oversight at the sectoral level [6].

It might be argued that the dominant narrative in extant research, even when it is not clearly and explicitly stated, is that trust repair is somewhat mechanical in nature [4], as if trust were a broken machine part that can be repaired so perfectly that no one notices any difference, or an elastic band that is stretched and then returns to its initial position. This mechanistic view of trust repair is an oversimplification, if not a complete fabrication, for at least two reasons. First, there is a point of no return regarding trust violations, i.e., trust cannot always be repaired. Second, repaired trust is different from (though not necessarily of lesser quality than) unbroken trust. In other words, trust repair is much more like medical healing than mechanical repair [26]. The term 'trust repair' is widely used in research, with some exceptions (e.g., trust restoration), so we will stick with that term, bearing in mind that it is not the mechanical repair one may think of when seeing it.

Three related but distinct stage models, utilised in organisational trust repair [21], help explain how a consciously led trust repair process is constructed as a series of consecutive steps or phases. For trust repair in i-voting, the model formulated by Gillespie and Dietz [20] is particularly pertinent and effective. This approach strategically addresses trust violations, starting with an immediate response (step 1) and progressing to a diagnostic phase (step 2) that informs the development of reforming interventions (step 3), which are then evaluated for their effectiveness (step 4). In contrast, the two other stage models place more emphasis on the trustee's acknowledgement of the trust violation and willingness to accept responsibility, accompanied by subsequent penance. In instances of trust violations, an immediate response is considered beneficial, at least to communicate acknowledgement of reduced trust perceptions and to outline the steps necessary to identify the cause. However, this is not always the case, as responding to a trust violation does not necessarily require positive action from the trustee. Some research suggests that defensive strategies may benefit the trustee more in the short term than genuine communication [18]. A transgressing organisation might deny the occurrence of the trust violation, downplay the problem to minimise its significance [4], and continue with a business-as-usual approach [6]. The next stage in trust repair involves diagnosing the incident, determining the severity of trust damage, and identifying the affected trustworthiness dimensions in order to orchestrate effective trust repair, as "the context-specific nature of trust affects the choice of trust repair mechanism(s)" [2]. What works in one context or country may not work in another, as "an appropriate social ritual to restore a relationship is culturally and contextually bound" [1]. Interventions or mechanisms for trust repair are systematised by Bachmann et al. [1] into a framework for organisational and institutional trust repair consisting of six mechanisms. This framework is probably the most comprehensive of its kind and is widely cited in trust repair research, particularly in empirical studies. The trust repair mechanisms are as follows: 1. Sense-making, 2. Relational, 3. Regulation and controls, 4. Ethical culture, 5. Transparency, and 6. Transference. Each trust repair mechanism is described by its underlying assumption, foci, and mechanism, accompanied by practical examples of measures from real-world cases. Figure 1 below provides detailed descriptions of each trust repair mechanism.

Fig. 1. A framework of six trust repair mechanisms for repairing organizational and institutional trust, adapted from Bachmann et al. [1]

Trust repair measures do not function in isolation. Quite the contrary, they are interdependent and often rely on each other [1]. Therefore, before implementing trust repair measures, it is important to understand their interactions. Some measures can enhance the effects of others, while in some cases, they may have negative consequences. Hence, trust repair is a serious undertaking, and its management requires a systematic approach.
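As a purely illustrative aid, the sketch below models the staged process of Gillespie and Dietz [20] and the six mechanisms of Bachmann et al. [1] as an ordered sequence of steps, each drawing on a jointly chosen mix of mechanisms; all class, field, and function names are our own assumptions rather than constructs defined in the cited works.

```python
from dataclasses import dataclass, field
from enum import Enum


class RepairMechanism(Enum):
    """The six trust repair mechanisms systematised by Bachmann et al. [1]."""
    SENSE_MAKING = "sense-making"
    RELATIONAL = "relational"
    REGULATION_AND_CONTROLS = "regulation and controls"
    ETHICAL_CULTURE = "ethical culture"
    TRANSPARENCY = "transparency"
    TRANSFERENCE = "transference"


# The four-step staged process formulated by Gillespie and Dietz [20].
REPAIR_STEPS = (
    "immediate response",
    "diagnosis",
    "reforming interventions",
    "evaluation",
)


@dataclass
class RepairPlan:
    """A plan assigns a mix of mechanisms to each step; because mechanisms
    are interdependent, the mix is chosen jointly, not per mechanism in
    isolation."""
    mechanisms_per_step: dict = field(default_factory=dict)

    def assign(self, step: str, mechanisms: set) -> None:
        if step not in REPAIR_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.mechanisms_per_step[step] = mechanisms


# Example: an immediate response leaning on sense-making and transparency.
plan = RepairPlan()
plan.assign("immediate response",
            {RepairMechanism.SENSE_MAKING, RepairMechanism.TRANSPARENCY})
```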

2.3 E-voting Frameworks

The proposed trust repair framework borrows its building blocks from established frameworks in i-voting research. We utilise Krimmer's "The E-voting Mirabilis" [24] and its adapted version, "The Mirabilis of Internet Voting System Failure" by Spycher-Krivonosova [36], who integrates Toots' information system failure framework [37] with Krimmer's model. Elements from these concepts help us identify relevant stakeholders, their mutual relationships, and their roles in trust-building and trust-repair processes. "The E-voting Mirabilis" is a conceptual framework for the analysis of ICT in elections, comprising four dimensions affecting e-voting adoption (1. Technology; 2. Law; 3. Politics; 4. Society) and five stakeholder groups that are of help in e-voting adoption (1. Citizens, Voters; 2. Politicians, Candidates; 3. Election Management; 4. Inventors, Vendors; 5. Media, Observers) [24]. Krimmer lists stakeholders as leaves of the E-voting Mirabilis without any special division or relations among them. Spycher-Krivonosova adapts Toots' information system failure framework with elements of Krimmer's Mirabilis and divides stakeholders into two groups named "Stakeholders" (citizens, voters; politicians, candidates; media; observers) and "Project Organization" (election management and vendors) [36]. This distinction is made between those in charge of i-voting, i.e., stakeholders responsible for delivering elections, and other stakeholders outside this internal process.

3 Framework for Voters’ Trust Repair in Internet Voting

This framework integrates insights from trust repair research across multiple scholarly fields with established e-voting frameworks. By leveraging current knowledge on trust repair, we gain an understanding of how voters' trust in i-voting is formed, the influence of external stakeholders on voters' trust, the nature and impact of trust violations, and the basic propositions of trust repair. Furthermore, the integration of trust repair research findings allows us to delineate the step-by-step processes involved in trust repair, the mechanisms available for trust repair, and the comparative quality of repaired versus pre-violation trust. The established e-voting frameworks help identify stakeholders and their relations and clarify the roles of the trustor and trustee, as well as other stakeholders within the i-voting context (Fig. 2).

Fig. 2. A framework for voters' trust repair in i-voting, integrating concepts from [1, 20, 24, 25, 26, 36]

The constitutive elements of the framework will be thoroughly explained in the subsequent subsections, but it is useful to provide a brief overview and outline the basic assumptions and relationships. Voters have two types of trusting beliefs toward i-voting: human-like beliefs and system-like beliefs. These beliefs are influenced by numerous stakeholders, given that many voters cannot fully understand the technological aspects of i-voting. Voters' beliefs refer to both the human dimension (the i-voting organisation) and the technological dimension (the i-voting system), forming the basis for their overall perceptions of i-voting. I-voting organisers have a dual role within this framework, acting as both trustees and trust repairers. This group includes decision-makers, electoral management bodies, and vendors. When voters' perceptions of i-voting erode, a trust violation occurs, necessitating trust repair. Trust repair takes place when i-voting organisers respond to trust violations. To ensure successful trust repair, it is essential to assess the affected beliefs/dimensions, the available trust repair measures, how these measures interact, their suitability for specific trust violation areas, the appropriate mix of measures for each case, and any contextual specifics before beginning the trust repair process. Trust repair is overseen by i-voting organisers, who direct it toward both voters and external stakeholders, as trust in i-voting is intermediated trust.

The framework presented in this article is developed for politically binding elections. While it is tailored for this specific context, it may also be adapted appropriately for other forms of i-voting (e.g., interparty elections).

3.1 Voters

We adopt a macro perspective, with voters as a collective of individuals eligible to vote in an election and with their "collectively held trust perceptions" [5], which are conceptualised as generalised or aggregated trust. The trustors are voters as a massed group of people; while acknowledging that individual characteristics play a role in trust building and trust repair, we go beyond individual determinants of trust (which are usually applied in research on trust in i-voting, such as the TAM and UTAUT models [13]), keeping generalised trust in mind and treating trust repair as a systematic endeavour to improve the violated trust of a larger collective.

3.2 Trusting Beliefs

Trust in i-voting is conceptualised as a multifaceted, multilevel, and multi-relational construct involving human-like trust beliefs (integrity, benevolence, and competence) and system-like trust beliefs (reliability, functionality, and helpfulness) directed toward the respective human and technological dimensions of i-voting. Both sets of beliefs shape voters’ trust, as i-voting features a human component in its organisers and a technological component in its ICT infrastructure. This distinction in trusting beliefs can be applied to various aspects of i-voting systems, revealing potential areas for trust violations and underscoring the importance of considering both human and technological factors in trust repair management. Understanding the dual nature of trusting beliefs is crucial for identifying which beliefs are affected by trust violations so that an appropriate trust repair strategy can be tailored for each case.

Whether human-like or system-like beliefs weigh more, and in what proportion they contribute to building trust in i-voting, is an open question. However, as mentioned, some researchers underline the degree of a technology's perceived humanness [25], which might suggest a higher relevance of system-like beliefs because i-voting is, from the perspective of voters, a tool for expressing their preferences. Other researchers argue, equally rightly, that i-voting is a rather sophisticated technology and that trust in i-voting rests on 'the cue-taking approach' used by voters, especially partisan voters [14], pushing trust in i-voting toward human-like beliefs on an imaginary continuum with the two opposite sets of beliefs at its ends. Nonetheless, both human-like and system-like trust beliefs undoubtedly influence voters' trust, and violations in either dimension necessitate trust repair.

3.3 I-voting

I-voting as a whole is divided into two subgroups: the i-voting organisation (human dimension) and the i-voting system (technological dimension), each corresponding to one of the two distinct types of trusting beliefs.

I-voting Organisation

Three stakeholder groups fall into the i-voting organization category: decision-makers, electoral management bodies (EMBs), and vendors. These stakeholders represent the human dimension of i-voting, and voters’ human-like trusting beliefs are directed at them when assessing the trustworthiness of i-voting. All of these groups are involved in creating institutional and legal frameworks and managing i-elections, making them accountable for the performance of i-voting. They are not only the subjects of voters’ assessments, through which trust in i-voting can be built or violated, but they also stand on the front lines when trust violations occur and must manage trust repair.

Decision-makers possess the final authority to adopt, reject, or discontinue i-voting. In instances of trust repair, they are significantly engaged in sense-making by commissioning investigations into the causes of trust violations, usually with the assistance of a broad circle of stakeholders. These stakeholders may be internal, from within the i-voting organisation, or external, potentially from other countries, which is relatively common. Such an investigation has a twofold goal: to understand what went wrong and to give recommendations for improvements, resulting in amendments to existing regulations and practices on i-voting or the introduction of new ones. For instance, Switzerland was not successful in introducing i-voting as a general voting method in 2019, and before continuing or restarting its i-voting project, a report, followed by a broad consultation, was published with a set of measures to be implemented in a redesigned i-voting project [16].

EMBs are engaged in day-to-day operations and have the best overview of the current state of an i-voting system. They employ staff from various backgrounds, including legal and computer experts. While their role in trust building is already recognised, they also play a crucial role in trust repair as first-line responders during times of uncertainty and trust violations. Their position as trustees necessitates a strategic approach to both preventing and addressing trust violations.

In developing i-voting solutions, EMBs often collaborate with companies that specialise in the technical aspects of i-voting. Given that the i-voting market is relatively small and resembles an oligopoly, these companies typically engage in other business activities or offer their services internationally. Such an international presence can result in trust violations in other markets where these private vendors operate, raising questions about the i-voting systems they support, even in countries where no trust violation has yet occurred. Potential trust violations attributed to a particular vendor can impact the specific i-voting system involved and cause a spillover effect on other i-voting projects the vendor manages in different countries. Such spillovers can influence the entire i-voting market, compelling stakeholders in other countries who are not at fault to differentiate themselves from the offending entity and undertake trust repair with their stakeholders [4].

I-voting System

An i-voting system is the technological basis of i-voting, encompassing all hardware and software elements required to run and support i-voting, as well as auxiliary applications and services. These supplementary components, although not necessary for the technological execution of i-voting, are incorporated to serve various purposes, including improving transparency and building voter trust, such as verifiability tools.

3.4 External Stakeholders

External stakeholders, as defined in this framework, are stakeholders external to an i-voting system in terms of accountability for its development and functioning. In other words, these stakeholders are not responsible for election delivery but have the power to influence voters' trust in i-voting by claiming that some internal stakeholders (the i-voting organisation) or the i-voting system itself cannot be trusted, or by questioning some aspect of the human or technological dimension of i-voting. The trust of these stakeholders in i-voting is transferred to voters' beliefs about i-voting and influences their trust in either a positive or negative way. External stakeholders can thus facilitate or hinder trust: they can either produce or induce trust damage, or partner in trust repair.

Our framework introduces three modifications: voters are placed as the central stakeholder group in the model, additional stakeholder groups are added, and the stakeholder groups are regrouped into the "i-voting organisation" (internal stakeholders) and "external stakeholders". The vast majority of the listed stakeholders come from Krimmer's "E-voting Mirabilis" [24] and its adapted version, "The Mirabilis of Internet Voting System Failure" by Spycher-Krivonosova [36]. The external stakeholder group comprises: 1. Media; 2. Observers; 3. Politicians, candidates; 4. NGOs; 5. Academia, independent experts; 6. International organisations; 7. Judiciary. The first three groups are the same as in Krimmer's and Spycher-Krivonosova's models, complemented by four groups of stakeholders who play a notable role in i-voting but, for various reasons, have not yet been included in existing models. These stakeholder groups have influenced voters' trust in e-voting and i-voting.

Political parties have a significant influence on shaping voters' trust in i-voting, as vividly demonstrated in the Estonian example [14]: voters who vote for a party that supports i-voting are more likely to trust i-voting than those who vote for parties with a negative attitude towards it, and this "partisan gap in trust" cannot be reduced to differences in voters' socio-demographic profiles, potentially leading to the polarisation of trust in and usage of i-voting along party lines. Moreover, i-voting has also been utilised in inter-party elections, where potential trust violations could impact voters' confidence in politically binding elections [38].

The discontinuation of Dutch e-voting resulted primarily from a campaign by an NGO called “We Don’t Trust Voting Computers”. How academia and independent experts can play a role in voters’ trust was demonstrated in Estonia in 2014 when a group consisting of university researchers, e-voting observers, independent researchers, and advisors, led by Professor J. Alex Halderman, published a security analysis [35] of the Estonian i-voting system in which they identified significant vulnerabilities, demonstrating potential client-side and server-side attacks that could alter election outcomes or compromise voter secrecy. Their analysis highlighted the lack of end-to-end verifiability and inadequate procedural controls, leading to their recommendation to discontinue its use for the time being. On the other hand, these groups can positively influence trust with their engagement in monitoring i-voting.

International organisations are also important stakeholders, yet they are not recognised by current models. The Council of Europe has, from the early days, served as a forum for discussions and the exchange of experiences among countries engaged in the adoption of e-voting and i-voting. Moreover, the CoE's role in setting international standards in e-voting (recommendations from 2004 [7] and 2017 [8]) can also impact voters' trust - it is possible to test one's own country's i-voting against the set international criteria and to signal that some of those standards are not (fully) respected, thus undermining trust among the electorate. The OSCE/ODIHR observes elections and publishes post-election reports that describe the overall electoral process, including the voting method, and provide opinions and recommendations based on findings gathered during the observation mission. For instance, the report on the 2023 Estonian parliamentary elections contains recommendations aimed at election authorities for addressing election stakeholders' concerns, implementing technical and organisational measures, and establishing transparency practices [31]. These publicly available reports create expectations among stakeholders, functioning as a to-do list for i-voting organisers, who are expected to tick off the given recommendations. This may create a perception of the recommendations as tasks rather than suggestions. Potential non-adherence to these recommendations can initiate a loop of trust erosion, starting within the expert community, which can subsequently be transferred to ordinary voters.

The last newly added group is the judiciary, a separate branch of power with a supervisory role in elections. Case law in e-voting is rich, and certain landmark judgments have influenced trust in e-voting [12].

3.5 Trust Violation

Trust violations can occur as a single event or as a series of events, both leading to a reduction in voters' trust perceptions of i-voting. For example, the abandonment of the general i-voting roll-out in Switzerland, although mainly perceived as triggered by the detection of critical vulnerabilities in the source code, "has fuelled the already heated debate over the future development of internet voting in Switzerland" [11]. Trust violations in i-voting can further be classified by cause, type, severity, timing, intentionality, and consequences [26]. The types of trust violations correspond to the three basic pairings of human-like and system-like trusting beliefs (integrity vs. reliability, ability/competence vs. functionality, benevolence vs. helpfulness). Not all trust violations have the same impact on voters' trust, and more severe trust violations must be addressed with greater urgency before the damage becomes irreparable and the whole i-voting system is abandoned.

When it comes to the timing of trust violations, the magnitude of the violations and their consequences differ depending on whether they occur in the early phases of i-voting development or early enough before elections. In 2014, Halderman's report [35] was published just a couple of days before the European elections, which necessitated an immediate response from the Estonian electoral administration, which completely denied the findings and tried to assure voters that everything was in order. Trust violations may also happen during the voting period, as occurred in Estonia in 2023, where minor issues arose, such as a delay due to manual data upload and a mismatch in district data. These issues, attributed to human error and delayed updates, were promptly addressed by officials, ensuring that the integrity of the voting process was not compromised [27].

Accurate detection of the nature of trust violation(s) helps create a trust repair strategy based on "careful planning, coordination and combined implementation of trust repair mechanisms that best repair affected or important trustworthiness dimensions" [2].
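To illustrate how such a diagnosis might feed into strategy selection, the hypothetical sketch below maps the affected belief pairing, severity, and timing of a violation to candidate repair mechanisms; both the mapping and the urgency rule are illustrative assumptions, not prescriptions from the cited literature.

```python
from enum import Enum


class AffectedBelief(Enum):
    """Affected belief pairings, mirroring the violation types above."""
    INTEGRITY_RELIABILITY = "integrity vs. reliability"
    COMPETENCE_FUNCTIONALITY = "ability/competence vs. functionality"
    BENEVOLENCE_HELPFULNESS = "benevolence vs. helpfulness"


# Hypothetical mapping from the affected belief pairing to repair mechanisms
# likely to be emphasised; the actual mix is context- and culture-bound.
CANDIDATE_MECHANISMS = {
    AffectedBelief.INTEGRITY_RELIABILITY: ["sense-making", "regulation and controls", "transparency"],
    AffectedBelief.COMPETENCE_FUNCTIONALITY: ["sense-making", "regulation and controls", "transference"],
    AffectedBelief.BENEVOLENCE_HELPFULNESS: ["relational", "ethical culture", "transparency"],
}


def draft_strategy(affected: AffectedBelief, severity: str, during_voting_period: bool) -> dict:
    """Return a rough strategy sketch; the urgency rule (severe violations
    or violations during the voting period demand an immediate response)
    is an illustrative assumption."""
    return {
        "respond_immediately": severity == "high" or during_voting_period,
        "candidate_mechanisms": CANDIDATE_MECHANISMS[affected],
    }


# Example: a severe functionality-related incident detected during voting.
print(draft_strategy(AffectedBelief.COMPETENCE_FUNCTIONALITY, "high", True))
```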

3.6 Trust Repair in I-Voting

Trust repair in i-voting has to be distinguished from trust repair in other studied entities, most of which are service-industry brands operating in a market. In i-voting, the competition is not with other providers of the same voting method but with other voting methods altogether. The question is not which alternative provider of i-voting voters will turn to in cases of trust violations, but rather which other voting method they will choose, and whether i-voting will survive at all. The high standards set for i-voting mean that maintaining trust is crucial, and violations of this trust can lead to significant, sometimes irreversible, consequences. When trust issues become too great or too frequent, the simpler and more practical response is often to abandon i-voting altogether. This approach is favoured over the arduous and uncertain process of trust repair, reflecting the lower resilience of alternative voting methods in withstanding trust violations. In simpler terms, new voting methods are seen as alternative and optional and have to prove their trustworthiness, so the demands placed on them are stricter than those for established voting methods, which are seen as the default; moreover, the mere fact that there is an alternative to choose between introduces risk analysis and thus a situation of trust [32].

Trust repair is a thoughtful and directed process, not just a point in time, especially in the context of i-voting. It begins by opening communication channels and responding immediately to trust violations. This initial response should acknowledge the occurrence of the violation and assure stakeholders that the necessary steps will be taken to identify the causes and repair trust. It is often not immediately clear what exactly went wrong, and it may take some time to determine what happened. Even so, the immediate effects of a trust violation can severely damage voters' trust. Following the initial response, the next step is diagnosing or sense-making of the trust violation by understanding all its aspects. It is essential to ensure that everyone is aligned on what constitutes the trust violation(s), whether it is recognised as a violation by all, and how severe it is. Stakeholders might have dissimilar understandings of what happened, the event's nature, its significance, whether there is a need for trust repair, and, if so, how it should be done. Understanding stakeholders' views is essential for designing effective trust-restoring measures and avoiding exacerbating the situation. Apart from perceptual distinctions or biases, power asymmetries in this case stem from disproportions in specialist knowledge of the technological basis of i-voting – understanding, and thus having justified trust, is reduced to "an elite intelligent few" [29]. Therefore, trust in i-voting is shaped not only by voters' immediate interactions with the system but also by external stakeholders, primarily because of the complexity of i-voting.

The notion that relational trust repair measures are culturally and contextually bound has two repercussions for trust repair. First, in crafting a (relational) trust repair response, i-voting organisers should be aware of the cultural environment in which trust repair takes place. What works in Estonia may not work in Switzerland or France, and vice versa. For instance, the notion of voting privacy and remoteness differs between Switzerland, which has a positive experience with postal voting, and Estonia, which uses the revote option with the last cast ballot counted. In contrast, in France, even if revoting could be introduced from a technological standpoint, it is not feasible due to cultural and legal constraints, as voting is framed as a one-off activity, and revoting would infringe on the solemnity of the voting act. The second repercussion is that trust violations might be framed differently by voters in different countries, to the extent that what constitutes a trust violation in one country might not even be perceived as such in another. Since trust violations are essentially perceptions, they reflect cultural specificities. Additionally, because i-voting trust violations can spill over to other countries, it is advisable to plan and design different social rituals suited to each specific country rather than adopting a one-size-fits-all mindset for trust repair.

Engaging renowned experts or other trusted entities in trust repair operations - appointing them to working bodies, giving them access to documentation, and involving them in sense-making, monitoring, and evaluating the success of trust repair - can lend credibility to the trust repair process. This creates a dual source of verification: both i-voting organisers and independent experts. Moreover, trust repair is not the sole task of the transgressing organisation but also involves blameless organisations within a given organisational field [4]. Trust violations can have spillover effects from one country to another, even if the affected i-voting system is not involved. In i-voting, spillover effects of trust violations can lead to decreased trust perceptions among voters in other countries where the violation did not occur. This is partly because technical weaknesses that triggered trust violations in one context may also be present in other i-voting systems. Additionally, new demands for implementing measures in one context can influence other contexts. For instance, verifiability has steadily become an indispensable part of i-voting practice, and even where it is not legally required, pressure from other countries can lead the public to demand its introduction. The rejection or poor implementation of such measures can undermine voters' trust. Blameless organisers should monitor events in other contexts to learn from them, undertake preventive measures, engage in trust repair processes to address stakeholder concerns, and differentiate themselves from transgressing organisers. In such cases, an i-voting organiser should engage in trust repair to manage voters' perceptions; this should combine trust repair strategies shared between the transgressing and the blameless organisation with a distinct differentiation strategy to distance the latter from the former [4].

As the mechanistic view of trust repair has been discarded above, it is recognised that initial trust and repaired trust are not necessarily of the same quality – repaired trust can be at a lower level, the same level, or even a higher level. Time is an important variable, as repaired trust differs shortly after the repair intervention and in a longer-term perspective [26]. Evaluating the efficiency of trust repair, particularly by comparing pre- and post-repair levels of voters' trust and assessing the trust repair strategy, is crucial for guiding successful trust repair.

Prevention of trust violations can be achieved through 'recalibration practices' [19], which help maintain the optimal level of trust through early interventions. These practices detect 'cracks' in trust before they become more serious and eventually convert into distrust, potentially reaching a point of no return. Transparency measures are indispensable for building trust in i-voting and for preventing trust violations or mitigating their severity. The 2019 Swiss i-voting project was discontinued after significant security vulnerabilities were disclosed during a bug bounty programme that allowed anyone to inspect the Swiss i-voting system's source code [16]. Interestingly, a measure intended to build trust in i-voting, code disclosure, enabled the detection of security vulnerabilities and thus led to a trust violation. Although this was a rather big blow to Swiss aspirations to generalise i-voting later that year, the author argues that the procedure through which the vulnerabilities were found – the strategic disclosure of information through an organised bug bounty programme – enabled the relaunch of the Swiss i-voting project soon after. Utilising transparency measures can mitigate the severity of potential trust violations by reframing their nature from integrity-based or benevolence-based to competence-based, so that such violations are perceived as the result of technical incompetence rather than a breach of integrity. If a trust violation affects competence, it is local and technical and can be resolved through improved practices, additional technical measures, enhanced skills, etc. On the other hand, integrity- and benevolence-based violations might be much more detrimental and cast doubt on the overall intentions of the i-voting organisation, thus leading to a point of no return with irreparably damaged trust of voters and other stakeholders. The Swiss example therefore demonstrates how transparency measures, i.e., the strategic disclosure of source code, serve continuous improvement and act as a shield against the more deleterious implications of integrity- and benevolence-based trust violations.

4 Avenues for Future Research

Trust repair in i-voting is a nascent research area without any prior systematic account, which opens a range of potentially interesting and useful (sub)topics for future research. Each avenue would enhance the proposed framework, deepen our knowledge of trust repair in i-voting, and provide sound advice for i-voting practitioners.

The framework can facilitate case study research by providing theoretical lenses for detecting real-world trust breaches and the trust repair interventions that follow in countries that offer i-voting to their voters to some extent. In the European context, that might be Estonia and France, but also Switzerland, with its long and rich history marked by ups and downs and the attempt to generalise i-voting in 2019. Those practices and experiences may help detect the different causes, consequences, and natures of breaches, the response strategies applied to restore trust, and the levels of trust before and after a breach. Trust repair strategies are not static, and it is important to understand how they are recalibrated and adjusted when Plan A does not go as planned.

Cross-cultural aspects of trust repair in i-voting can be studied by comparing trust repair management across those countries, with the potential to identify and separate generalities that hold for all cases from particularities related to a specific case. Trust violations can have transnational effects, where a trust violation in one country influences voters’ perceptions and trust levels in another. Research should explore the differentiation strategies employed by blameless stakeholders to repair trust in their country and distance themselves from violations elsewhere.

Although the presented framework emphasises voters as trustors, trust restoration can be applied to different stakeholder groups, and future research can focus on a particular stakeholder group as trustors whose trust is negatively affected and necessitates repair. Preventive measures are part of good trust repair management, and understanding how they are designed and how effective they are in preventing or mitigating trust breaches in the context of i-voting is another promising avenue.

Time is another aspect of trust repair that is receiving more attention from researchers. The process approach is dynamic and sees time as central to explaining trust repair, whereas the dominant variance approach is static in nature [33]. Answering "how" and not only "why" can deepen our understanding of trust repair in i-voting. Finally, measuring the effectiveness of trust repair efforts is important to comprehend their impact on voters and other stakeholders, providing insights that can lead to more nuanced and effective trust repair strategies in the future.

5 Conclusion

While trust in i-voting is not a novel research topic, trust repair has yet to capture scholarly attention despite trust violations and subsequent trust repair in practice. This article introduces trust repair in the context of i-voting, presenting a systematic account through developing a conceptual framework for voters’ trust repair in politically binding elections. By integrating insights from various research domains within the i-voting context, this article advances the understanding of how trust can be repaired after violations occur. The framework maps out the interplay between human and technological dimensions of trust and the roles of internal and external stakeholders, offering a nuanced perspective on trust violations and trust repair in i-voting.

In the academic context, the article contributes to the literature by shifting the focus from trust building to trust repair. It systematically identifies the elements influencing trust repair, from detecting trust violations to the required trust repair responses. By grounding these insights in theoretical and practical considerations, the framework sets the stage for scholarly inquiry into trust repair in i-voting as a new subtopic of a broader and already established trust in i-voting research. Moreover, the article lists a range of possible research avenues in the realm of trust repair, which could enrich the understanding of trust repair in i-voting.

The framework provides i-voting organisers with the available ‘arsenal’ of trust repair measures. It explains how those measures correspond to and are appropriate for specific areas of trust damage, thus assisting i-voting organisers in developing trust repair management plans that include response strategies for trust violations and preventive measures to mitigate or avoid such violations. Understanding and embracing trust (repair) management as a proactive, rather than merely a reactive activity when trust violations occur, is of utmost importance for maintaining voters’ trust in i-voting, preventing potential trust violations, and successfully addressing those that occur.