1 Introduction

Coercion is one of those notions that is easier to understand than to define formally, as it comes in many different shapes and forms. Generally, coercion encompasses all kinds of duress that can prevent people from voting freely while minimizing the possibility of undetected disobedience.

The threat is especially relevant for remote electronic voting, which happens in an uncontrolled and potentially coercive environment, but we also demonstrate attacks for the in-booth setting. Even though there are many distinct proposals for resisting, mitigating, or hampering the coercion threat, all of them require that the coercer cannot constantly control the voter nor intercept information sent over secure channels.

However, new tools like immutable blockchains, delay functions, time-based encryption, secret-input MPC smart contracts, trusted hardware, etc., have been developed to enforce certain types of honest behavior of participants. In this paper, we demonstrate how such tools, in the hands of a coercer, can in turn be used to ensure that the coerced voter follows the instructions of the coercer and cannot evade via anti-coercion strategies. Going further, these new tools can also be used by a voter to enable vote-selling.

The main difficulty in designing coercion-resistant or receipt-free verifiable voting protocols lies in combining those properties with the assurance that the voting device has not altered a voter’s vote – a check known as cast-as-intended (CAI) verification. To get cast-as-intended verification we can use tracking numbers, return codes, QR codes containing encryption randomness, zero-knowledge proofs of plaintext correctness, or other techniques. However, they all need to convince the voter only, not a coercer or a vote-buyer, i.e., they must be deniable. This implies that any coercion-resistant cast-as-intended verification method should provide both a correctness proof for the final cast vote and a simulation strategy for producing fake but convincing proofs.

Several studies focus on the contradiction between coercion-resistance and cast-as-intended verification and offer potential solutions; see, e.g., [16] and references therein. However, to our knowledge, no paper has thoroughly studied the possibility of the coercer utilizing new cryptographic tools (blockchain, delay functions, etc.) to prevent the voter from simulating an alternative proof.

For example, a coercer can use blockchain to force voters to vote for a specific candidate in a voting system that relies on so-called Benaloh challenges [4]: A voter enters her choice into the voting device, which then prepares a ballot, commits to it, and asks the voter whether to cast it or audit it. In case of an audit, the encryption randomness is revealed so the voter can verify the ballot on another device. Otherwise, the vote is submitted. Since the voter never holds the randomness of the submitted ballot, there is no receipt. Note that the number of audits should be unpredictable to prevent the voting device from cheating.

With the aid of a blockchain, a remote coercer can always force voters to cast a ballot for a given option. To do so, the coercer tells voters to post the ballot’s commitment on the blockchain, e.g., Bitcoin, before deciding whether to audit or cast the vote. If the hash of the next block starts with the bit 0, the voter must press audit and post the corresponding randomness on the blockchain. Otherwise, the voter casts the ballot. Since the coercer can see everything posted on the blockchain, he can always check whether the voter behaved. Theoretically, the voter can disobey and commit to a ballot with a different candidate from the coercer’s preference. However, with probability 1/2, the disobedience would be caught.
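The coercer’s end-of-election check in this attack can be sketched as follows (a minimal Python illustration; the helper names are ours, and we treat the first bit of the next block’s hash as the public coin):

```python
import hashlib

def next_block_first_bit(block_header: bytes) -> int:
    """First bit of the block hash, used as unpredictable public randomness."""
    return hashlib.sha256(block_header).digest()[0] >> 7

def coercer_check(posted_commitment: bytes, ballot_commitment: bytes,
                  block_header: bytes, audited_randomness) -> bool:
    """Coercer's verification: the voter must have posted the ballot commitment
    before the next block was mined; bit 0 forces an audit (so the encryption
    randomness must be posted too), bit 1 forces casting (no opening allowed)."""
    if posted_commitment != ballot_commitment:
        return False  # the voter committed to a different ballot
    if next_block_first_bit(block_header) == 0:
        return audited_randomness is not None  # audit was mandatory
    return audited_randomness is None          # casting was mandatory
```

A disobedient voter who committed to her own candidate passes this check only when the coin happens to demand casting, i.e., with probability 1/2.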

Blockchain enforces the order of commitments and gives unpredictable randomness. Delay functions and time-lock encryption can ensure that a voter does not learn a secret until after some time, which can prevent the simulation of proofs. Privacy-preserving smart contracts, e.g., MPC-based [3, 30], Trusted Execution Environments, and other trusted hardware can ensure the voter never knows a secret key required for coercion mitigation. These tools can act in place of the coercer and interact with the voter during the vote-casting phase, while the coercer or vote-buyer only verifies all the evidence at the end of the election.

Since the new tools allow an adversary to control the voter without observing her continuously, coercion can be done at a large scale without substantial costs. Hence, it is critical to evaluate and discuss these new attack vectors.

1.1 Related Work

Exploiting ballot verification mechanisms for coercion is not new, and we present related work here. An attack similar to the attack on Benaloh challenges mentioned in the introduction was presented in [10], without using blockchain technology. We give the details in Sect. 3.1 and explain how that attack can be mitigated (i.e., the voter can disobey and conceal this fact), whereas our attack admits no such repair.

An interesting coercion attack utilizing scratch-off cards was proposed in [21] against the Punchscan [29] two-part ballots. The number revealed after scratching forces the voter to reveal a certain ballot part – much like the attack on Benaloh challenges. However, scratch-off cards require physical delivery and support only a limited range of options; thus, the approach is infeasible for many digital ballots. Going further, our attack also works against receipt-freeness, as we explain below, whereas for the code sheets this would require a voter to obliviously create scratch codes on behalf of a vote-buyer, which seems hard to achieve in practice.

Many voting schemes have been proposed using various blockchain primitives to achieve different forms of security. Perhaps most famously, [23] used smart contracts to prevent denial-of-service in a decentralized voting scheme, and in [7] smart contracts were used to disincentivize vote-selling.

On the other hand, many schemes have been proposed that naively implement blockchain technology and claim security without properly understanding possible pitfalls and the alignment of incentives [25, 26].

We also note that a blog post on vote buying in special DAOs using SGX [1] appeared in parallel to our contribution.

Finally, we note that we only consider coercion and vote-buying arising from the vote-casting procedure. There have recently been improvements on the state of the art for definitions covering the full election, especially coercion attacks during the tally phase of JCJ [14]. See also [19] for recent definitions of receipt-freeness. However, this is out of scope for this paper.

1.2 Contribution and Organization of the Paper

We study unexplored coercion and vote-buying attacks based on new cryptographic primitives such as blockchains, delay functions, time-lock encryption, privacy-preserving smart contracts, and trusted hardware. We give examples of how these new tools can be used, showing how they help the coercer force voters to comply, or help the voter obtain a (probabilistic) receipt for vote-selling. We also present some attacks inspired by these that work even without the new tools. Our last contribution is a proposal of new security definitions for coercion-resistance and receipt-freeness that take into account the possibility that both the coercer and the voter can use such new tools.

We start by stating our model and trust assumptions in Sect. 2.1. Next, we describe and list expectations for the new tools available to the coercer in Sect. 2.2 and categorize the coercion attack types in Sect. 2.3. Then, in Sect. 3, we proceed with attacks on known e-voting verification methods and schemes. Section 4 contains new security definitions for the notions of coercion-resistance and receipt-freeness, and some relations between them. Finally, in Sect. 5, we summarize our observations and briefly state some impossibility results we encountered, which should be considered in future work.

2 Voter and Adversary Model

2.1 Parties and Communication Model

The parties involved in our protocol are the following. \(\mathcal{E}\mathcal{A}\) is the election authority, which is trusted for privacy and hence for coercion-resistance. \(\mathcal{B}\mathcal{B}\) denotes the bulletin board, which collects ballots and verifiably derives the tally result. \({\mathcal {V}}\) denotes the (single) voter; we consider only a single voter in this work because we are only concerned about the verifiability and coercion-resistance of the vote-casting procedure. \(\mathcal {C}\) is the adversary against coercion-resistance. \(\mathcal {V}\mathcal {D}\) denotes the voting device, which helps the voter to prepare the ballot and sends it to \(\mathcal{B}\mathcal{B}\); it is assumed not to collude with \(\mathcal {C}\).

In our model, we consider a voter without any knowledge pre-shared with \(\mathcal{E}\mathcal{A}\) except public election parameters available on \(\mathcal{B}\mathcal{B}\). We assume \(\mathcal {C}\) can observe the ballot that \({\mathcal {V}}\) sends to \(\mathcal{E}\mathcal{A}\), e.g., because it is directly published on \(\mathcal{B}\mathcal{B}\). We will assume that \(\mathcal {C}\) is not present during the vote casting since otherwise the voter would not be able to fake her view, but \(\mathcal {C}\) can give instructions before and get information after the session to verify if the voter followed instructions.

2.2 The Coercer’s Toolbox

We consider different cryptographic means that the coercer or vote buyer can use to control the voter without being present in the vote-casting situation. We assume the voter wants to vote for \(\textsf {Cand}_{\mathcal {V}}\) and the coercer expects \(\textsf {Cand}_\mathcal {C}\).

  • \(\textsf{Instr}\): Instructions that \(\mathcal {C}\) gives to \({\mathcal {V}}\) before voting. We assume \({\mathcal {V}}\) already knows the preferred candidate of \(\mathcal {C}\), but \(\textsf{Instr}\) will provide more details.

  • \(\textsf{CC}\): Chain of commitments. The committed values are add-only and immutable. This can be done, for example, via a blockchain or a hardware device that stores input from the voters.

  • \(\mathsf {CC-PRF}\): Chain of commitments with (pseudo-)random output between commitments. One example could be Bitcoin, with the hash pointers treated as pseudorandom output. Another option is a hardware device taking inputs \(x_i\) and returning \(y_i=\textsf{H}(x_i||y_{i-1}||\textsf {sk})\), where \(\textsf{H}\) is a cryptographic hash function, and \(\textsf {sk}\) is a secret key only known to \(\mathcal {C}\). Knowing the last output and inputs, \(\mathcal {C}\) can verify the entire transcript without asking the hardware token back.

  • \(\mathsf {Timed-CC}\), \(\mathsf {Timed-CC-PRF}\): Timed chain of commitments without/with pseudo-random output. Similar to \(\textsf{CC}\) and \(\mathsf {CC-PRF}\), but also commitments are time-stamped. A blockchain or a hardware token with timings would suffice.

  • \(\mathsf {Timed-Enc}\): Timed release of secrets. It can be done via Time-Lock-Puzzles [31], Delay Encryption [8], Homomorphic Time-Lock Puzzles [22], etc.

  • \(\textsf{Token}\): Tamper-proof hardware token. It gets inputs from the voter and can give outputs, record timings, store secret values known only by the coercer, and generate public keys while keeping private keys safe in the module. The coercer can ask the voter for the full transcript of inputs and outputs from the device and verify everything without receiving the token back. This can be done, via a Trusted Platform Module (available on most modern laptops, PCs, and smartphones), a Trusted Execution Environment, general trusted hardware, or privacy-preserving smart contracts (e.g., MPC-based versions).
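As an illustration, the hardware-token variant of \(\mathsf {CC-PRF}\) from the list above can be sketched in a few lines of Python (function names and parameter choices are ours):

```python
import hashlib

def token_step(x: bytes, y_prev: bytes, sk: bytes) -> bytes:
    """One token invocation: y_i = H(x_i || y_{i-1} || sk)."""
    return hashlib.sha256(x + y_prev + sk).digest()

def coercer_verify(inputs, y_last: bytes, y0: bytes, sk: bytes) -> bool:
    """Recompute the whole chain from the voter-reported inputs and compare the
    final output; the coercer never needs the token back."""
    y = y0
    for x in inputs:
        y = token_step(x, y, sk)
    return y == y_last
```

Since sk is known only to the coercer, the outputs \(y_i\) are unpredictable to the voter, and any omitted or reordered input changes the final output.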

We do not claim this to be an exhaustive list of tools, and would expect new tools to emerge in the future, but our methodology in selecting these, has been to look for methods used to enforce certain honest behaviours.

2.3 Coercion Attack-Types

We also classify the different types of attacks according to their severity and difficulty:

  • \(\mathsf {Attack{:}Precision}\): An attack we can carry out with a probability that can be made close to 1.

  • \(\mathsf {Attack{:}Probabilistic}\): An attack where the coercer has a certain probability to carry it out, but this probability is not close to 1.

  • \(\mathsf {Attack{:}Complex}\): An attack where the coercer has to estimate a bound on the computational power of \({\mathcal {V}}\), e.g., for the delay time in the primitives in \(\mathsf {Timed-Enc}\) or the number of devices that \({\mathcal {V}}\) has.

3 Attacks

With the above tools at hand, we investigate how a coercer could use them to attack CAI mechanisms found in the e-voting literature. The attack impact varies from completely breaking privacy to computationally penalizing voters for using disobedience strategies. The attacks are categorized based on the type and the coercer’s tools.

3.1 Benaloh Challenges

As mentioned in the introduction, the Benaloh challenges are a perfect example to demonstrate the different coercer tools and estimate how easily a coercer can attack multiple voters. For a detailed description please refer to [2, 28].

Whereas Benaloh challenges were never claimed to be coercion-resistant, these attacks were not considered earlier. Even further, it is generally believed that Helios is receipt-free if the software does not leak the random coins (see, e.g., page 3 of [9]), which could, e.g., be enforced using a hardware root of trust.

(\(\mathsf {Attack{:}Complex};\textsf{Instr}\)). An attack fitting our narrative was proposed for a polling-booth Helios [10]. We believe it would also work for remote voting: \(\mathcal {C}\) tells the voter to vote only if the receipt hash h fulfills some predicate P(h) (e.g., a certain number of leading zero bits, which holds with some probability p) and audit otherwise. Then \(\mathcal {C}\) demands to see all audited receipts and random coins. The attack can be avoided, but this requires double effort: the voter first uses the coercer’s choice to obtain verification data, then (instead of casting the vote when the receipt permits it) switches to the preferred option and re-runs the voting process until the receipt allows vote-casting. Of course, all audit material corresponding to the voter’s choice must be destroyed.

(\(\mathsf {Attack{:}Precision};\textsf{Instr},\mathsf {CC-PRF}\)) The coercer instructs the voter to vote for \(\textsf {Cand}_\mathcal {C}\), then add a commitment to the ballot shown by the voting device to \(\mathsf {CC-PRF}\), and only cast if the \(\mathsf {CC-PRF}\) output starts with 1 (alternatively: 0, or a more complex predicate). The expectation is that the voter cannot predict when the \(\mathsf {CC-PRF}\) will allow casting the ballot; thus, she does not know when it is safe to misbehave and use her preference. The coercer can always check that the commitment to the cast ballot was added to the \(\mathsf {CC-PRF}\) and that the output permitted casting. Therefore, the voter runs a high risk of being caught in case of disobedience. When checking only the first bit, this probability is \(p=1/2\), but the coercer can increase it to a general probability p at the cost of the voter having to make \(1/(1-p)\) vote-casting attempts on average.
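The cost trade-off at the end of the paragraph can be checked with a short Monte-Carlo simulation (illustrative only; the function name is ours):

```python
import random

def expected_casting_attempts(p_audit: float, trials: int = 200_000,
                              seed: int = 1) -> float:
    """Average number of ballot preparations an obedient voter needs before the
    CC-PRF output permits casting; each attempt succeeds with prob. 1 - p_audit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 1
        while rng.random() < p_audit:  # output says "audit": prepare a new ballot
            attempts += 1
        total += attempts
    return total / trials
```

For p = 1/2 this is about 2 attempts and for p = 3/4 about 4, matching the \(1/(1-p)\) bound.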

Note that a voter with a \(\mathsf {CC-PRF}\), e.g. access to Bitcoin, also can use this to get a receipt of the vote, i.e. Helios is not receipt-free even with trusted software.

On a high level, the first attack (suggested in [10]) looks very similar to the second one (proposed by ourselves), with the only distinction being the use of blockchain. However, we claim this is not the case. To see why, observe that in the polling-booth Helios the voter receives an electronic hash of her receipt as a commitment from the machine. This commitment is not publicly posted or stored anywhere; it is given to the voter in the privacy of the voting booth. Therefore, a realistic coercer (i.e., one who cannot compute the exact amount of time spent by the voter during vote casting) would have no way of knowing exactly how many hash commitments the voter received and would not notice if a few were not used. Thus, the voter can destroy receipts indicating audit and only show the coercer the receipt that allows casting. Omitting some of the receipts is impossible with the blockchain attack, however, as the blockchain is specifically designed to preserve the immutability of records.

3.2 STAR-Vote

For CAI verification, STAR-Vote [5] offers a novel variant of Benaloh’s challenge: the voter either deposits the ballot in the ballot box or not. First, the voter makes selections on a terminal, which prints the paper ballot in human-readable form with a random serial number and a corresponding receipt that the voter might take home. The voting terminal also sends the encrypted vote and the receipt to the judge station and publishes a commitment to the ballot on a public bulletin board. If the voter chooses to cast the vote, she takes the paper ballot to the ballot scanner, which reads the ballot’s serial number and marks it as complete. If the voter decides to spoil the vote, she should return to a poll worker, who scans the vote and indicates it is spoiled. Such a vote would be decrypted during the tally. The verification mechanism works like the original Benaloh challenge: the voting terminal commits to the ballot before it knows whether the voter decides to cast or spoil it.

(\(\mathsf {Attack{:}Probabilistic};\textsf{Instr}\)) The coercer tells the voter to cast their ballot only if the printed receipt satisfies some predicate, say, starts with the bit ’0’. Otherwise, the voter must spoil the vote and give the receipt to the coercer. For our example, the chance of an audit is 1/2, but it can vary depending on the complexity of the predicate. Regardless, the voter cannot predict when the vote-casting happens and thus must take a risk or obey. However, if the voter disobeys and the receipt indicates spoiling the ballot, the coercer can trivially detect the misbehavior by checking the decrypted spoiled ballot.

3.3 Belenios-CAI

Belenios [13] is built upon Helios and recently obtained CAI verifiability [12]. After the voter selects a vote v, she receives two random integers a and b such that \(b = v + a (\bmod \mu )\) for some positive \(\mu \) larger than the biggest possible v. Then, the ballot is formed as three ciphertexts encrypting the values v, a, and b, plus a zero-knowledge proof that \(b = v + a (\bmod \mu )\). After that, the voting device commits to the ballot and asks the voter to choose whether the ciphertext encrypting a or the one encrypting b should be opened. The selected ciphertext is publicly opened. To modify v and create a convincing zero-knowledge proof, one has to change both v and one of the values a or b; therefore, the voter will detect it with probability 1/2. For a detailed description please refer to [12].
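The detection argument can be made concrete with a small sketch (Python; the modulus and candidate encodings are hypothetical, and we abstract the ciphertexts away):

```python
import random

MU = 7  # hypothetical modulus, larger than the biggest candidate number

def split_vote(v: int, rng: random.Random):
    """Belenios-CAI-style split: pick a random a and set b = v + a (mod MU)."""
    a = rng.randrange(MU)
    return a, (v + a) % MU

def disobedience_detected(v_voter: int, v_coercer: int, open_b: bool,
                          rng: random.Random) -> bool:
    """The voter casts v_voter but reports a transcript for v_coercer by faking
    b* = v_coercer + a (mod MU). Opening c_a is safe (a is real); opening c_b
    reveals the true b and exposes the fake."""
    a, b = split_vote(v_voter, rng)
    b_fake = (v_coercer + a) % MU
    return open_b and b != b_fake
```

With the opened side chosen unpredictably, the fake transcript is exposed with probability 1/2.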

We stress that, as far as we know, Belenios-CAI has never claimed to enjoy receipt-freeness. It only highlights that revealing only one of two values does not affect the privacy of the vote but says nothing about vote-selling or coercion. Mostly, this is because the Belenios voting family defines receipt-freeness in the strong sense, where the voter can forcefully extract randomness from the voting device to facilitate vote-selling. However, in our model, we trust \(\mathcal {V}\mathcal {D}\).

(\(\mathsf {Attack{:}Probabilistic};\textsf{Instr},\mathsf {CC-PRF}\)) The coercer \(\mathcal {C}\) instructs the voter \({\mathcal {V}}\) to use \(v=\textsf {Cand}_\mathcal {C}\), commit all values generated by \(\mathcal {V}\mathcal {D}\) to \(\mathsf {CC-PRF}\), and choose between a and b based on the \(\mathsf {CC-PRF}\)’s output. Theoretically, the voter can receive \((c_v, c_a, c_b, a, b)\) corresponding to \(\textsf {Cand}_{\mathcal {V}}\), then set \(b^* = \textsf {Cand}_\mathcal {C}+a\) and post \((c_v,c_a, c_b, a,b^*)\) on the \(\mathsf {CC-PRF}\). However, if the output of the \(\mathsf {CC-PRF}\) indicates to open \(c_b\), the coercer will notice the disobedience. Again, this attack also means there is no receipt-freeness for a voter with access to \(\mathsf {CC-PRF}\).

3.4 Themis

Closely related to Belenios-CAI is the in-person voting scheme Themis [6], which uses the same idea of splitting the candidate number v, which is always odd, into random values a and b such that \(v = a + b\, \bmod 2n\) (where n is the number of candidates) and verifying the encryption of one of the numbers. However, the voter gets this splitting on a printed ballot and chooses which side to audit.

(\(\mathsf {Attack{:}Probabilistic};\textsf{Instr}\)) Assume the voter can compute a boolean function f in her head. The voter in the booth computes f(a, b) and audits the left or right side according to the value. For example, assume that \(f=0\) indicates opening a while \(f=1\) says audit b. If the voter votes for \(v=\textsf {Cand}_{\mathcal {V}}\) and gets a and b such that \(v\equiv a+b\) but then claims to have selected \(v^* = \textsf {Cand}_\mathcal {C}\), she needs to fake a as \(a^* = v^*-b\) and ensure that \(f(a^*, b)=1\) (so that the real b is audited), or fake b as \(b^* = v^*-a\) and ensure that \(f(a, b^*)=0\) (so that the real a is audited). If f is random, both options fail, and the voter is caught, with probability 1/4. It might not be high, but it is an interesting observation and could be enough to provide a monetary incentive for a vote buyer.

(\(\mathsf {Attack{:}Probabilistic};\textsf{Instr}\)) A better and easier attack is as follows. The possible pairs a and b with \(a+b = v\, \bmod 2n\) depend on v. Consider a simple case of \(n=2\) candidates (e.g., “A" and “B") with assigned numbers 1 and 3. Then for candidate A the possible codes are (0, 1) and (2, 3), and for B they are (0, 3) and (2, 1). If the coercer demands that the audited number be 0 or 1, voting for B always allows compliance with the demand. However, voting for A results in \((a,b)=(0,1)\) only in half of the cases. Thus, if the voter votes for A, the coercer will find out with probability 1/2. Note that the attack can scale to more candidates if the coercer demands computing numbers modulo 4.
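This counting argument is easy to verify by enumeration (Python sketch; we assume, as the example suggests, that a is drawn uniformly from the even numbers so that b is odd):

```python
def themis_pairs(v: int, n: int = 2):
    """All splittings a + b = v (mod 2n) with a even (v is always odd)."""
    return [(a, (v - a) % (2 * n)) for a in range(0, 2 * n, 2)]

def compliance_probability(v: int, demanded: set, n: int = 2) -> float:
    """Fraction of splittings for vote v in which the voter can audit a number
    from the coercer's demanded set."""
    pairs = themis_pairs(v, n)
    return sum(1 for a, b in pairs if a in demanded or b in demanded) / len(pairs)
```

For the example above, compliance_probability(3, {0, 1}) is 1, while compliance_probability(1, {0, 1}) is 1/2: a vote for candidate A is detected half of the time.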

3.5 Proof of Correct (Re-)Encryption

Voting schemes often offer the voter a proof of correct encryption or re-encryption (of the ciphertext that contains the chosen option) as a CAI verification method. Of course, such a proof should be interactive, or else the coercer can demand to see it. Moreover, the proof should be full zero-knowledge and not merely honest-verifier zero-knowledge if one wants to avoid coercion. We have identified three different proposals for verification based on (re-)encryption correctness that can be attacked by a coercer with new tools: two protocols in [17] and one protocol in [24]. The three attacks are described in the long version [18] of this work; here we describe the first one.

The authors of [17] analyze a \(\Sigma \)-protocol for proving encryption correctness with an initial commitment to the challenge (so, four rounds of communication in total) and conclude that it is both coercion-resistant and CAI. However, we show how such a scheme can be attacked using the new coercion tools.

The public parameters of the election system must contain elements \((q, G, g, h)\) such that \(G = \langle g \rangle = \langle h \rangle \) has prime order q. To commit to the challenge, the perfectly hiding Pedersen commitment scheme [27] is used. In Step 1, the voter samples \(e,\hat{r} {\mathop {\leftarrow }\limits ^{\small {R}}}{\mathbb {Z}}_q \), computes \(Z = g^e \cdot h^{\hat{r}}\) and sends \((Z,\textsf {Cand})\) to \(\mathcal {V}\mathcal {D}\). In Step 2, \(\mathcal {V}\mathcal {D}\) samples \(r,t {\mathop {\leftarrow }\limits ^{\small {R}}}{\mathbb {Z}}_q\), computes \(\texttt {C}= (c_1,c_2)=(g^r,\textsf {Cand}\cdot \textsf {pk}^r)\) and \(a = (A_1,A_2 )= (g^t, \textsf {pk}^t)\), so that values \((\texttt {C},a)\) are sent back to \({\mathcal {V}}\). In Step 3, \({\mathcal {V}}\) replies with \((e,\hat{r})\). Finally, \(\mathcal {V}\mathcal {D}\) checks that \(Z = g^e \cdot h^{\hat{r}}\), computes \(z = t + e \cdot r \mod q\) and sends z to \({\mathcal {V}}\), who can verify that both \(g^z = A_1 \cdot c_1^e\) and \(\textsf {pk}^z = A_2 \cdot \left( \frac{c_2}{\textsf {Cand}} \right) ^e\) hold.
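For concreteness, the four moves can be traced with toy parameters (a tiny prime-order subgroup chosen by us purely for illustration; real deployments use cryptographic sizes, and the candidate must be encoded as a group element):

```python
import random

# toy public parameters: G = <g> = <h> of prime order q inside Z_23^*
p, q, g, h = 23, 11, 2, 3
sk = 5                       # election secret key (toy value)
pk = pow(g, sk, p)
rng = random.Random(0)

# Step 1 (voter): commit to the challenge e with Z = g^e * h^r_hat
e, r_hat = rng.randrange(q), rng.randrange(q)
Z = pow(g, e, p) * pow(h, r_hat, p) % p

# Step 2 (device): encrypt Cand and send the first sigma-protocol message
Cand = 4                     # candidate encoded as a group element (4 = g^2)
r, t = rng.randrange(q), rng.randrange(q)
c1, c2 = pow(g, r, p), Cand * pow(pk, r, p) % p
A1, A2 = pow(g, t, p), pow(pk, t, p)

# Step 3 (voter): open the commitment; Step 4 (device): check it and answer
assert Z == pow(g, e, p) * pow(h, r_hat, p) % p
z = (t + e * r) % q

# Voter's final verification
assert pow(g, z, p) == A1 * pow(c1, e, p) % p
assert pow(pk, z, p) == A2 * pow(c2 * pow(Cand, -1, p) % p, e, p) % p
```

The attack below works precisely because a coercer who supplies Z (together with a delayed opening of e and \(\hat{r}\)) deprives the voter of the trapdoor needed to simulate this transcript.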

(\(\mathsf {Attack{:}Complex};\textsf{Instr},\mathsf {Timed-CC-PRF},\mathsf {Timed-Enc}\)) Shortly before the voting phase, the coercer \(\mathcal {C}\) gives the voter \({\mathcal {V}}\) a commitment \(Z = g^{e}h^{\hat{r}}\) and the corresponding opening under delay encryption \(X = \textsf {Delay}(\hat{r} || e)\), which can be opened only after time \(\mathcal {T}\). The voter is ordered to commit to the ciphertext \(\texttt {C}= (c_1,c_2)\) and the first move of the sigma protocol \(a = (A_1, A_2 )\) using a timed commitment chain of the coercer’s choice \(\mathsf {Timed-CC}\) before time \(\mathcal {T}\). Since the voter cannot open X before time \(\mathcal {T}\), she does not know the challenge e when committing to \(\texttt {C}\) and a, and therefore cannot simulate the proof; after time \(\mathcal {T}\), the coercer checks the timed transcript. We note that, to prevent pre-computation by the voter, the coercer could use timed encryption like [15] to release the puzzle at a precise time.

One can consider a modified protocol (with five rounds, started by \(\mathcal {V}\mathcal {D}\)) where (i) the generator h for Pedersen commitments is not fixed in the public parameters, but instead chosen by \(\mathcal {V}\mathcal {D}\) in step 1 of the protocol, and (ii) the commitment sent by \({\mathcal {V}}\) in step 2 is defined as \(Z = g^{\hat{r}} \cdot h^e\) instead. This is actually the specific instantiation of the protocol proposed in the Appendix of [24]. The coercion strategy based on combining a blockchain and a delay function does not seem to work against this modified protocol; giving some formal proof of the security of this protocol is left as future work.

3.6 Civitas

Civitas [11] is a modification of the JCJ electronic voting protocol proposed by Juels, Catalano, and Jakobsson [20]. They are considered to be among the voting schemes enjoying the strongest level of coercion-resistance. The coercion-evading strategy is based on the fact that a voter can compute and show fake credentials to the coercer while using the real credentials for the desired vote casting. Fake and real credentials are indistinguishable because real credentials are verified using a designated-verifier technique, which takes as input an ElGamal public designation key \(K_{V_E}\) of the voter (different from the voter’s registration key used, among others, for authentication purposes). The voter can use the secret key \(k_{V_E}\) to compute fake credentials that are indistinguishable from real ones. However, if a coercer can force a voter to use a specific public key \(K_{V_E}\) without knowing the matching secret trapdoor \(k_{V_E}\), then the voter cannot resist coercion.

Even a modification of Civitas where the voter is requested to prove, in zero-knowledge, that he knows the trapdoor \(k_{V_E}\) could be vulnerable to our new coercion tools. Moreover, these attacks do not seem to contradict Trust Assumption 1 of Civitas: The adversary cannot simulate a voter during Registration. On the one hand, the attacks we propose are off-line: the coercer gives \(K_{V_E}\) to the voter before Registration starts. On the other hand, coercion involves only the designation keys and not the registration keys (which are the focus of all the discussion about this Assumption 1 in [11]).

3.7 Voting Based on Trusted Computing

Smart and Ritter proposed a coercion-resistant protocol [33] based on trusted computations (specifically, the TPM and Direct Anonymous Attestation protocol). It consists of three phases: registration, where the voter has to prove their identity in person; joining, where the voter uses a trusted TPM to receive a certificate confirming eligibility; and signing, where the trusted TPM signs the vote. The authorities re-encrypt the ballot before publishing and send the voter a designated proof of re-encryption. If the voter is coerced and does not want to send the coercer’s ballot, she can send a different ballot instead and use her designated key to simulate the re-encryption proof for the coercer.

(\(\mathsf {Attack{:}Complex};\textsf{Instr},\mathsf {Timed-CC-PRF},\mathsf {Timed-Enc}\)) As a part of the protocol, the voter (not a trusted TPM!) is supposed to generate a fresh ElGamal key pair \((s_v, h_v = g^{s_v})\), which is her designated key. Without \(s_v\), the voter cannot simulate a re-encryption proof, which is why it is a crucial component of the coercion-resistance strategy. However, with the new tools, the coercer can give the voter a pre-generated pair \((\textsf {Delay}(s_v), h_v)\), hidden by a delay function \(\textsf {Delay}\) that cannot be opened before time T, and demand the re-encryption proof before time T. The voter will have no choice but to obey.
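A toy version of such a delayed key release, in the spirit of the time-lock puzzles of [31], can be sketched as follows (tiny, insecure parameters chosen by us purely for illustration; a real puzzle needs large random primes):

```python
import math
import random

def make_puzzle(secret: int, T: int, seed: int = 7):
    """Mask the secret with a^(2^T) mod N; knowing phi(N), the puzzle creator
    shortcuts the T squarings by reducing the exponent 2^T mod phi(N)."""
    p_, q_ = 1000003, 1000033          # toy primes (insecure sizes)
    N, phi = p_ * q_, (p_ - 1) * (q_ - 1)
    rng = random.Random(seed)
    a = rng.randrange(2, N)
    while math.gcd(a, N) != 1:         # ensure Euler's theorem applies
        a = rng.randrange(2, N)
    key = pow(a, pow(2, T, phi), N)    # creator's shortcut
    return N, a, T, secret ^ key       # ciphertext = secret XOR key

def solve_puzzle(N: int, a: int, T: int, ct: int) -> int:
    """Sequential solver: T squarings, with no known shortcut without phi(N)."""
    x = a % N
    for _ in range(T):
        x = x * x % N
    return ct ^ x
```

The coercer hands the voter \(h_v\) together with \(\textsf {Delay}(s_v)\) built this way; until the T sequential squarings are done, the voter cannot simulate the designated-verifier proof.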

A similar attack applies to a version of BeleniosRF [9] where voters generate their signing keys and register the public part with the registrar. As a side observation, we think an untrusted election authority generating the public parameters pp can undetectably modify ballots in this particular version of BeleniosRF.

4 New Security Definitions

The attacks presented in this paper demonstrate that it is necessary to give a more general definition of receipt-freeness and coercion-resistance for the vote-casting phase that takes into account the new tools for coercers and vote-buyers. We will first give a game-based definition without the new tools and then introduce these as oracles that can be used by the coercer and the voter. A formal definition of cast-as-intended verifiability can be found in [32] and the long version of this paper [18].

We note that most systems either can be analyzed in our setting or achieve CAI verifiability based on the assumption that a secret key, e.g., a signing key, is not leaked to \(\mathcal {A}_{\textrm{ver}}\), or by relying on some trusted party.

Our settings also include schemes that achieve receipt-freeness or some coercion-resistance using deniable re-voting or vote updates, e.g., re-randomization as in Belenios-RF [9]. To see how our definition can be extended to cover those cases, please refer to the long version of this paper [18].

4.1 Vote Casting Phase Coercion-Resistance

We consider Coercion-Resistance for the Vote Casting Phase, \(\mathsf {VC- CR}\), which is a necessary condition for achieving coercion-resistance for the full voting system considering vote submissions from all voters and information leaks from the tally.

The definition is in terms of an experiment \(\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}}}^{\mathsf {VC- CR}, 1}(\mathrm {\lambda })\) given in Fig. 1, where the coercer, \({\mathcal {A}}\), can give instructions, \(\textsf{Instr}\), to the voter before vote-casting. Vote-casting is done using a voting device \(\mathcal {V}\mathcal {D}\). To be general, this is modeled as an oracle \(\mathcal {O}^{\mathcal{V}\mathcal{D}}_{\textsf {state}_{\mathcal{V}\mathcal{D}}}\) with a state \(\textsf {state}_{\mathcal{V}\mathcal{D}}\) that is updated during the interaction between the voter and the device. We assume that the instruction \(\textsf{Instr}\) uniquely defines an algorithm \({\mathcal {V}}_{\textsf{Instr}}^{\mathcal {O}^{\mathcal{V}\mathcal{D}}_{\textsf {state}_{\mathcal{V}\mathcal{D}}}}\) which models what the voter does when following the instructions of the adversary. The adversary has to distinguish the output of this from the case where the voter casts her own vote using some coercion-evasion strategy, denoted \({\mathcal {V}}\), which we assume is public, normally given as part of the voting scheme. In both cases, the voter can output a message \(\textsf{msg}\) to the coercer, which can include the (faked) \(\texttt {View}\) of the interaction between the voter and the voting device plus auxiliary information such as random coins.

We have kept \(\textsf{Instr}\) and \(\textsf{msg}\) abstract since they depend on the voting protocol and values obtained when accessing the new tools. When proving a specific protocol secure they should be made specific to facilitate the security proof.

As mentioned above, the coercer will get access to the ballot \(\textsf{ballot}\) produced by \({\mathcal{V}\mathcal{D}}\) in the end. We get this ballot from the final state of \({\mathcal{V}\mathcal{D}}\) using the algorithm \(\textsf {Vote}\). Finally, we extract the underlying vote enclosed in the ballot using the algorithm \(\textsf{Extract}\). We use this to ensure that the voter following the coercion-evasion strategy really casts the preferred vote \(\textsf {Vote}_{\mathcal {V}}\), and we require that the voter following instructions casts the coercer’s choice \(\textsf {Vote}_\textsf {coerc}\), i.e., we do not consider randomisation attacks or forced abstention. The latter is impossible to protect against when the coercer sees the output ballot. Ballot randomisation attacks are interesting but outside the scope of this paper; they could be modelled using a similar type of definition by allowing \(\textsf {Vote}_\textsf {coerc}\ne \textsf {Vote}_{\mathcal {V}}\).

We use abbreviations for the constraints in the game code on the vote choices using \(\textsf{Require} \, \cdot \) which stands for ‘ if not \(\cdot \) then \(\mathsf {Stop\,} \textsf{with} \, \bot \) ’ and \(\textsf{Promise} \, \cdot \) which stands for ‘ if not \(\cdot \) then \(\mathsf {Stop\,} \textsf{with} \, \top \) ’.
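Read operationally, \(\textsf{Require}\) and \(\textsf{Promise}\) are early-exit statements. A minimal Python sketch, encoding \(\bot \) as 0 and \(\top \) as 1 (the helper names and this encoding are ours):

```python
class Stop(Exception):
    """Raised to terminate the experiment immediately with a fixed result."""
    def __init__(self, result):
        self.result = result

def require(condition):
    """Require c: if not c, then Stop with ⊥ (encoded here as 0)."""
    if not condition:
        raise Stop(0)

def promise(condition):
    """Promise c: if not c, then Stop with ⊤ (encoded here as 1)."""
    if not condition:
        raise Stop(1)

def run(experiment):
    """Run an experiment body, turning an early Stop into its result."""
    try:
        return experiment()
    except Stop as stop:
        return stop.result
```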

Definition 1

(Vote Casting Phase Coercion-Resistance) The protocol \(\textsf {Vote}\) enjoys Coercion-Resistance for the Vote Casting Phase, \(\mathsf {VC- CR}\), if there exists a PPT voter algorithm \({\mathcal {V}}\) such that for all vote choices \(\textsf {Vote}_{\mathcal {V}}\ne \textsf {Vote}_\mathcal {C}\) and for any polynomial-time adversary \({\mathcal {A}}\) we have that \(\textsf{Adv}^{\mathsf {vc-cr}}_{{\mathcal {A}}}(\mathrm {\lambda }) = \biggl | \textrm{Pr}\big [{\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}}}^{\mathsf {VC- CR}, 0}(\mathrm {\lambda })}\big ] - \textrm{Pr}\big [{\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}}}^{\mathsf {VC- CR}, 1}(\mathrm {\lambda })}\big ] \biggr |\) is a negligible function of the security parameter \(\mathrm {\lambda }\).

Fig. 1.
figure 1

On the left, the experiment for Coercion-Resistance for the Vote Casting Phase, \(\mathsf {VC- CR}\); on the right, the experiment for Receipt-Freeness, \(\mathsf {VC- RF}\). Below, the vote-device oracle.

There are many variations of this definition; see [18] for a more exhaustive list:

  • Minimal Instructions, \(\mathsf {VC- CR}(\mathsf {Min- Instr})\): Here the coercer just needs to output the desired vote \(\textsf {Vote}_\mathcal {C}\) as instruction.

  • Known coerced vote, \(\mathsf {VC- CR}(\mathsf {Known- Vote})\): Note that \(\textsf {Vote}_\textsf {coerc}\) is not explicitly given to the voter algorithm \({\mathcal {V}}\) to model that the voter might just get some instructions without knowing what the desired vote of the coercer is. A weaker definition can be made where \(\textsf {Vote}_\textsf {coerc}\) is known to the voter.

  • No secret in instruction, \(\mathsf {VC- CR}(\mathsf {No- Secret- in- Instr})\): In this case the coercer need not keep secrets from the coerced voter. We model this by letting \({\mathcal {A}}_1\) be a deterministic algorithm using some random tape r, which is included as part of the instructions \(\textsf{Instr}\) together with the algorithm \({\mathcal {A}}_1\) itself. In this case the state \(\textsf {state}_{\mathcal {A}}\) used by \({\mathcal {A}}_2\) can simply be \(\textsf{Instr}\).

  • No secret for classifier, \(\mathsf {VC- CR}(\mathsf {No- Secret- in- Classifier})\): We can make a weaker definition where the coercer does not need to remember a secret used in the instructions in order to classify whether the coerced voter follows the instructions or not. We model this by giving the algorithm \({\mathcal {A}}_2\), where the adversary decides which world he is in, only the instructions \(\textsf{Instr}\) and not \(\textsf {state}_{\mathcal {A}}\). This means the verification could be done by any party knowing only the instructions from the coercer.

  • Finally, we note that the above variants can be combined. For example, \(\mathsf {VC- CR}(\mathsf {Known- Vote},\mathsf {No- Secret- in- Instr},\mathsf {Minimal- Classifier})\) covers the case where the voter knows the coercer’s vote choice, there are no secrets in the instructions, and no secrets or instructions are used in the classifier.

4.1.1 Oracles for the Coercer’s Toolbox.

To model coercion via the new tools, we slightly modify the security game by giving the coercer and the voter access to extra oracles. For space reasons, we mention only two oracles; see [18] for a longer list. The oracle \(\mathcal {O}_\mathsf {Timed-CC}\) captures an append-only immutable chain of commitments with time stamps. It starts with an empty list \({\textbf {List}}_0\) and supports two functions, \({\textbf {TimedCommit}}(x)\) and \({\textbf {Return}}\). On its i-th invocation, \({\textbf {TimedCommit}}(x)\) calls \({\textbf {RequestTime}}\) on a global time oracle to obtain the current time t and updates the internal list \({\textbf {List}}_{i} \leftarrow {\textbf {List}}_{i-1} || (x || t)\). On its i-th invocation, \({\textbf {Return}}\) outputs the list \({\textbf {List}}_i\). The oracle \(\mathcal {O}_\textsf{TPM}\) captures the functionality of a trusted platform module. The exact interface can depend on the particular emulated module, but it can run a program determined by the coercer without leaking internal secrets, and it allows the voter to supply inputs to this program and obtain the outputs.
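As an illustrative sketch (not the paper's formal oracle), \(\mathcal {O}_\mathsf {Timed-CC}\) can be written in Python as follows; the `request_time` parameter stands in for the global time oracle's \({\textbf {RequestTime}}\) and defaults to the system clock, and the class and method names are our own:

```python
import time

class TimedCommitmentOracle:
    """Sketch of O_Timed-CC: an append-only, timestamped list of commitments."""

    def __init__(self, request_time=time.time):
        self._list = []                     # List_0 is the empty list
        self._request_time = request_time   # stand-in for RequestTime

    def timed_commit(self, x):
        """TimedCommit(x): List_i <- List_{i-1} || (x || t), t from the time oracle."""
        t = self._request_time()
        self._list.append((x, t))

    def ret(self):
        """Return: output the current List_i (a copy, so the oracle's
        internal list stays append-only from the caller's point of view)."""
        return list(self._list)
```

Returning a copy in `ret` is what makes the chain immutable toward the callers: the coercer and the voter can read every timestamped commitment but can only ever extend the list through `timed_commit`.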

4.2 Receipt-Freeness

Receipt-freeness for the vote-casting phase, \(\mathsf {VC- RF}\), is defined much like coercion-resistance, using the experiment \(\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}},\textsf {Sim}}^{\mathsf {VC- RF}, b}(\mathrm {\lambda })\) in Fig. 1. The point of our definition is that for any (malicious) voter algorithm \({\mathcal {V}}\) trying to obtain a receipt, in the form of some information \(\textsf{msg}\), for her vote \(\textsf {Vote}_\textsf {sell}\), there exists a simulator that casts a vote for another choice \(\textsf {Vote}_\textsf {own}\) but gives information \(\textsf{msg}\) which is indistinguishable from the claimed receipt. This models that the voter can cheat a coercer or vote-buyer by voting for a different option.

Definition 2

(Vote Casting Phase Receipt-Freeness) The protocol \(\textsf {Vote}\) enjoys Receipt-Freeness for the Vote Casting Phase, \(\mathsf {VC- RF}\), if there exists a simulator \(\textsf {Sim}\) such that for all vote choices \(\textsf {Vote}_\textsf {sell}\ne \textsf {Vote}_\textsf {own}\), for all PPT algorithms \({\mathcal {V}}\) (corresponding to a malicious voter trying to obtain a receipt) and for all polynomial-time adversaries \(\mathcal {A}\) we have that \(\textsf{Adv}_{\mathcal {A}}^{\mathsf {vc-rf}}(\mathrm {\lambda }) = \biggl | \textrm{Pr}\big [{\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}},\textsf {Sim}}^{\mathsf {VC- RF}, 0}(\mathrm {\lambda })}\big ] - \textrm{Pr}\big [{\textsf{Exp}_{{\mathcal {A}}, {\mathcal {V}},\textsf {Sim}}^{\mathsf {VC- RF}, 1}(\mathrm {\lambda })}\big ] \biggr |\) is a negligible function of the security parameter \(\mathrm {\lambda }\).

Again we can define variants of this definition, see [18].

4.2.1 Implications and Separations Between Definitions.

In the long version of this paper [18] we show relations between the different flavours of the definitions and give separating examples. In particular, we give an (informal) proof of the following theorem, which demonstrates the usefulness of the definitions.

Theorem 1

Receipt-freeness \(\mathsf {VC- RF}\) implies the constrained coercion-resistance \(\mathsf {VC- CR}(\mathsf {Known- Vote},\mathsf {No- Secret- in- Instr},\mathsf {Minimal- Classifier})\), and with access to a timed commitment oracle \(\mathcal {O}_\mathsf {Timed-CC}\) (see Sect. 4.1.1) they are equivalent.

The proof follows by relating the simulator \(\textsf {Sim}\) in \(\mathsf {VC- RF}\) to the coercion-evasion algorithm \({\mathcal {V}}\) in \(\mathsf {VC- CR}\), and correspondingly relating the voter algorithm \({\mathcal {V}}\) in \(\mathsf {VC- RF}\) to \({\mathcal {V}}_\textsf{Instr}\) in \(\mathsf {VC- CR}\). The timed commitment oracle is needed because in the coercion game the adversary gives instructions before voting, whereas in the vote-seller experiment the output message is produced after voting; without the oracle, the vote-seller could choose, after voting, a favourable coercion instruction that fits the view of the voter interaction.

We also conjecture that \(\mathsf {VC- RF}\) implies \(\mathsf {VC- CR}(\mathsf {Known- Vote})\) given access to the trusted hardware module \(\mathcal {O}_\textsf{TPM}\): the vote-seller can commit to a coercion instruction, let the module output the coercer’s instructions, and in the end have it output all secrets to the distinguisher, including an attestation of what it was running.

In Fig. 2 we present some of the relations between the security definitions.

Fig. 2.
figure 2

Implications between the new security definitions

5 Conclusion and Future Work

We studied previously unexplored coercion and vote-selling attacks based on new cryptographic primitives such as blockchains, delay functions, time-lock encryption, etc. Our investigation gave many examples of how a coercer can force voters to comply with his demands by relying on these new tools. We described some of the possible attacks and sketched others. Since some of the attacks succeed against voting schemes that were claimed, or even proven, to be coercion-resistant or receipt-free, the main conclusion of the first part of this work is that the existing coercion models should be re-evaluated and new definitions are required. Such definitions, presented in Sect. 4, lead to some interesting lines of future work. For example, it would be interesting to prove that no scheme can be coercion-resistant if the coercer uses \(\textsf{Token}\), that a trusted \(\mathcal {V}\mathcal {D}\) is essential in our model, or that no voter interacting with the coercer \(\mathcal {C}\) directly can enjoy CAI verification and coercion-resistance at the same time.