
1 Introduction

A key challenge in electronic voting is to provide verifiability and coercion resistance at the same time. The goal of end-to-end verifiable (E2E-V) electronic voting systems is to provide evidence to voters that their vote is correctly counted, while giving them the ability to deny their real vote in the presence of a coercer. In addition, for many elections, it is critical to maintain the privacy of voters into the future. Considering the pace of advances in computational power and quantum computing, it is necessary to protect privacy against future adversaries with more powerful computational resources. The term everlasting privacy was coined by Moran and Naor [14] to address this concern by guaranteeing privacy against computationally unbounded adversaries. A more realistic version of everlasting privacy, called practical everlasting privacy, was then introduced in [1]; it limits the information a future adversary can access to the information posted on the bulletin board to enable verification of the tally.

In this paper, we propose a protocol that simultaneously provides verifiability, everlasting privacy, and coercion mitigation against a computationally unbounded coercer. The protocol is based on Hyperion [8], which provides highly transparent verifiability and coercion mitigation against a limited coercer but does not offer everlasting privacy. Furthermore, we increase the coercer’s power to interact with the voter at any time, even before the beginning of the casting phase, to ensure higher levels of coercion resistance.

The main changes applied to Hyperion are the use of a perfectly private audit trail based on perfectly hiding commitments [7], while ensuring that the Hyperion terms are likewise perfectly hiding, and the use of re-randomisation of the published terms to achieve receipt-freeness, with the Hyperion-style verification ensuring verifiability of the election result.

Related Work Recently, an extensive survey by Haines et al. [13] on e-voting systems with everlasting privacy (EP) identified the design of a system that satisfies both EP and receipt-freeness without the use of anonymous submission channels as an open challenge. This is important since, according to Haines et al., the approaches that achieve EP based on privacy-preserving techniques are superior to the ones based on anonymous channels. In [9], the authors solve this problem by designing the first universally verifiable e-voting system with EP and receipt-freeness, using the perfectly private audit trail (PPAT) technique of [7]. According to [13], the PPAT approach is a reasonable solution for achieving EP. In this work we also follow this approach, employing the PPAT technique not only to achieve EP but also everlasting receipt-freeness.

Structure of the Paper In the next section, we introduce the preliminaries to provide a better understanding of our scheme. We describe the voter’s perspective of our proposed e-voting system in Sect. 3 and the protocol in detail in Sect. 4. In Sect. 5, we provide proof of the security properties.

2 Preliminaries

This section begins by presenting the notation utilized throughout the paper, followed by an overview of the parties engaged in the protocol, and concludes with an explanation of the cryptographic primitives employed in the voting protocol.

Notation We use bold letters to denote vectors, e.g., \(\textbf{e,E}\), and sans-serif letters to denote algorithms, e.g., \(\textsf{Shuffle}\). The bold letter \(\textbf{G}\) denotes a cyclic group of prime order q, such as \(\textbf{G}_1\), \(\textbf{G}_2\), and \(\textbf{G}_T\). Group generators are denoted as \(g \in \textbf{G}_1\) and \(h \in \textbf{G}_2\). Public and secret keys are denoted pk and sk, and may include an index to indicate the key holder. The set of all permutations of size n is denoted by \(\mathcal {S}_n\), with a specific permutation represented by \(\pi \) and its inverse by \(\pi ^{-1}\). We denote a zero-knowledge proof by \(\Pi \), a commitment-consistent proof by \(\textsf{P}\) (see Sect. 2), and the voter’s signature by \(\sigma \).

Parties The voting protocol is run between the following parties.

  • Voters (\(\textsf{V}\)): n voters \(\textsf{V}_1,\textsf{V}_2,\ldots ,\textsf{V}_n\) with identities \(\textsf{ID}_1,\textsf{ID}_2,\ldots ,\textsf{ID}_n\).

  • Election Authority (\(\textsf{EA}\)): responsible for generating the election parameters.

  • Talliers (\(\textsf{T}\)): k talliers \(\textsf{T}_1,\textsf{T}_2,\ldots ,\textsf{T}_k\) who threshold-share the secret key and cooperate to decrypt the ciphertexts.

  • Mix-servers (\(\textsf{M}\)): N mix servers \(\textsf{M}_1,\textsf{M}_2,\ldots ,\textsf{M}_N\) responsible for the mixing part of the tally phase.

  • Bulletin Board (\(\textsf{PBB}\) and \(\textsf{SBB}\)): similar to [7], we assume two secure broadcast channels: \(\textsf{PBB}\) is the public board for all participants and \(\textsf{SBB}\) is the secret board shared with \(\textsf{T}\) and \(\textsf{M}\). All designated participants share the same view of each board.

  • Election Admin (\(\textsf{Adm}\)): responsible for checking the validity of the ciphertexts submitted by \(\textsf{V}\), and for relaying information between \(\textsf{V}\) and \(\textsf{SBB}\).

Cryptographic Primitives In the electronic voting protocol proposed in Sect. 4, we use the following cryptographic primitives.

Signature. Any secure public key signature scheme is suitable for our needs. We write \(\textsf{Sign}_{sk}(m)\) for the signature on message m using the secret signing key sk, and \(\textsf{Verify}_{vk}(\sigma _m)\) for the verification of a signature \(\sigma _m\) with verification key vk.

Elliptic Curve. Similar to [7], a bilinear map in the SXDH setting is denoted by \(e:\textbf{G}_1\times \textbf{G}_2\rightarrow \textbf{G}_T\), where there are no efficiently computable homomorphisms between \(\textbf{G}_1\) and \(\textbf{G}_2\). In this setting, the following two problems are assumed to be computationally intractable in both groups \(\textbf{G}_1\) and \(\textbf{G}_2\); the security of our protocol relies on these assumptions.

  • Decisional Diffie-Hellman (DDH): given \(g^{x_1}\) and \(g^{x_2}\) for uniformly and independently chosen \(x_1,x_2\in \mathbb {Z}_q\), \(g^{x_1x_2}\) is indistinguishable from a random value in \(\textbf{G}\).

  • Computational 1-Diffie-Hellman Inversion (1-DHI): given \(g^x\in \textbf{G}\) with \(x\in \mathbb {Z}_q\), it is intractable to compute \(g^{1/x}.\)

Threshold ElGamal Encryption (TEG). We use the IND-CPA threshold ElGamal encryption system in [6, 11] to encrypt a message m in group \(\textbf{G}_1\) with generator g. Let sk be the encryption secret key and \(pk:=g^{sk}\) the public key. Using the \((t,k)\)-threshold Shamir secret sharing (\(\textsf{SSS}\)) scheme, we split sk into k shares: \(\textsf{SSS}(sk)=(sk_i)_{i=1}^{k}\), distributing them among the shareholders \(\textsf{T}_i\). Each \(\textsf{T}_i\) then computes their verification key \(vk_i:=g^{sk_i}\). The \(\textsf{Enc}\) and \(\textsf{Dec}\) algorithms work as follows:

  • \(\textsf{Enc}_{pk}(m;r)=(m\cdot pk^r,g^r)\), with \(r\in \mathbb {Z}_q\).

  • To decrypt a ciphertext \((a,b)\in \textbf{G}^2_1\), each \(\textsf{T}_i\) takes the following steps: 1) Compute \(b_i:=b^{sk_i}\). 2) Use \(vk_i\) to prove \(\log _{g}{vk_i} = \log _{b}{b_i}\). 3) Without loss of generality, assume the proof verifies for every i in \(S_t:=\{1,\ldots ,t\}\). 4) Compute the Lagrange coefficients of the interpolation \(sk = \sum _{i=1}^{t}sk_i\cdot \lambda _{0,i}^{S_t}\) as \(\lambda _{0,i}^{S_t}=\prod _{j\in S_t\backslash \{i\}}\frac{j}{j-i}\). 5) Compute \(\prod _{i\in S_t}b_i^{\lambda _{0,i}^{S_t}}=\prod _{i\in S_t}b^{sk_i\cdot \lambda _{0,i}^{S_t}}=b^{sk}\). 6) Derive \(\textsf{Dec}_{sk}(a,b)=a/b^{sk}\).
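To make the interpolation step concrete, the following is a minimal Python sketch of \((t,k)\)-threshold ElGamal decryption over a toy safe-prime group standing in for \(\textbf{G}_1\); the group parameters, sizes, and helper names are illustrative assumptions only, not part of the protocol.

```python
# Sketch of (t,k)-threshold ElGamal with Shamir sharing and Lagrange interpolation
# in the exponent. Toy parameters, for illustration only; NOT secure sizes.
import random

p, q, g = 2039, 1019, 4        # toy safe-prime group (p = 2q + 1), g of order q

def share(sk, t, k):
    """Shamir-share sk with threshold t among k talliers: f(0) = sk, deg(f) = t-1."""
    coeffs = [sk] + [random.randrange(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, k + 1)}

def enc(pk, m, r=None):
    r = random.randrange(q) if r is None else r
    return (m * pow(pk, r, p) % p, pow(g, r, p))           # (a, b) = (m * pk^r, g^r)

def lagrange(i, S):
    """Lagrange coefficient lambda_{0,i}^S over Z_q."""
    num, den = 1, 1
    for j in S:
        if j != i:
            num = num * j % q
            den = den * (j - i) % q
    return num * pow(den, -1, q) % q

def threshold_dec(shares, S, a, b):
    """Combine partial decryptions b_i = b^{sk_i}, i in S, |S| = t, and recover m."""
    b_sk = 1
    for i in S:
        b_i = pow(b, shares[i], p)                         # tallier T_i's decryption share
        b_sk = b_sk * pow(b_i, lagrange(i, S), p) % p      # prod_i b_i^{lambda_i} = b^sk
    return a * pow(b_sk, -1, p) % p                        # m = a / b^sk

sk = random.randrange(1, q)
pk = pow(g, sk, p)
shares = share(sk, t=3, k=5)
m = pow(g, 42, p)                                          # message encoded in G_1
a, b = enc(pk, m)
assert threshold_dec(shares, [1, 3, 5], a, b) == m
```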

Commitment Consistent Encryption (CCE). We use the Perfectly Private Audit Trail for Complex ballots (PPATC), which is the mix-net version of CCE proposed in [7]. Let \(m\in \textbf{G}_1\) be a message and \(pk_T\) be the talliers’ joint public key. Given a ciphertext \(e=\textsf{Enc}_{pk_T}(m)\), the goal is to derive a commitment \(c=\textsf{Com}(m;r)\) with randomness \(r\in \mathbb {Z}_q\) to the same encrypted message m. As explained in [7], for simplicity the opening value of the commitment is considered to be r instead of (m, r). Concretely, the talliers hold the following keys: \(sk_T=(x_1,x_2)\in \mathbb {Z}_q^2\) and \(pk_T=(g_1:=g^{x_1},g_2:=g^{x_2})\in \textbf{G}_1^2\). The public set of parameters is \(prm:=\{q,\textbf{G}_1,\textbf{G}_2,g,h,h_1\}\), where g and h are the generators of \(\textbf{G}_1\) and \(\textbf{G}_2\) respectively and \(h_1\in \textbf{G}_2\). Let \({\textbf{r}}=(r,r_1,r_2)\in _R\mathbb {Z}_q^3\) be the randomness. The CCE ciphertext, denoted \(\textsf{CCE}(prm,m;\textbf{r})\), is a tuple \(({\textbf{e}}, {\textbf{c}})=(e_1,e_2,e_3,c_1,c_2)\) where \(e_1:=g^{r_1},\ e_2:=g^{r_2},\ e_3:=g_1^{r}g_2^{r_2}\) and \(c_1:=h^rh_1^{r_1},\ c_2:=mg_1^{r_1}\). In fact, \(\textbf{e}\) is the TEG encryption of the opening value \(\textbf{r}\), and \(\textbf{c}\) is the desired commitment to the message m. Throughout this paper, for the sake of simplicity, we sometimes use \(\textbf{cce}\) instead of \(({\textbf{e}}, {\textbf{c}})\).

In our protocol, we use three more algorithms from [7] for the CCE ciphertext: \({\text {Dec}}\), \(\textsf{Open}\), and \(\textsf{Verify}\). These algorithms work as follows: \({\text {Dec}}_{sk_T}(\textbf{cce}) :=c_2/{e_1}^{x_1}\), \(\textsf{Open}(sk_T, \textbf{cce}):=e_3/e_2^{x_2}\), \(\textsf{Verify}(pk_T, c_1, c_2, m, o):= 1\ \text {if} \ e(g_1,c_1)=e(o,h)e(c_2/m,h_1)\) and 0 otherwise, with o being the opening value of \(\textbf{cce}\).
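As a concrete illustration of the PPATC structure and of the \({\text {Dec}}\) and \(\textsf{Open}\) algorithms, here is a minimal sketch in which both \(\textbf{G}_1\) and \(\textbf{G}_2\) are emulated by the same toy subgroup; the pairing-based \(\textsf{Verify}\) equation therefore cannot be evaluated in this sketch and is omitted. All parameters and helper names are illustrative assumptions.

```python
# Sketch of the PPATC commitment-consistent encryption, with Dec and Open.
# G_1 and G_2 are both emulated by one toy subgroup; illustration only.
import random

p, q = 2039, 1019              # toy safe-prime group
g, h, h1 = 4, 9, 16            # stand-in generators (squares mod p, all of order q)

def keygen():
    x1, x2 = random.randrange(1, q), random.randrange(1, q)
    return (x1, x2), (pow(g, x1, p), pow(g, x2, p))        # sk_T = (x1, x2), pk_T = (g1, g2)

def cce(pk_T, m):
    g1, g2 = pk_T
    r, r1, r2 = (random.randrange(q) for _ in range(3))
    e1, e2 = pow(g, r1, p), pow(g, r2, p)
    e3 = pow(g1, r, p) * pow(g2, r2, p) % p
    c1 = pow(h, r, p) * pow(h1, r1, p) % p                 # commitment part (lives in G_2)
    c2 = m * pow(g1, r1, p) % p
    return (e1, e2, e3, c1, c2)

def dec(sk_T, ct):
    x1, _ = sk_T
    e1, _, _, _, c2 = ct
    return c2 * pow(pow(e1, x1, p), -1, p) % p             # m = c2 / e1^{x1}

def open_commitment(sk_T, ct):
    _, x2 = sk_T
    _, e2, e3, _, _ = ct
    return e3 * pow(pow(e2, x2, p), -1, p) % p             # o = e3 / e2^{x2} = g1^r

sk_T, pk_T = keygen()
vote = pow(g, 7, p)                                        # vote encoded in G_1
ct = cce(pk_T, vote)
assert dec(sk_T, ct) == vote
o = open_commitment(sk_T, ct)   # opening value for (c1, c2); Verify would use the pairing e(.,.)
```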

Sigma Protocol. We use the validity proof of CCE in [7] but in an interactive manner: here, the verifier generates the challenge ch randomly. More precisely, let \(({\textbf{e}}, {\textbf{c}})=(e_1,e_2,e_3,c_1,c_2)\). We build a Sigma (\(\Sigma \)) protocol with the transcript \((\textbf{a},\text {ch},\textbf{z})\) for the relation \(\mathcal {R}=\{((prm,(\textbf{e},\textbf{c})),(m,\textbf{r})): \textsf{CCE}(prm,m;\textbf{r})=(\textbf{e},\textbf{c})\}\) as follows: the prover generates random values \(s,s_1,s_2\in _R\mathbb {Z}_q\) and computes \((e_1^\prime ,e_2^\prime ,e_3^\prime ,c_1^\prime ):=(g^{s_1},g^{s_2},g_1^{s}g_2^{s_2},h^sh_1^{s_1})\). Then, she sends \((e_1^\prime ,e_2^\prime ,e_3^\prime ,c_1^\prime )\) as \(\textbf{a}\) to the verifier. The verifier generates and sends a uniformly random challenge ch to the prover. The prover computes the response \(\textbf{z}=(z,z_1,z_2)\) with \(z:=s+ch\cdot r,\ z_1:=s_1+ch\cdot r_1,\ z_2:=s_2+ch\cdot r_2\) and sends \(\textbf{z}\) to the verifier. The verifier computes \(e_1^{''}:=\frac{g^{z_1}}{{e_1}^{ch}},\ e_2^{''}:=\frac{g^{z_2}}{{e_2}^{ch}},\ e_3^{''}:=\frac{{g_1}^{z}{g_2}^{z_2}}{{e_3}^{ch}},\ c_1^{''}:=\frac{{h}^{z}{h_1}^{z_1}}{{c_1}^{ch}}\). If all equalities \(e_1^{''}=e_1^\prime ,\ e_2^{''}=e_2^\prime ,\ e_3^{''}=e_3^\prime \) and \(c_1^{''}=c_1^\prime \) hold, she returns 1; otherwise, 0.
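To make the message flow concrete, here is a minimal Python sketch of this interactive proof in the same toy group as the CCE sketch above (all names are illustrative, not the paper's notation); the voter plays the prover and the \(\textsf{Adm}\) the verifier.

```python
# Sketch of the interactive Sigma protocol for the relation R: the prover shows
# that (e, c) is a well-formed CCE ciphertext of some message. Illustration only.
import random

p, q = 2039, 1019
g, h, h1 = 4, 9, 16

def prove_commit(g1, g2):
    """First prover move: commit to fresh randomness (s, s1, s2) and output a."""
    s, s1, s2 = (random.randrange(q) for _ in range(3))
    a = (pow(g, s1, p), pow(g, s2, p),
         pow(g1, s, p) * pow(g2, s2, p) % p,
         pow(h, s, p) * pow(h1, s1, p) % p)
    return (s, s1, s2), a

def prove_respond(state, ch, r, r1, r2):
    """Second prover move: respond to the verifier's challenge ch."""
    s, s1, s2 = state
    return ((s + ch * r) % q, (s1 + ch * r1) % q, (s2 + ch * r2) % q)

def sigma_verify(g1, g2, ct, a, ch, z):
    e1, e2, e3, c1, _ = ct
    z0, z1, z2 = z
    inv = lambda x: pow(x, -1, p)
    recomputed = (pow(g, z1, p) * inv(pow(e1, ch, p)) % p,                   # e1''
                  pow(g, z2, p) * inv(pow(e2, ch, p)) % p,                   # e2''
                  pow(g1, z0, p) * pow(g2, z2, p) * inv(pow(e3, ch, p)) % p, # e3''
                  pow(h, z0, p) * pow(h1, z1, p) * inv(pow(c1, ch, p)) % p)  # c1''
    return recomputed == a

# usage: the prover knows the CCE randomness (r, r1, r2) of ct; the verifier samples ch.
x1, x2 = random.randrange(1, q), random.randrange(1, q)
g1, g2 = pow(g, x1, p), pow(g, x2, p)
m = pow(g, 7, p)
r, r1, r2 = (random.randrange(q) for _ in range(3))
ct = (pow(g, r1, p), pow(g, r2, p), pow(g1, r, p) * pow(g2, r2, p) % p,
      pow(h, r, p) * pow(h1, r1, p) % p, m * pow(g1, r1, p) % p)
state, a = prove_commit(g1, g2)
ch = random.randrange(q)          # fresh challenge chosen by the verifier (the Adm)
z = prove_respond(state, ch, r, r1, r2)
assert sigma_verify(g1, g2, ct, a, ch, z)
```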

In [7] this is transformed into a non-interactive proof using the Fiat-Shamir transformation. In our case, it is important that the proof is fresh; in particular, we do not want a coercer to forward a ciphertext with an unknown vote to the voter and ask her to submit it. To avoid this, we can use the interactive proof described above, from which the plaintext message is extractable. Alternatively, one can make a two-move protocol, where the authorities first send a challenge to the voter, and the voter includes this challenge in the hash used for the Fiat-Shamir transformation.

Non-interactive Zero Knowledge Proof of Knowledge (NIZKPoK). To prove knowledge of a secret s, we use the Schnorr proof [15] and combine it with the (strong) Fiat-Shamir transformation [3] to achieve a non-interactive proof system. We then use the notation NIZKPoK(s) for the combination.
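As a minimal illustration, the following sketch instantiates NIZKPoK(x) for a statement \(pk=g^x\) with a Schnorr proof made non-interactive via the strong Fiat-Shamir transformation (hashing the statement and the first message into the challenge), again in the toy group; the identity-binding context string is an assumption reflecting the requirement in Sect. 4 that the proof be bound to the voter's identity.

```python
# Sketch of NIZKPoK(x) for pk = g^x: Schnorr proof + strong Fiat-Shamir.
# Toy group; illustration only.
import hashlib, random

p, q, g = 2039, 1019, 4

def nizkpok_prove(x, context=b""):
    pk = pow(g, x, p)
    s = random.randrange(q)
    a = pow(g, s, p)
    ch = int.from_bytes(hashlib.sha256(f"{g}|{pk}|{a}".encode() + context).digest(), "big") % q
    z = (s + ch * x) % q
    return pk, (a, z)

def nizkpok_verify(pk, proof, context=b""):
    a, z = proof
    ch = int.from_bytes(hashlib.sha256(f"{g}|{pk}|{a}".encode() + context).digest(), "big") % q
    return pow(g, z, p) == a * pow(pk, ch, p) % p          # g^z == a * pk^ch

x = random.randrange(1, q)
pk, proof = nizkpok_prove(x, context=b"ID_i")              # bound to the voter's identity
assert nizkpok_verify(pk, proof, context=b"ID_i")
```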

Re-randomization (ReRand). We use this primitive to re-randomize a CCE ciphertext, a TEG ciphertext, and a public key of the form \(g^x\). For \(\textbf{cce}=\textsf{CCE}(prm,m;\textbf{r})\) with \(\textbf{r}\in _R\mathbb {Z}_q^3\), we multiply the ciphertext by an encryption of the identity element \(\textrm{1}\in \textbf{G}_1\) under fresh randomness. For \(\textbf{E}={\text {Enc}}_{pk}(m;r)=(m\cdot pk^r,g^r)\) with \(r\in \mathbb {Z}_q\), we exponentiate the ciphertext by a first random value and re-randomize it with a second random value. Lastly, for the public key we exponentiate it by a random value:

  • \(\textsf{ReRand}(\textbf{cce},\textbf{r}^\prime ):=\textbf{cce}\cdot \textsf{CCE}(prm,\textrm{1};\mathbf {r^\prime })=\textsf{CCE}(prm,m;\textbf{r}^{\prime \prime })\) with \(\textbf{r}^\prime \in _R\mathbb {Z}_q^3\) and \(\textbf{r}^{\prime \prime }=\textbf{r}+\textbf{r}^\prime \).

  • \(\textsf{ReRand}(\textbf{E},s,r^\prime ):=\textsf{ReRand}(\textsf{Exp}(\textbf{E},s),r^\prime )={\text {Enc}}_{pk}(m^s;rs+r^\prime )\) with \(\textsf{Exp}(\textbf{E},s):=((m\cdot pk^r)^s,(g^r)^s)\) and \(s,r^\prime \in \mathbb {Z}_q\).

  • \(\textsf{ReRand}(pk,s):=pk^{s}=(g^x)^{s}\) for \(s\in \mathbb {Z}_q\).

For the sake of brevity, we write \(\textsf{ReRand}\{(\textbf{cce},\textbf{r}^\prime ),(\textbf{E},s,r^\prime ),(pk,s)\}\) instead of \(\{\textsf{ReRand}(\textbf{cce},\textbf{r}^\prime ),\textsf{ReRand}(\textbf{E},s,r^\prime ),\textsf{ReRand}(pk,s)\}\). Moreover, we omit the randomness when it is not needed for further computations.
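For concreteness, the three operations can be sketched as follows, continuing in the toy group of the CCE sketch above (helper names are ours, not the paper's); the CCE case simply multiplies component-wise by \(\textsf{CCE}(prm,1;\textbf{r}^\prime )\).

```python
# Sketch of the three ReRand operations in the toy group. Illustration only.
import random

p, q = 2039, 1019
g, h, h1 = 4, 9, 16            # toy parameters as in the CCE sketch above

def rerand_cce(pk_T, ct, rp):
    """Multiply component-wise by CCE(prm, 1; r'); plaintext and opening stay decryptable."""
    g1, g2 = pk_T
    r, r1, r2 = rp
    e1, e2, e3, c1, c2 = ct
    return (e1 * pow(g, r1, p) % p,
            e2 * pow(g, r2, p) % p,
            e3 * pow(g1, r, p) * pow(g2, r2, p) % p,
            c1 * pow(h, r, p) * pow(h1, r1, p) % p,
            c2 * pow(g1, r1, p) % p)

def rerand_teg(pk, E, s, rp):
    """Exp(E, s) followed by a standard ElGamal re-randomisation: encrypts m^s afterwards."""
    a, b = E
    a, b = pow(a, s, p), pow(b, s, p)
    return (a * pow(pk, rp, p) % p, b * pow(g, rp, p) % p)

def rerand_pk(pk_i, s):
    return pow(pk_i, s, p)                                 # (g^x)^s
```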

Shuffle. Each mix server \(\textsf{M}_j\), \(j\in \{1,\cdots ,N\}\), uses the \(\textsf{Shuffle}\) algorithm to permute and re-randomize the CCE ciphertexts together with the ElGamal encryption \(\textbf{E}\) of a verification term, and the voters’ public keys of the form \(g^x\) for a secret x. More precisely, assume we have a list of size n as \(\{(\textbf{e}_i,\textbf{c}_i),\textbf{E}_i,pk_i\}_{i\in \{1,\ldots ,n\}}\) before mixing, where \((\textbf{e}_i,\textbf{c}_i):=\textsf{CCE}(prm,m_i;\textbf{r}_i)\) and \(\textbf{E}_i:={\text {Enc}}_{pk_T}(g;0)=(g,1)\) which is the same value for each \(\textsf{V}_i\) in the beginning. The mix server \(\textsf{M}_j\) generates a random permutation \(\pi ^j\in \mathcal {S}_n\), and for each voter \(\textsf{V}_i\) generates a random vector \(\textbf{r}^j_i\in \mathbb {Z}_q^3\), and two random values \(r^j_i,s^j_i\in \mathbb {Z}_q\). Assume \(\{(\textbf{e}^{j-1}_i,\textbf{c}^{j-1}_i),\textbf{E}^{j-1}_i,pk^{j-1}_i\}_i\) is the output of \(\textsf{M}_{j-1}\). Then, \(\textsf{M}_j\) runs the \(\textsf{Shuffle}\) algorithm as follows.

$$\begin{aligned} \textsf{Shuffle}\left( \{(\textbf{e}^{j-1}_i,\textbf{c}^{j-1}_i),\textbf{E}^{j-1}_i,pk^{j-1}_i\}_i,\{\textbf{r}^j_i,s^j_i,r^j_i\}_i,\pi ^j\right) := \\ \{\textsf{ReRand}\{((\textbf{e}^{j-1}_{\pi ^j(i)},\textbf{c}^{j-1}_{\pi ^j(i)}),\textbf{r}^j_{\pi ^j(i)}), (\textbf{E}^{j-1}_{\pi ^j(i)},s^{j}_{\pi ^j(i)},r^{j}_{\pi ^j(i)}),(pk^{j-1}_{\pi ^j(i)},s^{j}_{\pi ^j(i)})\}\}_i \end{aligned}$$

This is the output of \(\textsf{M}_j\), which we denote by \(\{(\textbf{e}^{j}_i,\textbf{c}^{j}_i),\textbf{E}^{j}_i,pk^{j}_i\}_i\). Define \(\textbf{cce}^j:=(\textbf{e}^j,\textbf{c}^j)\). Similar to [7], \(\textsf{M}_j\) computes two commitment-consistent proofs of shuffle with respect to \(\pi ^j\): \(\textrm{P}^{j}_{\textbf{cce}}\) and \(\textrm{P}^{j}_{\textbf{c}}\). The first proof demonstrates that \(\textbf{cce}^j\) is a shuffle of \(\textbf{cce}^{j-1}\), and the second proof demonstrates that \(\textbf{c}^j\) is a shuffle of \(\textbf{c}^{j-1}\). In our case, \(\textsf{M}_j\) extends this proof to also show, first, that \(\textbf{E}^j\) and \(pk^j\) are shuffled in parallel with the same permutation \(\pi ^j\); second, that \(\textbf{E}^j\) is exponentiated with the same randomness \(s^j\) as \(pk^j\); and lastly, knowledge of \(s^j\) such that \(\textsf{ReRand}(pk,s^j)=pk^j\). We denote the extended proofs by \(\textrm{P}^{j}_{\textbf{cce},\textbf{E},pk}\) and \(\textrm{P}^{j}_{\textbf{c},\textbf{E},pk}\).
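For concreteness, one mix server's \(\textsf{Shuffle}\) step can be sketched as follows, reusing the rerand_cce, rerand_teg and rerand_pk helpers and the toy parameters from the re-randomisation sketch above (so this fragment is not standalone); the commitment-consistent shuffle proofs are not reproduced, and the separate ElGamal key pk_el used for the \(\textbf{E}\) terms is an illustrative assumption.

```python
# Sketch of one Shuffle step by mix server M_j: each output row i is the
# re-randomised input row pi(i), with the same exponent s applied to the E term
# and the Hyperion public key. Proofs of shuffle omitted; illustration only.
import random

q = 1019                       # toy subgroup order, as in the sketches above

def shuffle_step(pk_T, pk_el, rows):
    """rows: list of (cce, E, pk_i) tuples output by the previous mix server.
    pk_T = (g1, g2) is the CCE key; pk_el is the ElGamal key the E terms are encrypted under."""
    n = len(rows)
    pi = random.sample(range(n), n)                        # random permutation pi^j in S_n
    out = []
    for i in range(n):
        cce, E, pk_i = rows[pi[i]]                         # take input row pi^j(i)
        r_vec = tuple(random.randrange(q) for _ in range(3))
        s, r = random.randrange(1, q), random.randrange(q)
        out.append((rerand_cce(pk_T, cce, r_vec),          # ReRand(cce, r')
                    rerand_teg(pk_el, E, s, r),            # ReRand(E, s, r'): exponentiate, then rerandomise
                    rerand_pk(pk_i, s)))                   # ReRand(pk, s) with the same exponent s
    return pi, out
```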

In [12] the authors present an efficient mixnet for the PPATC scheme, including a machine-verified proof of the protocol. Our mixnet is a straightforward extension: in particular, each mix node commits to its permutation, and this commitment can be reused for the extra terms mixed in parallel. Note that we deliberately kept the public key of the voter and the encryption \(\textbf{E}\) in the same group.

3 The Voter Experience

In this section, we describe the voter’s view of our protocol, firstly in the absence of coercion. Then we describe the actions required in the presence of a coercer. We assume that each voter has a pair of signing keys \((vk_i,sk_i)\) and that each voter’s device generates an ephemeral trapdoor key pair when they join.

3.1 The Base Protocol

  • The voter inputs her vote and her device generates the commitment to this vote and signs the result.

  • The voter’s device signs her public trapdoor key and sends it to the Election Admin along with the signed commitment on the vote and voter’s ID.

The voter’s commitment will be re-randomized and published on \(\textsf{PBB}\) along with her trapdoor public key and ID. Subsequently, information on the \(\textsf{PBB}\) will be shuffled and re-randomized with each step being displayed on the \(\textsf{PBB}\). Finally, the voter will be invited to check \(\textsf{PBB}\).

  • After a certain period, the voter will receive a notification term denoted \(\alpha \) which she inputs to her device to extract her unique tracking number. She will then use this tracking number to verify that her vote appears correctly in the tally on the \(\textsf{PBB}\).

3.2 The Protocol in the Event of Coercion

In the event of coercion, the voter needs to take some additional steps to evade the coercer. The vote casting steps are identical.

  • Once the tally is published, the voter visits \(\textsf{PBB}\), finds an alternative tracking number appearing against the coercer’s required vote, and saves it on her device.

  • The voter’s device computes a fake \(\alpha \) term which, raised to the power of the voter’s secret trapdoor key, results in the alternative tracking number (see the sketch after this list). This step requires the voter’s secret trapdoor key.

  • After a certain period, the voter receives a notification containing the real \(\alpha \) term, inputs it into her device to extract her unique tracking number, and uses this tracking number to verify her vote on \(\textsf{PBB}\).

  • In case the coercer asks for the tracking number or \(\alpha \) term, she can provide him with the fake ones that were computed by her device.
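To make the faking step concrete, here is a minimal sketch of the computation performed by the voter's device, in the toy group used in the sketches of Sect. 2 (variable names are illustrative): the device inverts the trapdoor key in the exponent, so that the fake \(\alpha \) raised to \(x_i\) yields exactly the chosen alternative tracking term.

```python
# Sketch of the fake-alpha computation on the voter's device. Illustration only.
p, q, g = 2039, 1019, 4        # toy group as in the sketches of Sect. 2

def fake_alpha(x_i, alt_tracker):
    """Return a fake alpha such that fake_alpha^{x_i} equals the chosen tracker."""
    return pow(alt_tracker, pow(x_i, -1, q), p)            # alt_tracker^{1/x_i mod q}

# consistency check: raising the fake alpha to x_i gives back the chosen tracker
x_i = 123
alt_tracker = pow(g, 777, p)                               # a pk^N entry copied from PBB
assert pow(fake_alpha(x_i, alt_tracker), x_i, p) == alt_tracker
```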

4 Details of Our Scheme

This section outlines the various stages of our e-voting protocol, encompassing the setup, submission, tally, and verification phases. The parties involved and the cryptographic primitives used in these phases are explained in detail in Sect. 2.

4.1 Setup Phase

During this phase, the \(\textsf{EA}\) generates the election parameters prm, determining the voting method, defining the set of candidates, and providing any other necessary information. This also includes the encryption of the public value g under the talliers’ public key \(pk_T\) as \(E_g:={\text {Enc}}_{pk_T}(g;0)=(g,1)\). Initially, this value is the same for all voters \(\textsf{V}_i\) but will change later; nevertheless, we refer to it as \(E_i\) from the beginning. \(\textsf{EA}\) creates a vector \(\textbf{E}=\{E_i\}_i\) and sends \(\textbf{E}\) to \(\textsf{SBB}\). We also assume that each voter \(\textsf{V}_i\) has a signing key pair \((vk_i,sk_i)\).

4.2 Submission Phase

This phase consists of two parts. In the first part, each voter \(\textsf{V}_i\) prepares their ballot and sends it privately to the \(\textsf{Adm}\). In the second part, the \(\textsf{Adm}\) is responsible for checking the validity of the CCE ciphertext submitted by \(\textsf{V}_i\).

Ballot preparation

  1. Similar to Hyperion [8], \(\textsf{V}_i\) generates an ephemeral trapdoor key \(x_i\in \mathbb {Z}_q\) and computes the public trapdoor key \({pk}_i:={g}^{x_i}\) using her device. Next, \(\textsf{V}_i\) signs \({pk}_i\) as \(\sigma _{i1}:=\textsf{Sign}_{sk_i}({pk}_i)\) and computes the proof \(\Pi _i:=\)NIZKPoK(\(x_i\)). \(\Pi _i\) should be non-malleable and bound to the identity of the voter.

  2. \(\textsf{V}_i\) commits to her vote \(v_i\) using the CCE scheme explained in Sect. 2 by computing the ciphertext \(\textbf{cce}_i:=(\textbf{e}_i,\textbf{c}_i)=\textsf{CCE}(prm,v_i;\textbf{r}_i)\) with \(\textbf{r}_i\in _R\mathbb {Z}_q^3\). Then, she signs it as \(\sigma _{i2}:= \textsf{Sign}_{sk_i}(\textbf{cce}_i)\).

Now, the voter \(\textsf{V}_i\) sends \(\{\textsf{ID}_i,\ {pk}_i,\sigma _{i1},\ \Pi _i,\ \textbf{cce}_i,\ \sigma _{i2}\}\) privately to the \(\textsf{Adm}\).

4.3 Validity Check

The voter \(\textsf{V}_i\), as the prover, and the \(\textsf{Adm}\), as the verifier, run the Sigma protocol \(\Sigma _i\) for the \(\textsf{Adm}\) to check whether \(\textbf{cce}_i\) is a valid CCE ciphertext. If it is, the \(\textsf{Adm}\) proceeds by re-randomizing it as \(\textbf{cce}_i^\prime :=(\textbf{e}_i^\prime ,\textbf{c}_i^\prime )=\textsf{ReRand}(\textbf{cce}_i).\) Finally, \(\textsf{Adm}\) sends \(\textbf{cce}_i\) and \(\textbf{cce}_i^\prime \) together with a proof of correct re-randomization (\(\Pi _i^*\)) to \(\textsf{SBB}\) and publishes \(\{\textsf{ID}_i,{pk}_i,\sigma _{i1},\textbf{c}_i^\prime \}\) on \(\textsf{PBB}\).

4.4 Tally Phase

In this phase, first the mix servers have to run the \(\textsf{Shuffle}\) algorithm one by one and publish the commitments on \(\textsf{PBB}\) and the encryption of the opening values on \(\textsf{SBB}\). Then, the mix-servers have to run the mix-net backwards on the term \(E^N\), output by \(\textsf{M}_N\), to get the correct term for the voter \(\textsf{V}_i\). Finally, the talliers have to decrypt the output of the reverse mixing and send it to the \(\textsf{Adm}\). \(\textsf{Adm}\) sends this back to the voter. More precisely, the tally phase works as follows.

Mixing

First, \(\textsf{M}_1\) generates a random permutation \(\pi ^1\in \mathcal {S}_n\), and for each \(\textsf{V}_i\) generates a random vector \(\textbf{r}^1_i\in \mathbb {Z}_q^3\) and two random values \(r^1_i,s^1_i\in \mathbb {Z}_q\), and runs the shuffle algorithm as \(\textsf{Shuffle}\left( \{(\textbf{e}^\prime _i,\textbf{c}^\prime _i),E_i,pk_i\}_i,\{\textbf{r}^1_i,s^1_i,r^1_i\}_i,\pi ^1\right) \). Denote the output by \(\{(\textbf{e}^1_i,\textbf{c}^1_i),E^1_i,pk^1_i\}_i\). As explained in Sect. 2, \(\textsf{M}_1\) generates \(\textrm{P}^{1}_{\textbf{cce},E,pk}\) and \(\textrm{P}^{1}_{\textbf{c},E,pk}\). Then, she sends \(\{\textbf{c}^1_i,pk^1_i\}_i\) and \(\textrm{P}^{1}_{\textbf{c},E,pk}\) to \(\textsf{PBB}\) and sends \(\{\textbf{e}^1_i,E^1_i\}_i\) and \(\textrm{P}^{1}_{\textbf{cce},E,pk}\) to \(\textsf{SBB}\). The output of \(\textsf{M}_1\) is the input of the next mix server \(\textsf{M}_2\), and so on. Lastly, the final output of the mix-net, generated by \(\textsf{M}_N\), is \(\{(\textbf{e}^N_i,\textbf{c}^N_i),E^N_i,pk^N_i\}_i\). Similar to \(\textsf{M}_1\), \(\textsf{M}_N\) sends \(\{\textbf{c}^N_i,pk^N_i\}_i\) and \(\textrm{P}^{N}_{\textbf{c},E,pk}\) to \(\textsf{PBB}\) and sends \(\{\textbf{e}^N_i,E^N_i\}_i\) and \(\textrm{P}^{N}_{\textbf{cce},E,pk}\) to \(\textsf{SBB}\).

Reverse Mixing. Define \(s^{total}:=s^1_{\pi ^1\pi ^2\cdots \pi ^N(i)}s^2_{\pi ^2\cdots \pi ^N(i)}\cdots s^N_{\pi ^N(i)}\). The output \(E^N_i\) of \(\textsf{M}_N\) is indeed \({\text {Enc}}_{pk_T}(g^{s^{total}})\). The value \(g^{s^{total}}\) does not belong to \(\textsf{V}_i\), but to \(\textsf{V}_{\pi ^1\pi ^2\cdots \pi ^N(i)}\). Hence, the mix servers have to run the mixnet backwards: \(\textsf{M}_N\) uses \((\pi ^N)^{-1}\) to shuffle and fresh randomness \(r^{\prime N}_i\) to re-randomize (without exponentiation) \(E^N_i\). \(\textsf{M}_N\) sends the result \(E^{\prime N}_i\) with \(\Pi ^N:=\)NIZKPoK\(((\pi ^N)^{-1},r^{\prime N}_i)\) to \(\textsf{SBB}\). \(\textsf{M}_{N-1}\) proceeds similarly on \(E^{\prime N}_i\) with the inverse permutation \((\pi ^{N-1})^{-1}\), and so on. The final value \(E^{\prime 1}_i\), output by \(\textsf{M}_1\), is posted on \(\textsf{SBB}\).
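A minimal sketch of one reverse-mixing step, again in the toy group of the Sect. 2 sketches (helper names are illustrative): the mix server applies the inverse of its forward permutation and only re-randomises, without exponentiation, so that after all N reverse steps the term \({\text {Enc}}_{pk_T}(g^{s^{total}})\) sits in the row of the voter it belongs to.

```python
# Sketch of one reverse-mixing step: M_j applies (pi^j)^{-1} and re-randomises the
# E terms without exponentiation. NIZK proofs omitted; illustration only.
import random

p, q, g = 2039, 1019, 4        # toy group as in the sketches of Sect. 2

def reverse_mix_step(pk_el, pi, E_rows):
    """pi: this mix server's forward permutation; E_rows: the current list of E terms."""
    n = len(E_rows)
    pi_inv = [0] * n
    for i, j in enumerate(pi):
        pi_inv[j] = i                                      # build (pi^j)^{-1}
    out = []
    for i in range(n):
        a, b = E_rows[pi_inv[i]]                           # undo the forward permutation
        r = random.randrange(q)
        out.append((a * pow(pk_el, r, p) % p,              # re-randomise only (no exponentiation),
                    b * pow(g, r, p) % p))                 # keeping the plaintext g^{s_total} unchanged
    return out
```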

Decryption of Opening and Verification Terms.

  • Talliers using \(sk_T\) decrypt each ciphertext \(\textbf{cce}^N_i =(\textbf{e}^N_i,\textbf{c}^N_i)\) as \(vote_i:={\text {Dec}}_{sk_T}(\textbf{cce}^N_i)\). Then, they run the \(\textsf{Open}\) algorithm to obtain \(o_i:=\textsf{Open}(sk_T,\textbf{cce}^N_i)\). They publish \(vote_i,o_i\) on \(\textsf{PBB}\) together with a proof of correct decryption \(\Pi _T\).

  • Talliers using their secret keys compute their decryption shares of the TEG encryption \(E^{\prime 1}_i\) and send them to the \(\textsf{Adm}\) with a proof of correct decryption \(\Pi ^*_T\). Then, \(\textsf{Adm}\) sends the decryption result as \(\alpha _i\) to the voter \(\textsf{V}_i\).

4.5 Verification Phase

The voter \(\textsf{V}_i\) raises \(\alpha _i\) to the power of her secret \(x_i\). Then, \(\textsf{V}_i\) can verify her vote by finding the row which contains \(\alpha _i^{x_i}\) as the shuffled public key, i.e. \(pk^{N}_i\).
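This check succeeds because the same exponent is applied to the voter's trapdoor public key and, via \(\textbf{E}_i\), to her verification term. Writing \(s_i\) as shorthand (our notation, not used above) for the product of the exponents \(s^j\) applied to \(\textsf{V}_i\)'s entry during the forward mixing, the \(\textsf{PBB}\) row carrying her re-randomised commitment shows \(pk_i^{s_i}\), while the reverse mixing and decryption deliver \(\alpha _i=g^{s_i}\) to her, so that

$$\begin{aligned} \alpha _i^{x_i}=\left( g^{s_i}\right) ^{x_i}=\left( g^{x_i}\right) ^{s_i}=pk_i^{s_i}. \end{aligned}$$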

5 Analysis

In our e-voting protocol, the vote of \(\textsf{V}_i\) lies in \(\textbf{G}_1\), which means the protocol can handle complex ballots, in contrast to homomorphic tally approaches. However, despite using a mix tally, we require \(\textsf{V}_i\) to run an interactive \(\Sigma \)-protocol with the \(\textsf{Adm}\) to prove that \(v_i\) lies in the correct space of candidates in case the number of candidates does not match the size q of \(\textbf{G}_1\). This provides accountability in the submission phase in case of a dispute between \(\textsf{V}_i\) and \(\textsf{Adm}\).

5.1 Verifiability

The everlasting Hyperion scheme’s verifiability is similar to Selene’s as proven in [10]. Universal verifiability holds due to the soundness of the NIZKs for the mixnet and the DDH assumption for the binding property of the commitment opening. The EUF-CMA signatures from voters prevent ballot stuffing.

The soundness of the individual verifiability follows from the 1-DHI assumption, using the extractability of the exponents \(x_i\) in the submitted public keys and of \(s_i\) in the final terms \(\textsf{pk}_i^{s_i}=g^{x_is_i}\) next to the plaintext votes. The reason is that if \(\textsf{Adm}\) could send an \(\alpha \) to \(\textsf{V}_i\) such that \((\alpha )^{x_i}=\textsf{pk}_j^{s_j}=g^{x_js_j}\) for some \(j\ne i\), it would imply \(g^{1/x_i}=\alpha ^{1/(s_jx_j)}\).

5.2 Ballot Privacy

In this section we prove ballot privacy in a strong adversary model where the adversary can control the submitted vote ballots and the tally procedure, i.e. against a malicious board. We use the definition \(\mathsf {du\text {--}mb\text {--}BPRIV}\) from [10], a version of the \(\mathsf {mb\text {--}BPRIV}\) definition from [5] that allows late verification after the tally, as we have in Hyperion (see details in [10]). This definition is in terms of an experiment \(\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {du\text {--}mb\text {--}BPRIV}, \textsf{Recover}, \beta }(\mathrm {\lambda })\) detailed in Fig. 1, where the adversary wins if it correctly guesses whether it sees a real or a simulated world (\(\beta =0,1\)). The adversary has access to the ballots cast by honest voters using the \(\mathcal {O}\textsf{board}\) oracle, where \(\textsf{publish}\) in our case simply gives both the private and public board input to the adversary.

In our case the adversary will be allowed to control both the public and private bulletin board (\(\textsf{BB}\) in the game), but we assume both will be honestly verified – the internal by the election authorities and the public by voters or any third party. This is captured by \(\textsf{ValidBoard}\) in Line 10 of the game, and will include checking all proofs on the submitted ballots (also if these are interactive).

The voters are divided into honest \(\textsf{H}\) and dishonest \(\textsf{D}\) voters, with honestly generated public credentials \(\textsf{PU}\) (Line 5-6) and corresponding secret credentials \(\textsf{U}\). The adversary will get access to the secret credentials of the dishonest voters (Line 7). This means that the adversary cannot modify the signing keys and the Hyperion keys of the voters when altering the boards, which would also immediately lead to attacks. In practice, this is enforced by the signatures in the protocol. Since the definition assumes honestly generated keys, this is not the strongest possible definition. We plan to improve on this in future work, but for now we use this peer-reviewed definition.

The adversary can also output the result and tally of the election, denoted by \((r^{*},\pi ^{*})\) in Line 13. This entails the full mixnet output and proofs. This is verified using \(\textsf{VerifyTally}\) in Line 14, which will verify all ZKPs. Further, the voters can make individual verification of this tally result using the Hyperion mechanism. In the definition, \(\textsf{H}_{\textsf{check}}\) defines those (honest) voters who have to verify, \(\textsf{Checked}\) those who actually verified, and \(\textsf{Happy}\) will be those who verified successfully. The adversary decides who verifies using the \(\mathcal {O}\textsf{verify}\) oracle, but will be punished if not all voters in \(\textsf{H}_{\textsf{check}}\) verify correctly; specifically, the adversary will then have to make a guess at breaking the privacy without seeing the tally (Lines 12, 19). The data needed for verification is stored in \(\textsf{spsstate}\), which in the definition has to be split into a pre- and post-tally part, see [10]. In the case of our Hyperion variant, the tally can update the post state to give the verification term \(g^{r_i}\) to the voter.

The main point of the malicious board ballot privacy definitions [5, 10] is that they allow certain adversarial behaviour that obviously decreases privacy, but is seen as accepted or unavoidable behaviour, and then we require that there should be no further privacy leaks. In our case, we basically check that if we allow deletion and reordering of votes there will be no further leaks, e.g. no ballot copy attacks. This could, e.g., model an attacker that is able to block some ballots from reaching the authorities, which means less privacy for the remaining voters, but no further privacy attacks will be possible if the definitions are fulfilled. If we can somehow rule out the “allowed” adversarial behaviour, e.g. if the communication lines to the voters and the bulletin board itself are trusted, then we return to the standard ballot privacy definition. The allowed behaviour is defined by the recovery function \(\textsf{Recover}\) in Line 1 of the tally oracle \(\mathcal {O}\textsf{tally}\), see [10] for details. Basically, this will look for unaltered ballots from the vote oracle on the \(\beta =1\) board and change them to the ballots output to the \(\beta =0\) board. Finally, there has to be a simulator \(\textsf{Sim}\) to simulate all the outputs of the tally procedure (mixing and proofs) in order for the adversary not to be able to distinguish the two worlds.

Definition 1

\( \mathcal {V} \) satisfies \( \mathsf {du\text {--}mb\text {--}BPRIV}\) with respect to \( \textsf{Recover} \) if there exists an efficient simulator \( \textsf{Sim}\), such that for any efficient adversary \( \mathcal {A}\), the advantage \(\Bigl |\Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {du\text {--}mb\text {--}BPRIV}, \textsf{Recover}, 0}(\mathrm {\lambda }) = 1\bigr ] - \Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {du\text {--}mb\text {--}BPRIV}, \textsf{Recover}, 1}(\mathrm {\lambda }) = 1\bigr ]\Bigr |\) is negligible.

Theorem 1

Our everlasting version of Hyperion satisfies \(\mathsf {du\text {--}mb\text {--}BPRIV}\) assuming that the proofs are zero-knowledge, correct, and sound, and that the SXDH assumption holds.

We here give a proof sketch.

Proof (sketch)

We will prove the theorem using a number of game hops starting from \(\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {du\text {--}mb\text {--}BPRIV}, \textsf{Recover}, 0}(\mathrm {\lambda })\). Without loss of generality we can assume that the board output by the adversary validates (Line 10) and, further, that the tally verifies correctly; the latter holds since it is always possible for the adversary to create a verifying tally (Line 14), which follows from the perfect correctness of the encryption schemes and zero-knowledge proofs.

In the first hop we simulate all zero-knowledge proofs (which is possible since they are all verified by the argument above). This follows from the zero-knowledge property of the proofs.

Since the decryption proofs are now also simulated and thanks to the soundness of the mixnet (due to verification of the shuffle proofs), along with the perfect correctness of the ElGamal encryption schemes and of the openings of PPATC, the set of votes opened from ballots which are output from the vote oracle will match the set of input votes. For the ballots which have not been changed from the oracles, we can thus stop decrypting the opening and just output the opening output from the oracle and use the vote input to the oracle as the plaintext vote. This game hop does not change the advantage. The ballots output by the adversary are processed as normal in the tally.

In the third hop we use the DDH assumption to change all cryptographic group elements in the mixnet to random elements, except the input elements and the output elements not coming from the vote oracle calls. This amounts to \(2n\cdot N\) uses of DDH, coming from the ElGamal encryption of the \(\alpha \) terms and of the openings (per voter, per mix node); all the remaining terms are uniformly random under the re-randomizations. We do not need to change the Hyperion terms \(pk^i\) in this process since we exponentiate these during the randomization procedure and they are hence perfectly uniform. We can thus preserve the set of correctly verifying voters here and there will be no leaks to the adversary from this. Note also that the voters always verify according to the \(\beta =0\) plaintext votes, hence there is no leak on \(\beta \) from who verifies correctly, see also [10].

In the fourth hop we use that the PPATC scheme is NM-CPA secure (see [7]) and we change the \(\beta =0\) output PPATC ballots to the ones from \(\textsf{BB}_1\). Note that the adversary can track his own ballots using the Hyperion mechanism for the dishonest voters, but this is unproblematic for this game. Note that in the game, the voters always verify according to the \(\beta =0\) votes (Line 4 in the voting oracle) so this is still consistent. The tally result is still consistent with the tally result for \(\beta =0\) since the recovery function undoes exactly this change in the tally oracle for \(\beta =1\).

In the fifth and sixth hop we restore the mixnet using the DDH assumption, and we again do decryption of the vote oracle ballots.

In the final game we move back from simulated proofs to real proofs and we have reached \(\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {du\text {--}mb\text {--}BPRIV}, \textsf{Recover}, 1}(\mathrm {\lambda })\) as desired.    \(\square \)

We note that whereas the definition only has one trusted (tally) authority, our protocol contains zero-knowledge proofs and is only secure if there is at least one honest tallier (or a threshold set of talliers) and one honest mix server.

Another technical note is that in the proof, we assumed that the adversary gets the verification terms from the Hyperion mechanism. Contrary to Selene [10] this cannot simply be given in the secret data of the voter, but we can allow the adversary to have states for all the dishonest voters which get updated at the tally time. Even simpler, we could allow the tally to also output the decryptions of \(E_i\) for all voters, and this would still be secure under the DDH assumption. We leave it for future work to capture privacy leaks from whether the verifications are successful or not, as this is not captured using the current definition [10].

Fig. 1. \(\mathsf {du\text {--}mb\text {--}BPRIV}\) ballot privacy against a dishonest ballot box from [10].

5.3 Everlasting Privacy

For everlasting privacy we do not consider an attacker controlling the board, but rather use an updated version of the BPRIV [2] definition assuming secure and perfectly secret delivery of ballots. This is both because we would not be able to get privacy against an unbounded attacker acting at ballot casting time, since we relied on the (computational) soundness of the zero-knowledge proofs, and because what we want is protection against a future unbounded adversary, e.g. also if some of the crypto primitives should be broken, e.g. using quantum computers. However, this does not mean that the attacker is not present during the election, and hence our \(\mathsf {Everlasting\text {--}BPRIV}\) experiment in Fig. 2, \(\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}BPRIV}, \beta }(\mathrm {\lambda }) \), allows the adversary to control some voters and obtain their credentials (and their verifications as in the last section). Compared to \(\mathsf {du\text {--}mb\text {--}BPRIV}\), the adversary now only gets access to the public parts of the ballots, and the proof \(\Pi '\) in \(\mathcal {O}\textsf{tally}\) Line 2 will only contain the output to \(\textsf{PBB}\). With access to the internal board an unbounded adversary could easily break privacy.

The definition of \(\mathsf {Everlasting\text {--}BPRIV}\) is then as follows.

Definition 2

We say that \( \mathcal {V} \) satisfies \( \mathsf {Everlasting\text {--}BPRIV}\) if there exists an efficient simulator \( \textsf{Sim}\), such that for any unbounded adversary \(\mathcal {A}\), the advantage \(\Bigl |\Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}BPRIV}, 0}(\mathrm {\lambda }) = 1\bigr ] - \Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}BPRIV}, 1}(\mathrm {\lambda }) = 1\bigr ]\Bigr |\) is zero (or bounded by some small probability).

Theorem 2

Our everlasting version of Hyperion satisfies \(\mathsf {Everlasting\text {--}BPRIV}\) assuming the public proofs have perfect zero-knowledge.

Proof (Sketch)

We here only give a sketch of the proof. The proof is shorter than the proof of Theorem 1 since the board and the tally are now honestly created. In the first hop, we simulate the proofs on \(\textsf{PBB}\). Since these are perfect zero-knowledge by assumption, there is no advantage for the adversary in this hop. Second, all re-randomizations of the commitment terms and the Hyperion public keys are uniformly random and can be replaced with random values, and hence the permutation of votes is information-theoretically hidden. Third, we can replace the first commitments with the commitments coming from the \(\beta =1\) voting oracle, still keeping the final commitments and the corresponding \(\beta =0\) tally. Since the commitments are uniformly random, the advantage in this hop is also zero. In particular, the ballots submitted by the cast oracle do not help the adversary (who could in principle decrypt them on its own using its unbounded computational power). We have now arrived at the \(\beta =1\) game.    \(\square \)

Fig. 2. \(\mathsf {Everlasting\text {--}BPRIV}\): Everlasting ballot privacy.

5.4 Everlasting Receipt-Freeness

We now consider receipt-freeness of voting protocols, specifically against computationally unbounded attackers. We write this in terms of the experiment \(\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}RF}, \beta }(\mathrm {\lambda })\), shown in Fig. 3. This experiment is close to \(\mathsf {Everlasting\text {--}BPRIV}\). The difference is that the adversary has to point out two honest voters \(id_1,id_2\) and two vote choices \(v_a,v_b\). The voter \(id_1\) will vote for \(v_a\) for \(\beta =0\) and for \(v_b\) for \(\beta =1\), while \(id_2\) votes oppositely (Lines 8-9). The latter ensures that the adversary cannot directly win the game by just looking at the result. Further, the adversary can ask for a receipt from \(id_1\) using the oracle \(\mathcal {O}\textsf{getReceipt}\). For \(\beta =0\) he gets the secret credential and the states of the voter. We will here assume he holds all possible random coins and data that the voter has access to. He also holds whatever material the authorities give to the voter, in our case the Hyperion term after mixing. For \(\beta =1\) this material is, however, manipulated using a faking algorithm \(\textsf{Fake}\). In our case this fakes the Hyperion term to point to a vote for \(v_b\). The definition is then as follows.

Definition 3

We say that \( \mathcal {V} \) satisfies \( \mathsf {Everlasting\text {--}RF}\) if there exists an efficient simulator \( \textsf{Sim}\) and faking algorithm \(\textsf{Fake}\), such that for any unbounded adversary \(\mathcal {A}\), the advantage \(\Bigl |\Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}RF}, 0}(\mathrm {\lambda }) = 1\bigr ] - \Pr \bigl [\textsf{Exp}_{\mathcal {A}, \mathcal {V}, \textsf{Sim}}^{\mathsf {Everlasting\text {--}RF}, 1}(\mathrm {\lambda }) = 1\bigr ]\Bigr |\) is zero (or bounded by some small probability).

We note that this is a weaker definition than e.g. [4] which allows the adversary against receipt-freeness to cast a ballot on behalf of the voter. However, in our case we have individual plaintext verification which can create new attacks in such a case. The definition from [4] can still be fulfilled by our scheme, if adapted to our setup, especially taking into account the proof of plaintext knowledge at vote casting. We leave the comparison as future work.

Theorem 3

Our everlasting version of Hyperion satisfies \(\mathsf {Everlasting\text {--}RF}\) assuming the public proofs have perfect zero-knowledge.

Proof (Sketch)

The proof follows as for \(\mathsf {Everlasting\text {--}BPRIV}\). The only point is that voter \(id_1\) might pick the tracking number of a dishonest voter, revealing that she is faking and letting the adversary win the experiment. Thus the probability of winning will depend on how many dishonest voters voted for \(v_b\). The advantage of the adversary can thus be bounded by \(d/(d+1)\), where d is the number of dishonest voters who voted for \(v_b\).

Fig. 3. \(\mathsf {Everlasting\text {--}RF}\): Everlasting Receipt-Freeness.