Abstract
The complete convergence results for weighted sums of widely orthant-dependent random variables are obtained. A strong law of large numbers for weighted sums of widely orthant-dependent random variables is also obtained. Our results extend and generalize some results of Chen and Sung (J. Inequal. Appl. 2018:121, 2018), Zhang et al. (J. Math. Inequal. 12:1063–1074, 2018), Chen and Sung (Stat. Probab. Lett. 154:108544, 2019), Lang et al. (Rev. Mat. Complut., 2020, https://doi.org/10.1007/s13163-020-00369-5), and Liang (Stat. Probab. Lett. 48:317–325, 2000).
1 Introduction
Let \(\{X_{n}, n \ge 1\}\) be a sequence of random variables and let \(\{a_{nk}, 1 \le k\le n, n \ge 1\}\) be an array of constants. Since many linear statistics, such as least-squares estimators, nonparametric regression function estimators, and jackknife estimators, take the form of weighted sums \(\sum_{k=1}^{n} a_{nk}X_{k}\), it is important to study the limiting behavior of such weighted sums.
The complete convergence was introduced by Hsu and Robbins [10] as follows. A sequence \(\{X_{n}, n\ge 1\}\) of random variables converges completely to the constant θ if \(\sum_{n=1}^{\infty }P(|X_{n}-\theta |>\varepsilon )<\infty \) for all \(\varepsilon >0\). Note that the complete convergence implies almost sure convergence in view of the Borel–Cantelli lemma. The complete convergence is also used to characterize the rate of convergence.
In this paper, we will focus on the array weights \(\{a_{nk}, 1\le k\le n, n\ge 1\}\) of real numbers satisfying
$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n) $$
(1.1)
for some \(\alpha >0\).
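Assuming, consistent with the proof of Theorem 1.2 below (which takes \(\sum_{k=1}^{n}a_{nk}^{\alpha }\le n\)) and with the related literature, that condition (1.1) reads \(\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n)\), the following sketch (an illustration, not from the paper) checks it numerically for weights of the Corollary 1.2 type, \(a_{nk}=k^{s}/n^{s}\): when \(\alpha s>-1\), an integral comparison gives \(\sum_{k=1}^{n}a_{nk}^{\alpha }\approx n/(\alpha s+1)\), so the ratio to n stays bounded.

```python
def weight_sum_ratio(n: int, s: float, alpha: float) -> float:
    """Return (1/n) * sum_{k=1}^n |a_{nk}|^alpha for the weights a_{nk} = k^s / n^s."""
    return sum(((k / n) ** s) ** alpha for k in range(1, n + 1)) / n

# For alpha*s > -1 the ratio stays bounded; here s = -1/4, alpha = 2,
# and the ratio approaches 1/(alpha*s + 1) = 2 from below.
for n in (100, 1000, 10000):
    print(n, weight_sum_ratio(n, s=-0.25, alpha=2.0))
```

When \(\alpha s\le -1\) the sum grows faster than n, which is why the proofs below choose α so that \(s\alpha >-1\).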
In fact, under condition (1.1), many authors have studied the strong laws of large numbers for weighted sums of independent and identically distributed random variables. For example, Chow [8] proved the Kolmogorov strong law of large numbers for weighted sums, and Cuzick [9] generalized Chow’s [8] result. Bai and Cheng [2] proved the Marcinkiewicz–Zygmund strong law of large numbers for weighted sums, and Chen and Gan [5] generalized the result of Bai and Cheng [2].
A convergence rate in the law of large numbers for weighted sums has also been studied by many authors. Chen [4] established the following complete convergence:
$$ \sum_{n=1}^{\infty }n^{r-2}P \Biggl(\max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} a_{nk}X_{k} \Biggr\vert >\varepsilon n^{1/p} \Biggr)< \infty \quad \text{for all } \varepsilon >0 $$
(1.2)
for weighted sums of identically distributed negatively associated random variables satisfying (1.1), where \(r>1\), \(1\le p<2\), \(1/\alpha +1/\beta =1/p\), and \(\alpha < rp\). Note that if \(a_{nk}=1\) for \(1\le k\le n\) and \(n\ge 1\), then (1.2) reduces to the well-known Baum and Katz [3] strong law. Liang [12] established (1.2) for identically distributed negatively associated random variables with weights of a special type satisfying (1.1) (see also Remark 1.4 below). Chen and Sung [6], Sung [13], and Wu et al. [17] obtained (1.2) for \(\rho ^{*}\)-mixing random variables, Wang and Wang [16] and Wu et al. [19] established (1.2) for extended negatively dependent random variables, Wu et al. [18] established (1.2) for m-asymptotic negatively associated random variables, and Lang et al. [11] obtained (1.2) for widely orthant-dependent (WOD) random variables.
Recently, Chen and Sung [6] obtained a complete convergence result for weighted sums of \(\rho ^{*}\)-mixing random variables.
Theorem A
(Chen and Sung [6])
Let \(r\geq 1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) and let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed \(\rho ^{*}\)-mixing random variables. If \(EX=0\) and
$$ \textstyle\begin{cases} E \vert X \vert ^{rp}< \infty & \textit{if } \alpha >rp, \\ E \vert X \vert ^{rp}\log (1+ \vert X \vert )< \infty & \textit{if } \alpha =rp, \\ E \vert X \vert ^{(r-1)\beta }< \infty & \textit{if } \alpha < rp, \end{cases} $$
(1.3)
then (1.2) holds. Conversely, if (1.2) holds for any array \(\{a_{nk}, 1\le k\le n,n\ge 1\}\) satisfying (1.1) for some \(\alpha >p\), then \(EX=0\), \(E|X|^{rp}<\infty \) and \(E|X|^{(r-1)\beta }<\infty \).
The case \(\alpha >rp\) with \(r>1\) in Theorem A is due to Sung [13]. When \(\alpha =rp\), the moment condition \(E|X|^{(r-1)\beta }\log (1+|X|)<\infty \) is a sufficient condition for (1.2). However, it is not known whether it is also a necessary condition for (1.2).
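For orientation (a verification added here, not in the original), note that at the boundary case \(\alpha =rp\) the two moment exponents \(rp\) and \((r-1)\beta \) coincide:
$$ \frac{1}{\beta }=\frac{1}{p}-\frac{1}{\alpha }=\frac{1}{p}-\frac{1}{rp}= \frac{r-1}{rp} \quad \Longrightarrow \quad (r-1)\beta =rp, $$
so the logarithmic factor is the only strengthening of the moment condition at this boundary.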
In this paper, we extend Theorem A to WOD random variables. The concept of WOD was introduced by Wang et al. [14] as follows.
Definition 1.1
Random variables \(X_{1}, X_{2}, \ldots \) are said to be widely upper orthant dependent (WUOD) if for each \(n\ge 1\), there exists a positive number \(g_{U}(n)\) such that for all real numbers \(x_{i}\), \(1\le i\le n\),
$$ P(X_{1}>x_{1}, X_{2}>x_{2}, \ldots , X_{n}>x_{n})\le g_{U}(n)\prod_{i=1}^{n} P(X_{i}>x_{i}); $$
they are said to be widely lower orthant dependent (WLOD) if for each \(n\ge 1\), there exists a positive number \(g_{L}(n)\) such that, for all real numbers \(x_{i}\), \(1\le i\le n\),
$$ P(X_{1}\le x_{1}, X_{2}\le x_{2}, \ldots , X_{n}\le x_{n})\le g_{L}(n)\prod_{i=1}^{n} P(X_{i}\le x_{i}); $$
and they are said to be WOD if they are both WUOD and WLOD.
In Definition 1.1, \(g_{U}(n)\), \(g_{L}(n)\), \(n\ge 1\), are called dominating coefficients. If for all \(n\ge 1\), \(g_{U}(n)=g_{L}(n)=M\) for some positive constant M, then \(\{X_{n}, n\ge 1\}\) are said to be extended negatively dependent (END). In particular, if \(M=1\), then \(\{X_{n}, n\ge 1\}\) are said to be negatively orthant dependent (NOD) or negatively dependent. Since the class of WOD random variables contains END random variables and NOD random variables as special cases, it is interesting to study the limiting behavior of WOD random variables.
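To make the dominating coefficients concrete, here is a small sketch (an illustration added here, not from the paper). Independent random variables satisfy the WUOD inequality with \(g_{U}(n)=1\), since the joint tail factors exactly. In contrast, the fully dependent pair \((X, X)\), where X is standard exponential, has \(P(X>x, X>y)=e^{-\max (x,y)}\), so the ratio to the product of marginal tails is \(e^{\min (x,y)}\), which is unbounded; hence no finite \(g_{U}(2)\) exists and \((X, X)\) is not WUOD.

```python
import math

def tail_ratio_comonotone(x: float, y: float) -> float:
    """P(X > x, X > y) / (P(X > x) * P(X > y)) for a single standard
    exponential X paired with itself (comonotone dependence)."""
    joint = math.exp(-max(x, y))            # P(X > max(x, y))
    product = math.exp(-x) * math.exp(-y)   # product of marginal tails
    return joint / product                  # equals exp(min(x, y))

def tail_ratio_independent(x: float, y: float) -> float:
    """The same ratio for two independent standard exponentials: always 1,
    since the joint tail factors exactly into the product of marginals."""
    joint = math.exp(-x) * math.exp(-y)
    return joint / (math.exp(-x) * math.exp(-y))

print(tail_ratio_independent(3.0, 5.0))   # 1.0: WUOD holds with g_U(2) = 1
print(tail_ratio_comonotone(3.0, 5.0))    # exp(3) ~ 20.09: grows without bound
```

This contrast is why WOD allows the dominating coefficients \(g_{U}(n)\), \(g_{L}(n)\) to grow with n (e.g., polynomially, as in Theorem 1.1), but not arbitrarily in the arguments \(x_{i}\).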
We now state the main results. Some preliminary lemmas will be presented in Sect. 2. The proofs of the main results will be detailed in Sect. 3.
The first theorem extends the sufficiency of Theorem A with \(r>1\) to WOD random variables.
Theorem 1.1
Let \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a nondecreasing positive function \(g(x)\) on \([0, \infty )\) and a constant \(\tau \ge 0\) such that \(\max \{g_{L}(n), g_{U}(n)\} \le g(n)=O(n^{\tau })\). If (1.3) holds, then
$$ \sum_{n=1}^{\infty }n^{r-2}P \Biggl(\max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} a_{nk}X_{k} \Biggr\vert >\varepsilon n^{1/p} \Biggr)< \infty \quad \text{for all } \varepsilon >0. $$
(1.4)
Remark 1.1
When \(\alpha >rp\), Zhang et al. [20] proved a weaker complete convergence result than (1.4) under a stronger condition than (1.1). Hence Theorem 1.1 improves the result of Zhang et al. [20].
Remark 1.2
When \(p=1\) and \(\alpha >rp\), Lang et al. [11] proved Theorem 1.1 for the weights with \(\max_{1\le k\le n} |a_{nk}|=O(1)\) under stronger conditions on \(g(x)\). Note that, if \(\max_{1\le k\le n} |a_{nk}|=O(1)\), then (1.1) holds for any \(\alpha >0\). Hence Theorem 1.1 generalizes and improves the result of Lang et al. [11].
When \(r=1\), we have the following theorem.
Theorem 1.2
Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) for some \(\alpha >1\). Let \(\{X, X_{n}, n\ge 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a nondecreasing positive function \(g(x)\) on \([0, \infty )\) and a positive constant \(\tau <\min \{1, \alpha -1\}\) such that \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\) and \(g(x)/x^{\tau }\downarrow \). If \(E|X|g(|X|)<\infty \), then
Remark 1.3
If \(a_{nk}=1\) for \(1\le k\le n\) and \(n\ge 1\), or \(\max_{1\le k\le n} |a_{nk}|=O(1)\) for \(n\ge 1\), then (1.1) holds for any \(\alpha >0\). These two cases are treated by Chen and Sung [7] and Lang et al. [11], respectively. Therefore, Theorem 1.2 generalizes the results of Chen and Sung [7] and Lang et al. [11].
The following corollary is a strong law of large numbers for weighted sums of WOD random variables.
Corollary 1.1
Let \(s>-1\) and let \(l(x)>0\) be a slowly varying function. Let \(\{X, X_{n}, n\ge 1\}\) and \(g(x)\) be as in Theorem 1.2. If \(E|X|g(|X|)<\infty \), then
Corollary 1.2
Let \(r \ge 1 \) and \(s>-1/r\), and let \(\{a_{nk}=c_{nk}k^{s}/n^{s}, 1\le k\le n, n\ge 1 \}\) be an array of constants, where \(|c_{nk}|\le B<\infty \) for all \(1\le k\le n\) and \(n\ge 1\). Let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exists a nondecreasing positive function \(g(x)\) on \([0,\infty )\) such that \(\max \{g_{L}(n), g_{U}(n)\} \le g(n)\). When \(r>1\), assume that \(g(n)=O(n^{\tau })\) for some \(\tau \ge 0\) and \(E|X|^{r}<\infty \). When \(r=1\), assume that \(g(x)/x^{\tau }\downarrow \) for some \(0<\tau <\min \{1, |1+1/s|\}\) (set \(\min \{1, |1+1/s|\}=1\) when \(s=0\)) and \(E|X|g(|X|)<\infty \). Then
Remark 1.4
Liang [12] proved Corollary 1.2 when \(r>1\) and \(\{X, X_{n}, n\ge 1\}\) is a sequence of identically distributed negatively associated random variables. Note that the proof of Liang [12] cannot be applied to the case \(r=1\) (the series on line 3 of page 322 of Liang [12] does not converge). Since negatively associated random variables are WOD, Corollary 1.2 complements and extends Liang's [12] result.
Throughout this paper, C always stands for a positive constant which may differ from one place to another. For events A and B, we denote \(I(A, B)=I(A\cap B)\), where \(I(A)\) is the indicator function of the event A.
2 Preliminary lemmas
In this section, we present some lemmas which will be used in the proofs of the main results. The following two lemmas are well known (see, for example, Wang et al. [15], Chen and Sung [7], or Lang et al. [11]). The first one is a Marcinkiewicz–Zygmund–Rosenthal type moment inequality for sums of WOD random variables.
Lemma 2.1
Let \(\{X_{n}, n\ge 1\}\) be a sequence of mean zero WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\), and \(E|X_{n}|^{q}<\infty \) for some \(q>1\).
(i) If \(1< q\le 2\), there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),
$$ E \Biggl\vert \sum_{k=1}^{n} X_{k} \Biggr\vert ^{q}\le C_{q} \Biggl\{ \sum_{k=1}^{n} E \vert X_{k} \vert ^{q} +\bigl(g_{L}(n)+g_{U}(n) \bigr)\sum_{k=1}^{n} E \vert X_{k} \vert ^{q} \Biggr\} . $$

(ii) If \(q>2\), there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),
$$ E \Biggl\vert \sum_{k=1}^{n} X_{k} \Biggr\vert ^{q}\le C_{q} \Biggl\{ \sum_{k=1}^{n} E \vert X_{k} \vert ^{q} +\bigl(g_{L}(n)+g_{U}(n) \bigr) \Biggl( \sum_{k=1}^{n} E \vert X_{k} \vert ^{2} \Biggr)^{q/2} \Biggr\} . $$
The following lemma is a Rosenthal type moment inequality for the maximum of partial sums of WOD random variables.
Lemma 2.2
Let \(\{X_{n}, n\ge 1\}\) be a sequence of mean zero WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\), and \(E|X_{n}|^{q}<\infty \) for some \(q>2\). Further assume that there exists a nondecreasing positive function \(g(x)\) on \([0,\infty )\) such that \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\). Then there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),
The following lemma can be found in Chen and Sung [6].
Lemma 2.3
Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then
The following lemma is similar to Lemma 2.3.
Lemma 2.4
Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If \(u>p\) and \(q>\max \{\alpha , (r-1)\beta \}\), then
Proof
The proof is similar to that of Lemma 2.3 (Lemma 2.2 in Chen and Sung [6]).
Case 1: \(\alpha \leq rp\). We observe by the Markov inequality that, for any \(s>0\),
It is easy to show that
Taking an s such that \(\max \{\alpha , (r-1)\beta \}< s< q\), we have
since \(s>(r-1)\beta \). Then (2.1) holds by (2.2)–(2.4).
Case 2: \(\alpha >rp\). The proof is similar to that of Case 1. However, we use a different truncation for X. We observe by the Markov inequality that, for any \(t>0\),
Taking \(0< t< rp\), we have
It is easy to show that
since \(\alpha >rp\). Then (2.1) holds by (2.5)–(2.7). □
The following lemma can be found in Chen and Sung [6].
Lemma 2.5
Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then, for any \(s>\max \{\alpha , (r-1)\beta \}\),
The following lemma is similar to Lemma 2.5. However, the truncations for X are different, and the term \((\log n)^{s}\) is added in Lemma 2.6.
Lemma 2.6
Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then, for any \(u>p\) and \(s>\max \{\alpha , (r-1)\beta \}\),
Proof
Since \(u>p\), we have, for any \(0< s'< s\),
Now we choose an \(s'\) such that \(s>s'>\max \{\alpha , (r-1)\beta \}\). Then the result follows directly from Lemma 2.5. □
The following lemma can be found in Chen and Sung [6].
Lemma 2.7
Let \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If \(E|X|^{p}<\infty \), then
as \(n\rightarrow \infty \).
The following lemma is similar to Lemma 2.7.
Lemma 2.8
Let \(p\ge 1\) and let X be a random variable with \(E|X|^{q}<\infty \) for some \(q>p\). Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) for some \(\alpha >p\). Then, for any \(u>p\) such that \(1/u > 1/(q-1) \cdot \max \{ 1-1/p, q/\alpha -1/p\}\),
as \(n\rightarrow \infty \).
Proof
From (1.1), we have
It follows that
as \(n\to \infty \), since \(1/u > 1/(q-1) \cdot \max \{ 1-1/p, q/\alpha -1/p\}\). □
3 Proofs of the main results
In this section, we present proofs of the main results.
Proof of Theorem 1.1
Without loss of generality, we may assume that \(X_{n}\ge 0\) for \(n\ge 1\) and \(a_{nk}\ge 0\) for \(1\le k\le n\) and \(n\ge 1\).
Since \(r>1\) and \(\alpha >p\), we can choose a constant u such that \(1/p >1/u>1/(rp-1)\cdot \max \{1-1/p, rp/\alpha -1/p\}\). For \(1\leq k\leq n\) and \(n\geq 1\), we define
Then \(X_{nk}+Y_{nk}+Z_{nk}=a_{nk}X_{k}\) for \(1\le k\le n\) and \(n\ge 1\), and \(\{X_{nk}, 1\le k\le n\}\), \(\{Y_{nk}, 1\le k\le n\}\), \(\{Z_{nk}, 1\le k\le n\}\) are sequences of WOD random variables. Note that
Then by Lemmas 2.3 and 2.7, to prove (1.4), it is enough to prove that
and
Set \(s\in (p, \min \{2, \alpha \})\) if \(\alpha \leq rp\) and \(s\in (p, \min \{2, rp\})\) if \(\alpha >rp\) (note that such an s cannot be chosen when \(r=1\)). Then \(p< s<\min \{2,\alpha \}\), and \(E|X|^{s}<\infty \). Taking \(q>\max \{2,\alpha , (r-1)\beta , 2p(r-1+\tau )/(s-p)\}\), we have by the Markov inequality and Lemma 2.2
Since \(q>2p(r-1+\tau )/(s-p)\) and \(s>p\), we have \(q(1-s/p)/2=-q(s-p)/(2p)<-(r-1+\tau )\), and hence \(r-2+\tau +q(1-s/p)/2<-1\). It follows that
By Lemmas 2.4 and 2.6, we have
Hence (3.1) holds by (3.3)–(3.5).
Now we prove (3.2). By Lemmas 2.7 and 2.8,
as \(n\to \infty \). Hence there exists an integer N such that \(n^{-1/p}\sum_{k=1}^{n} EY_{nk}<\varepsilon /4\) if \(n>N\). It follows that, for \(n>N\),
Then we have by the Markov inequality and Lemma 2.1, for \(n>N\),
As in the proof of (3.4), we obtain
By Lemmas 2.3 and 2.5, we have
Hence (3.2) holds by (3.6)–(3.8). □
Proof of Theorem 1.2
Without loss of generality, we may assume that \(X_{n}\ge 0\) for \(n\ge 1\) and \(a_{nk}\ge 0\) for \(1\le k\le n\) and \(n\ge 1\). For simplicity, we may assume that \(\sum_{k=1}^{n} a_{nk}^{\alpha }\le n\) for \(n\ge 1\). Since \(EX<\infty \), there exists a positive integer N such that \(EXI(X>N)<\varepsilon /4\). For \(n\ge 1\), we define
Then \(\{X_{n}', n\ge 1\}\) is still a sequence of WOD random variables, and \(\{a_{nk}X_{k}', 1\le k\le n\}\) is also a sequence of WOD random variables. To prove (1.5), it is enough to show that
and
Taking \(q>\max \{2, \alpha , \tau /(1-1/\min \{2,\alpha \})\}\), we have by the Markov inequality and Lemma 2.2
since \(q>\tau /(1-1/\min \{2, \alpha \})\).
To prove \(I_{2}<\infty \), we define, for \(1\le k\le n\) and \(n>N\),
Then we can rewrite \(X_{k}''\) as
and so \(X_{k}''=Y_{nk}\) if \(X_{k}\le n\).
Hence we have, for \(n> N\),
Noting that
we have by the Markov inequality, for any \(q>0\),
We now consider two cases: \(1<\alpha \le 2\) and \(\alpha >2\).
Case 1: \(1<\alpha \le 2\). In this case, we take \(q=\alpha \). Since \(\{a_{nk}Y_{nk}, 1 \le k\le n \}\) is a sequence of WOD random variables, we have by Lemma 2.1
Since \(g(x)/x^{\tau }\downarrow \) and \(0<\tau <\alpha -1\), we have
Since \(0< g(x)\uparrow \) and \(g(x)/x^{\tau }\downarrow \), we also have the following relation (see page 7 in Chen and Sung [7]):
Hence \(I_{2}<\infty \) by (3.9)–(3.12).
Case 2: \(\alpha > 2\). In this case, we take \(q=2\). The proof is similar to that of Case 1 and is omitted. □
Proof of Corollary 1.1
Let \(a_{nk}=k^{s}l(k)/(n^{s} l(n))\) for \(1\le k\le n\) and \(n\ge 1\). Since \(s>-1\), we can take \(\alpha >1\) such that \(\alpha s>-1\). Then
By Theorem 1.2, we obtain
which implies
By the Borel–Cantelli lemma,
From the fact that \(\max_{2^{i} \le n<2^{i+1}} l(n)/l(2^{i})\to 1\) as \(i\to \infty \) (see Bai and Su [1]), we also have \(\min_{2^{i} \le n<2^{i+1}} l(n)/l(2^{i})\to 1\) as \(i\to \infty \), since \(1/l(x)\) is also a slowly varying function. We can prove the result from (3.13) by a standard method. □
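The cited property of slowly varying functions is easy to check numerically. The sketch below (an illustration added here, using the assumed concrete choice \(l(x)=\log (x+e)\), which is slowly varying) evaluates \(\max_{2^{i}\le n<2^{i+1}} l(n)/l(2^{i})\); since this l is increasing, the maximum over each dyadic block is attained at the right endpoint.

```python
import math

def l(x: float) -> float:
    """A concrete slowly varying function (illustrative choice)."""
    return math.log(x + math.e)

def block_max_ratio(i: int) -> float:
    """max over 2^i <= n < 2^{i+1} of l(n) / l(2^i); since l is increasing,
    the maximum is attained at the right endpoint n = 2^{i+1} - 1."""
    return l(2 ** (i + 1) - 1) / l(2 ** i)

for i in (5, 10, 20, 40):
    print(i, block_max_ratio(i))   # the ratios decrease toward 1
```

For this l the ratio is roughly \((i+1)/i\), which tends to 1 as \(i\to \infty \), in line with the fact from Bai and Su [1] used in the proof.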
Proof of Corollary 1.2
We prove the result in two cases: \(r>1\) and \(r=1\).
Case 1: \(r>1\). Since \(s>-1/r\), we can choose \(\alpha >r\) such that \(s\alpha >-1\). Then
Hence (1.6) holds by Theorem 1.1 with \(p=1\).
Case 2: \(r=1\). In this case, \(g(x)/x^{\tau }\downarrow \) for some \(0<\tau <\min \{1, |1+1/s|\}\).
If \(s>-1/2\), then \(0<\tau <1\). In this case, we take \(\alpha =2\). Then
Hence (1.6) holds by Theorem 1.2.
If \(-1< s\le -1/2\), then \(0<\tau <-1-1/s\). In this case, we take α such that \(1+\tau <\alpha <-1/s\). Then
Availability of data and materials
Not applicable.
References
Bai, Z., Su, C.: The complete convergence for partial sums of iid random variables. Sci. Sin., Ser. A 28, 1261–1277 (1985)
Bai, Z.D., Cheng, P.E.: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 46, 105–112 (2000)
Baum, L.E., Katz, M.: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108–123 (1965)
Chen, P.: Limiting behavior of weighted sums of negatively associated random variables. Acta Math. Sin. A 25, 489–495 (2005)
Chen, P., Gan, S.: Limiting behavior of weighted sums of i.i.d. random variables. Stat. Probab. Lett. 77, 1589–1599 (2007)
Chen, P., Sung, S.H.: On complete convergence and complete moment convergence for weighted sums of \(\rho ^{*}\)-mixing random variables. J. Inequal. Appl. 2018, 121 (2018)
Chen, P., Sung, S.H.: A Spitzer-type law of large numbers for widely orthant dependent random variables. Stat. Probab. Lett. 154, 108544 (2019)
Chow, Y.S.: Some convergence theorems for independent random variables. Ann. Math. Stat. 37, 1482–1493 (1966)
Cuzick, J.: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 8, 625–641 (1995)
Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31 (1947)
Lang, J., He, T., Cheng, L., Lu, C., Wang, X.: Complete convergence for weighted sums of widely orthant-dependent random variables and its statistical application. Rev. Mat. Complut. (2020). https://doi.org/10.1007/s13163-020-00369-5
Liang, H.Y.: Complete convergence for weighted sums of negatively associated random variables. Stat. Probab. Lett. 48, 317–325 (2000)
Sung, S.H.: Complete convergence for weighted sums of \(\rho ^{*}\)-mixing random variables. Discrete Dyn. Nat. Soc. 2010, Article ID 630608 (2010)
Wang, K., Wang, Y., Gao, Q.: Uniform asymptotics for the finite-time ruin probability of a dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15, 109–124 (2013)
Wang, X., Xu, C., Hu, T.-C., Volodin, A., Hu, S.: On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. Test 23, 607–629 (2014)
Wang, Y., Wang, X.: Complete f-moment convergence for Sung’s type weighted sums and its application to the EV regression models. Stat. Pap. (2019). https://doi.org/10.1007/s00362-019-01112-z
Wu, Y., Wang, X., Hu, S.: Complete moment convergence for weighted sums of weakly dependent random variables and its application in nonparametric regression model. Stat. Probab. Lett. 127, 56–66 (2017)
Wu, Y., Wang, X., Shen, A.: Strong convergence properties for weighted sums of m-asymptotic negatively associated random variables and statistical applications. Stat. Pap. (2020). https://doi.org/10.1007/s00362-020-01179-z
Wu, Y., Zhai, M., Peng, J.Y.: On the complete convergence for weighted sums of extended negatively dependent random variables. J. Math. Inequal. 13, 251–260 (2019)
Zhang, A., Yu, Y., Yang, R., Shen, Y.: On the complete convergence of weighted sums for widely orthant dependent random variables. J. Math. Inequal. 12, 1063–1074 (2018)
Acknowledgements
The authors would like to thank the referees for the helpful comments.
Funding
The research of Pingyan Chen is supported by the National Natural Science Foundation of China (No. 71471075). The research of Soo Hak Sung is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1F1A1A01050160).
Contributions
All authors read and approved the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Chen, P., Sung, S.H. Complete convergence for weighted sums of widely orthant-dependent random variables. J Inequal Appl 2021, 45 (2021). https://doi.org/10.1186/s13660-021-02574-2