Introduction

The normal distribution is arguably the most widely used distribution in real life, and many new distributions have been derived from it using different transformations. The two-parameter Birnbaum–Saunders (BS) distribution is one of these distributions. The BS distribution was introduced in1 as a statistical model for the fatigue life of structures under cyclic stress. In recent years the BS distribution has been used in many fields owing to the theoretical arguments associating it with cumulative damage processes, its properties, and its relationship with the normal distribution. The BS distribution is unimodal and positively skewed, and its applications in engineering have been investigated by many authors; see2,3,4,5. It also has many applications in other fields such as business, the environment, and medicine; see6,7,8,9,10,11,12,13,14,15,16,17,18,19. Moreover, the BS distribution can be obtained as an approximation of the inverse Gaussian (IG) distribution, see20, and can be viewed as an equal mixture of an inverse Gaussian distribution and its reciprocal, see21. Many statistical properties of the BS distribution, such as its probability density function (pdf) and hazard function (hf), have been studied by many authors, because these play an important role in the analysis of lifetime data; see22,23,24,25.

Definition 1:

A random variable \(X\) is said to follow the Birnbaum–Saunders distribution with shape parameter \(\alpha >0\) and scale parameter \(\beta >0\), denoted by \(X\sim BS(\alpha ,\beta )\), if the probability density function (pdf) and the cumulative distribution function (cdf) of \(X\) are defined, respectively, as follows:

$$ f_{BS} \left( {x;\alpha ,\beta } \right) = \frac{1}{{\sqrt {2 \pi } }}\exp \left( {\frac{ - 1}{{2 \alpha^{2} }} \left( {\beta x + \frac{1}{\beta x} - 2} \right)} \right) \frac{{1 + \beta x }}{{2 \alpha \sqrt \beta \, x^{3/2} }},\quad x > 0, $$
(1)
$$ F_{BS} \left( {x;\alpha ,\beta } \right) = \frac{1}{2} \left( {1 + Erf\left( {\frac{\beta x - 1}{{\alpha \sqrt {2 \beta x} }}} \right)} \right). $$
(2)

where, \(Erf\) is the error function.
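As a quick numerical check of Eqs. (1) and (2), the following sketch (illustrative Python; the function names are ours) evaluates them directly and compares against SciPy's `fatiguelife` distribution, which is the Birnbaum–Saunders family with shape \(\alpha\) and, in our parameterization, scale \(1/\beta\):

```python
import numpy as np
from scipy.special import erf
from scipy.stats import fatiguelife

def bs_pdf(x, alpha, beta):
    # Eq. (1): Birnbaum-Saunders pdf with shape alpha and parameter beta
    z = beta * x
    return (np.exp(-(z + 1 / z - 2) / (2 * alpha**2))
            * (1 + z) / (2 * alpha * np.sqrt(2 * np.pi * beta) * x**1.5))

def bs_cdf(x, alpha, beta):
    # Eq. (2): cdf expressed through the error function Erf
    return 0.5 * (1 + erf((beta * x - 1) / (alpha * np.sqrt(2 * beta * x))))

# cross-check against SciPy's fatiguelife (shape alpha, scale 1/beta)
x = np.linspace(0.05, 5.0, 200)
alpha, beta = 0.7, 2.0
assert np.allclose(bs_pdf(x, alpha, beta), fatiguelife.pdf(x, alpha, scale=1/beta))
assert np.allclose(bs_cdf(x, alpha, beta), fatiguelife.cdf(x, alpha, scale=1/beta))
```

The agreement confirms that \(\beta\) here plays the role of a reciprocal scale: \(X\sim BS(\alpha ,\beta )\) corresponds to `fatiguelife(alpha, scale=1/beta)`.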

Figure 1 shows the pdf of \(BS(\alpha ,\beta )\) for different values of the shape parameter \(\alpha \); we can see that the BS distribution is unimodal. Also, Fig. 2 shows the cdf of \(BS(\alpha ,\beta )\) for different values of the shape parameter \(\alpha \). The two figures show how the distribution curve changes when the shape parameter takes different values.

Figure 1
figure 1

The pdf of \(BS(\alpha ,\beta )\) for different values of shape parameter \(\alpha \).

Figure 2
figure 2

The cdf of \(BS(\alpha ,\beta )\) for different values of shape parameter \(\alpha \).

The hazard function of BS distribution is defined as,

$$ \begin{aligned} hf_{BS} \left( {x;\alpha ,\beta } \right) & = \frac{{f_{BS} \left( {x;\alpha ,\beta } \right)}}{{1 - F_{BS} \left( {x;\alpha ,\beta } \right)}}, \\ & = \frac{{{\text{e}}^{{ - \frac{{\left( { - 1 + x\beta } \right)^{2} }}{{2x\alpha^{2} \beta }}}} \left( {1 + x\beta } \right)}}{{\sqrt {2\pi } \alpha \sqrt {x^{3} \beta } {\text{Erfc}}\left[ {\frac{ - 1 + x\beta }{{\sqrt 2 \alpha \sqrt {x\beta } }}} \right]}}. \\ \end{aligned} $$
(3)

Figure 3 shows the hf of \(BS(\alpha ,\beta )\) for different values of the shape parameter \(\alpha .\) The figure shows how the curve changes when the shape parameter takes different values.

Figure 3
figure 3

The hf of \(BS(\alpha ,\beta )\) for different values of shape parameter \(\alpha .\)

The main reason the BS distribution was chosen in1 as a fatigue-failure life distribution is that fatigue data are commonly analyzed with two-parameter distributions such as the Weibull, log-normal, and gamma distributions. Because of the importance of the BS distribution, in this paper we propose a new distribution, called the neutrosophic Birnbaum–Saunders distribution and denoted by \(NBS({\alpha }_{N},{\beta }_{N})\). In the literature of neutrosophic statistics, the earliest work was introduced in26, where it was shown that neutrosophic logic is more efficient than fuzzy logic. Also, Smarandache27 presented neutrosophic statistics and showed that it is more efficient than classical statistics. Neutrosophic statistics is a generalization of classical statistics, and it reduces to classical statistics when there are no imprecise observations in the data. For the efficiency of neutrosophic statistics see also28,29,30. Many authors have introduced neutrosophic probability distributions, such as the Poisson, exponential, binomial, normal, uniform, and Weibull distributions; see27,31,32,33,34. 35 introduced neutrosophic queueing theory in stochastic modeling, and36,37 and38 investigated neutrosophic time series. Recently, many authors have studied neutrosophic random variables; see39. 40 introduced new notions on neutrosophic random variables, and Granados and Sanabria41 studied the independence of neutrosophic random variables. Neutrosophic methods have applications in many fields, such as decision making, machine learning, intelligent disease diagnosis, communication services, pattern recognition, social network analysis, e-learning systems, physics, and sequence spaces; for more details see27,42,43,44,45,46,47,48,49,50,51 and52. This paper is organized as follows. In section “Neutrosophic Birnbaum–Saunders distribution and its statistical properties”, we introduce the new distribution \(NBS({\alpha }_{N},{\beta }_{N})\) and derive its statistical properties. In section “Parameter estimation”, Bayesian and non-Bayesian estimation methods are discussed for estimating the parameters of the new distribution. In section “Simulation and comparative study”, a Monte-Carlo simulation and a comparative study are performed to investigate the behavior of the different estimates of the parameters of our distribution and to compare them. In section “Comparative study using real application”, a real-life data analysis is introduced. Finally, in section “Conclusion”, the conclusions of our study are presented.

Neutrosophic Birnbaum–Saunders distribution \(NBS({\alpha }_{N},{\beta }_{N})\) and its statistical properties

In this section, we introduce the new distribution, called the neutrosophic Birnbaum–Saunders distribution and denoted by \(NBS({\alpha }_{N},{\beta }_{N})\), where \({\alpha }_{N}\) is the shape parameter and \({\beta }_{N}\) is the scale parameter. We use Mathematica 13.1 for all calculations in this section; for more details see53.

Definition 2:

(The neutrosophic probability density function and neutrosophic cumulative distribution function of \(NBS({\alpha }_{N},{\beta }_{N})\))

Let \({I}_{N}\in ({I}_{L},{I}_{U})\) be an indeterminacy interval, where \(N\) denotes the neutrosophic statistical number, and let \({X}_{N}={X}_{L}+{X}_{U} {I}_{N}\) be a random variable following the neutrosophic Birnbaum–Saunders distribution with scale parameter \({\beta }_{N}\) and shape parameter \({\alpha }_{N}\). Then the neutrosophic probability density function (npdf) and the neutrosophic cumulative distribution function (ncdf) are defined, respectively, as follows:

$$ f\left( {x_{N} } \right) = \frac{{{\text{exp}}\left( {\frac{{ - \left( {\beta_{N} x_{N} - 1} \right)^{2} }}{{2 \alpha_{N}^{2} \beta_{N} x_{N} }}} \right)\left( {1 + \beta_{N} x_{N} } \right)}}{{2\sqrt {2\pi } \alpha_{N} \sqrt {\beta_{N} x_{N}^{3} } }}\left( {1 + I_{N} } \right),\quad x_{N} > 0,\alpha_{N} > 0, \beta_{N} > 0. $$
(4)
$$ F\left( {x_{N} } \right) = \frac{1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right),\quad x_{N} > 0,\alpha_{N} > 0, \beta_{N} > 0. $$
(5)

Note that the neutrosophic distribution reduces to the classical distribution when \({I}_{N}=0\). Figures 4 and 5 show the \(npdf\) and \(ncdf\) for different values of \({\alpha }_{N}\) and \({\beta }_{N}.\) The two figures show how the distribution curves change when the parameters take different values; we can also see the effect of the indeterminacy parameter on the curves.

Figure 4
figure 4

The \(npdf\) for different values of \({\alpha }_{N}\) and \({\beta }_{N}\).

Figure 5
figure 5

The \(ncdf\) for different values of \({\alpha }_{N}\) and \({\beta }_{N}\).
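The npdf (4) and ncdf (5) are the classical BS curves inflated by the factor \((1+{I}_{N})\). A minimal sketch (illustrative Python, with SciPy's `fatiguelife` as the classical reference) that reduces to the classical case at \({I}_{N}=0\):

```python
import numpy as np
from scipy.special import erf
from scipy.stats import fatiguelife

def nbs_pdf(x, alpha, beta, I):
    # Eq. (4): neutrosophic BS pdf = classical pdf times (1 + I_N)
    z = beta * x
    return (np.exp(-(z - 1)**2 / (2 * alpha**2 * z)) * (1 + z)
            / (2 * np.sqrt(2 * np.pi) * alpha * np.sqrt(beta * x**3))) * (1 + I)

def nbs_cdf(x, alpha, beta, I):
    # Eq. (5)
    return 0.5 * (1 + erf((beta * x - 1)
                          / (np.sqrt(2) * alpha * np.sqrt(beta * x)))) * (1 + I)

x = np.linspace(0.05, 5.0, 200)
alpha, beta = 1.5, 2.0
# at I_N = 0 the neutrosophic distribution reduces to the classical one
assert np.allclose(nbs_pdf(x, alpha, beta, 0.0), fatiguelife.pdf(x, alpha, scale=1/beta))
assert np.allclose(nbs_cdf(x, alpha, beta, 0.0), fatiguelife.cdf(x, alpha, scale=1/beta))
# the indeterminacy parameter lifts the whole curve
assert np.all(nbs_pdf(x, alpha, beta, 0.2) >= nbs_pdf(x, alpha, beta, 0.0))
```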

Definition 3:

(The neutrosophic reliability function and neutrosophic hazard function of \(NBS({\alpha }_{N},{\beta }_{N})\))

Let \({X}_{N}\) be a random variable following the neutrosophic Birnbaum–Saunders distribution with scale parameter \({\beta }_{N}\) and shape parameter \({\alpha }_{N}\). The neutrosophic reliability function of \({X}_{N}\) is defined as,

$$ \begin{aligned} R\left( {x_{N} } \right) & = 1 - F\left( {x_{N} } \right), \\ & = 1 - \frac{ 1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right),\quad x_{N} > 0, \alpha_{N} > 0,\beta_{N} > 0. \\ \end{aligned} $$
(6)

and the neutrosophic hazard function of \({X}_{N}\) is defined as,

$$ h\left( {x_{N} } \right) = \left( {\left( {{\text{e}}^{{ - \frac{{\left( { - \frac{1}{{\sqrt {\beta_{N} x_{N} } }} + \sqrt {\beta_{N} x_{N} } } \right)^{2} }}{{2\alpha_{N}^{2} }}}} \left( {1 + \beta_{N} x_{N} } \right)} \right)/\left( {\sqrt {2\pi } \alpha_{N} {\text{Erfc}}\left[ {\frac{{ - \frac{1}{{\sqrt {\beta_{N} x_{N} } }} + \sqrt {\beta_{N} x_{N} } }}{{\sqrt 2 \alpha_{N} }}} \right]\sqrt {\beta_{N} x_{N}^{3} } } \right)} \right)\left( {1 + I_{N} } \right). $$
(7)

and is denoted by \(nhf\). Figure 6 shows the \(nhf\) for different values of \({\alpha }_{N}\) and \({\beta }_{N}.\) The figure shows how the curve changes when the parameters take different values; we can also see the effect of the indeterminacy parameter on the curves.

Figure 6
figure 6

The nhf for different values of \({\alpha }_{N}\) and \({\beta }_{N}\).
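The reliability and hazard functions can be checked numerically; an illustrative Python sketch which, at \({I}_{N}=0\), verifies that Eq. (6) is the classical survival function and Eq. (7) equals \(f/(1-F)\):

```python
import numpy as np
from scipy.special import erf, erfc
from scipy.stats import fatiguelife

def nbs_reliability(x, alpha, beta, I):
    # Eq. (6): R = 1 - F
    return 1 - 0.5 * (1 + erf((beta * x - 1)
                              / (np.sqrt(2) * alpha * np.sqrt(beta * x)))) * (1 + I)

def nbs_hazard(x, alpha, beta, I):
    # Eq. (7): classical BS hazard rate inflated by the factor (1 + I_N)
    z = (np.sqrt(beta * x) - 1 / np.sqrt(beta * x)) / (np.sqrt(2) * alpha)
    return (np.exp(-z**2) * (1 + beta * x)
            / (np.sqrt(2 * np.pi) * alpha * erfc(z) * np.sqrt(beta * x**3))) * (1 + I)

x = np.linspace(0.1, 4.0, 50)
alpha, beta = 1.5, 2.0
assert np.allclose(nbs_reliability(x, alpha, beta, 0.0),
                   fatiguelife.sf(x, alpha, scale=1/beta))
assert np.allclose(nbs_hazard(x, alpha, beta, 0.0),
                   fatiguelife.pdf(x, alpha, scale=1/beta)
                   / fatiguelife.sf(x, alpha, scale=1/beta))
```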

Now, we discuss some statistical properties of the new proposed distribution \(NBS({\alpha }_{N},{\beta }_{N})\), such as the mode, median, moments, moment generating function, quantile function, order statistics, and entropy.

I. Mode:

To find the mode of the neutrosophic Birnbaum–Saunders distribution, we solve the following nonlinear equation with respect to \({x}_{N}\),

$$ \frac{{{\text{exp}}\left( {\frac{{ - \left( {\beta_{N} x_{N} - 1} \right)^{2} }}{{2 \alpha_{N}^{2} \beta_{N} x_{N} }}} \right)\left( {1 + I_{N} } \right)x_{N} \left( { - 1 + \beta_{N} x_{N} \left( { - 1 + 3\alpha_{N}^{2} + \beta_{N} x_{N} \left( {1 + \alpha_{N}^{2} + \beta_{N} x_{N} } \right)} \right)} \right)}}{{4\sqrt {2\pi } \alpha_{N}^{3} \left( {\beta_{N} x_{N}^{3} } \right)^{3/2} }} = 0. $$

Then the mode is the unique positive root of the cubic equation

$${\beta }_{N}^{3}{x}_{N}^{3}+{\beta }_{N}^{2}\left(1+{\alpha }_{N}^{2}\right){x}_{N}^{2}+{\beta }_{N}\left(3{\alpha }_{N}^{2}-1\right){x}_{N}-1=0,$$

for \({\alpha }_{N}>0\) and \({\beta }_{N}>0\).

When \({\alpha }_{N}=1.5\) and \({\beta }_{N}=2\), the mode is \({x}_{N}=0.0794\).
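The cubic can be solved numerically; a sketch (illustrative Python) reproducing the quoted example:

```python
import numpy as np

alpha, beta = 1.5, 2.0
# mode condition: beta^3 x^3 + beta^2 (1 + alpha^2) x^2 + beta (3 alpha^2 - 1) x - 1 = 0
coeffs = [beta**3, beta**2 * (1 + alpha**2), beta * (3 * alpha**2 - 1), -1.0]
roots = np.roots(coeffs)
# the cubic has exactly one positive real root (one sign change in the coefficients)
mode = next(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)
assert abs(mode - 0.0794) < 1e-3
```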

II. Median

The median \(m\) of \(NBS({\alpha }_{N},{\beta }_{N})\) is obtained from,

$$P\left[{X}_{N}<m\right]={\int }_{0}^{m}f\left({x}_{N}\right)d{x}_{N}=0.5$$
$$\frac{1}{2}(1+{\text{Erf}}[\frac{-1+{\beta }_{N}\mathrm{ m}}{\sqrt{2}{\alpha }_{N}\sqrt{{\beta }_{N}\mathrm{ m}}}])(1+{I}_{N})=0.5$$
$$ \begin{aligned} m & = \left( {\frac{{0.5\left( { - \beta_{N} \left( { - 2 - 2\alpha_{N}^{2} {\text{InverseErf}}\left[ { - \frac{{I_{N} }}{{1 + I_{N} }}} \right]^{2} } \right) - \sqrt { - 4\beta_{N}^{2} + \beta_{N}^{2} \left( { - 2 - 2\alpha_{N}^{2} {\text{InverseErf}}\left[ { - \frac{{I_{N} }}{{1 + I_{N} }}} \right]^{2} } \right)^{2} } } \right)}}{{\beta_{N}^{2} }},} \right. \\ & \quad \left. {\frac{{0.5\left( { - \beta_{N} \left( { - 2 - 2\alpha_{N}^{2} {\text{InverseErf}}\left[ { - \frac{{I_{N} }}{{1 + I_{N} }}} \right]^{2} } \right) + \sqrt { - 4\beta_{N}^{2} + \beta_{N}^{2} \left( { - 2 - 2\alpha_{N}^{2} {\text{InverseErf}}\left[ { - \frac{{I_{N} }}{{1 + I_{N} }}} \right]^{2} } \right)^{2} } } \right)}}{{\beta_{N}^{2} }}} \right). \\ \end{aligned} $$

When \({\alpha }_{N}=1.5, {\beta }_{N}=2, { I}_{N}=0.2\), the two roots are \(m=(\mathrm{0.3651,0.6846})\); only the smaller root, \(m=0.3651\), satisfies the median equation, since the argument of \(Erf\) must be negative when \({I}_{N}>0\).
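Equivalently, the median can be found by solving \(F(m)=0.5\) numerically; a sketch (illustrative Python, SciPy's `brentq`), using the fact that the admissible root satisfies \({\beta }_{N}m<1\):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

alpha, beta, I = 1.5, 2.0, 0.2

def nbs_cdf(t):
    return 0.5 * (1 + erf((beta * t - 1)
                          / (np.sqrt(2) * alpha * np.sqrt(beta * t)))) * (1 + I)

# the admissible median satisfies beta * m < 1, so search in (0, 1/beta)
m = brentq(lambda t: nbs_cdf(t) - 0.5, 1e-9, 1 / beta)
assert abs(m - 0.3651) < 1e-3
```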

III. r-th moment about the origin

The r-th moment about the origin of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} E\left[ {X_{N}^{r} } \right] & = \mathop \smallint \limits_{0}^{\infty } x_{N}^{r} f\left( {x_{N} } \right)dx_{N} , \\ & = \frac{{{\text{exp}}\left( {1/\alpha_{N}^{2} } \right)\sqrt {\beta_{N} } \left( {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \right)^{{ - \frac{5}{4} - \frac{r}{2}}} \left( {\alpha_{N}^{2} \beta_{N} } \right)^{{ - \frac{1}{4} - \frac{r}{2}}} \left( {\beta_{N} {\text{BesselK}}\left[ { - \frac{1}{2} - r,\frac{{\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right] + \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } {\text{BesselK}}\left[ {\frac{1}{2} - r,\frac{{\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)}}{{\sqrt {2\pi } \alpha_{N}^{3} }}. \\ \end{aligned} $$
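The Bessel-function expression simplifies considerably, since \(K_{-\nu }=K_{\nu }\) and its argument reduces to \(1/{\alpha }_{N}^{2}\); a quadrature cross-check (illustrative Python sketch of that simplification, which is ours):

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

alpha, beta, I = 0.5, 2.0, 0.2

def nbs_moment(r):
    # E[X_N^r]: algebraically simplified form of the BesselK expression above
    z = 1 / alpha**2
    return (np.exp(z) * (kv(r + 0.5, z) + kv(r - 0.5, z)) * (1 + I)
            / (np.sqrt(2 * np.pi) * alpha * beta**r))

def npdf(t):
    u = beta * t
    return (np.exp(-(u - 1)**2 / (2 * alpha**2 * u)) * (1 + u)
            / (2 * np.sqrt(2 * np.pi) * alpha * np.sqrt(beta * t**3))) * (1 + I)

for r in (1, 2, 3):
    num, _ = quad(lambda t: t**r * npdf(t), 0, np.inf)
    assert abs(num - nbs_moment(r)) < 1e-6
```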

IV. Mean

The mean of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} E\left[ {X_{N} } \right] & = \mathop \smallint \limits_{0}^{\infty } x_{N} f\left( {x_{N} } \right)dx_{N} , \\ & = \frac{{\left( {2 + \alpha_{N}^{2} } \right)\left( {1 + I_{N} } \right)}}{{2\beta_{N} }}. \\ \end{aligned} $$

V. Variance

The variance of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} Var\left[ {X_{N} } \right] & = E\left[ {X_{N}^{2} } \right] - \left( {E\left[ {X_{N} } \right]} \right)^{2} , \\ & = \frac{{\left( {2 + 4\alpha_{N}^{2} + 3\alpha_{N}^{4} } \right)\left( {1 + I_{N} } \right)}}{{2\beta_{N}^{2} }} - \frac{{\left( {2 + \alpha_{N}^{2} } \right)^{2} \left( {1 + I_{N} } \right)^{2} }}{{4\beta_{N}^{2} }}. \\ \end{aligned} $$
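The mean and variance can be checked by quadrature (illustrative Python sketch; the closed forms used below are the simplified mean and the difference \(E[X_N^2]-(E[X_N])^2\)):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta, I = 0.5, 2.0, 0.2

def npdf(t):
    u = beta * t
    return (np.exp(-(u - 1)**2 / (2 * alpha**2 * u)) * (1 + u)
            / (2 * np.sqrt(2 * np.pi) * alpha * np.sqrt(beta * t**3))) * (1 + I)

m1, _ = quad(lambda t: t * npdf(t), 0, np.inf)
m2, _ = quad(lambda t: t**2 * npdf(t), 0, np.inf)
mean_closed = (2 + alpha**2) * (1 + I) / (2 * beta)
var_closed = ((2 + 4 * alpha**2 + 3 * alpha**4) * (1 + I) / (2 * beta**2)
              - (2 + alpha**2)**2 * (1 + I)**2 / (4 * beta**2))
assert abs(m1 - mean_closed) < 1e-7
assert abs((m2 - m1**2) - var_closed) < 1e-7
```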

VI. Moment generating function

The moment generating function of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} M_{{X_{N} }} \left( t \right) & = E\left[ {e^{{t X_{N} }} } \right], \\ & = \frac{{{\text{exp}}\left( {\left( {\beta_{N} - \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2t + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)/\alpha_{N}^{2} \beta_{N} } \right)\left( {\beta_{N} + \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2t + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)\left( {1 + I_{N} } \right)}}{{2\alpha_{N} \sqrt {\beta_{N} } \sqrt { - 2t + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}. \\ \end{aligned} $$
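The moment generating function is real-valued for \(t<{\beta }_{N}/(2{\alpha }_{N}^{2})\); a quadrature check (illustrative Python sketch):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta, I = 0.5, 2.0, 0.2

def npdf(t):
    u = beta * t
    return (np.exp(-(u - 1)**2 / (2 * alpha**2 * u)) * (1 + u)
            / (2 * np.sqrt(2 * np.pi) * alpha * np.sqrt(beta * t**3))) * (1 + I)

def nbs_mgf(t):
    # closed form above; requires t < beta / (2 alpha^2)
    s = np.sqrt(beta / alpha**2 - 2 * t)
    root = np.sqrt(alpha**2 * beta)
    return (np.exp((beta - root * s) / (alpha**2 * beta)) * (beta + root * s)
            * (1 + I) / (2 * alpha * np.sqrt(beta) * s))

for t in (-0.5, 0.0, 0.3):
    num, _ = quad(lambda v: np.exp(t * v) * npdf(v), 0, np.inf)
    assert abs(num - nbs_mgf(t)) < 1e-6
```

Note that \(M_{X_N}(0)=1+I_N\), consistent with the npdf integrating to \(1+I_N\).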

VII. Characteristic function

The characteristic function of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} {\Phi }_{{X_{N} }} \left( t \right) & = E\left[ {e^{{i t X_{N} }} } \right], \\ & = \frac{{ {\text{exp}}\left( {\left( {\beta_{N} - \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)/\alpha_{N}^{2} \beta_{N} } \right)\left( {1 + I_{N} } \right)\left( {\beta_{N} + \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)}}{{2\alpha_{N} \sqrt {\beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}. \\ \end{aligned} $$

where \(i=\sqrt{-1}\).

VIII. Cumulant generating function

The cumulant generating function of \(NBS({\alpha }_{N},{\beta }_{N})\) is given by,

$$ \begin{aligned} C_{{X_{N} }} \left( t \right) & = {\text{Log}}[{\Phi }_{{X_{N} }} \left( t \right)], \\ & = Log\left[ {\frac{{{\text{exp}}\left( {\left( {\beta_{N} - \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)/\alpha_{N}^{2} \beta_{N} } \right)\left( {1 + I_{N} } \right)\left( {\beta_{N} + \sqrt {\alpha_{N}^{2} \beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)}}{{2\alpha_{N} \sqrt {\beta_{N} } \sqrt { - 2it + \frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}} \right]. \\ \end{aligned} $$

where \(i=\sqrt{-1}.\)

IX. Quantile function

The quantile function of \(NBS({\alpha }_{N},{\beta }_{N})\) is obtained by solving,

$$F\left({Q}_{{X}_{N}}\left(P\right)\right)=P,$$
$$ Q_{{X_{N} }} \left( P \right) = \left( {\frac{{2 \beta_{N} \left( {1 + k^{2} \alpha_{N}^{2} } \right) - \sqrt {4 \beta_{N}^{2} \left( {1 + k^{2} \alpha_{N}^{2} } \right)^{2} - 4\beta_{N}^{2} } }}{{2\beta_{N}^{2} }}, \frac{{2 \beta_{N} \left( {1 + k^{2} \alpha_{N}^{2} } \right) + \sqrt {4 \beta_{N}^{2} \left( {1 + k^{2} \alpha_{N}^{2} } \right)^{2} - 4\beta_{N}^{2} } }}{{2\beta_{N}^{2} }}} \right) $$

where, \(k^{2} = {\text{InverseErf}}\left[ {\frac{{2 P - 1 - I_{N} }}{{1 + I_{N} }}} \right]^{2}\).
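The two quadratic roots multiply to \(1/{\beta }_{N}^{2}\); the admissible one has the same sign of \({\beta }_{N}x-1\) as \(k\). An illustrative Python sketch:

```python
import numpy as np
from scipy.special import erf, erfinv

alpha, beta, I = 1.5, 2.0, 0.2

def nbs_cdf(t):
    return 0.5 * (1 + erf((beta * t - 1)
                          / (np.sqrt(2) * alpha * np.sqrt(beta * t)))) * (1 + I)

def nbs_quantile(P):
    # k = InverseErf((2P - 1 - I_N) / (1 + I_N)); pick the root whose
    # sign of (beta x - 1) matches the sign of k
    k = erfinv((2 * P - 1 - I) / (1 + I))
    c = 1 + k**2 * alpha**2
    lo = (c - np.sqrt(c**2 - 1)) / beta
    hi = (c + np.sqrt(c**2 - 1)) / beta
    return lo if k < 0 else hi

for P in (0.3, 0.5, 0.7):
    assert abs(nbs_cdf(nbs_quantile(P)) - P) < 1e-10
```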

X. Order statistics

Given any random variables \({X}_{N1},\dots ,{X}_{Nn}\), the order statistics \({X}_{N(1)},\dots ,{X}_{N(n)}\) are also random variables, defined by sorting the values of \({X}_{N1},\dots ,{X}_{Nn}\) in increasing order. For a random sample, the npdf \({f}_{{X}_{N}\left(r\right)}\left({x}_{N}\right)\) and ncdf \({F}_{{X}_{N}\left(r\right)}\left({x}_{N}\right)\) of the r-th order statistic are defined as follows:

$$ \begin{aligned} f_{{X_{N} \left( r \right)}} \left( {x_{N} } \right) & = \frac{n!}{{\left( {r - 1} \right)!\left( {n - r} \right)!}} f_{{X_{N} }} \left( {x_{N} } \right) \left[ {F_{{X_{N} }} \left( {x_{N} } \right)} \right]^{r - 1} \left[ {1 - F_{{X_{N} }} \left( {x_{N} } \right)} \right]^{n - r} , \\ & = \frac{n!}{{\left( {r - 1} \right)!\left( {n - r} \right)!}} \left( {\frac{{{\text{exp}}\left( {\frac{{ - \left( {\beta_{N} x_{N} - 1} \right)^{2} }}{{2 \alpha_{N}^{2} \beta_{N} x_{N} }}} \right)\left( {1 + \beta_{N} x_{N} } \right)}}{{2\sqrt {2\pi } \alpha_{N} \sqrt {\beta_{N} x_{N}^{3} } }}\left( {1 + I_{N} } \right)} \right) \\ & \quad \left( {\frac{1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)^{r - 1} \left( {1 - \frac{1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)^{n - r} . \\ \end{aligned} $$
$$ \begin{aligned} F_{{X_{N} \left( r \right)}} \left( {x_{N} } \right) & = \mathop \sum \limits_{j = r}^{n} \left( {\begin{array}{*{20}c} n \\ j \\ \end{array} } \right) \left[ {F_{{X_{N} }} \left( {x_{N} } \right)} \right]^{j} \left[ {1 - F_{{X_{N} }} \left( {x_{N} } \right)} \right]^{n - j} , \\ & = \mathop \sum \limits_{j = r}^{n} \left( {\begin{array}{*{20}c} n \\ j \\ \end{array} } \right)\left( {\frac{1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)^{j} \\ & \quad \left( {1 - \frac{1}{2}\left( {1 + {\text{Erf}}\left[ {\frac{{ - 1 + \beta_{N} x_{N} }}{{\sqrt 2 \alpha_{N} \sqrt {\beta_{N} x_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)^{n - j} . \\ \end{aligned} $$
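A Monte-Carlo check of the order-statistic cdf in the classical case \({I}_{N}=0\) (illustrative Python sketch; sample size, rank, and evaluation point are arbitrary choices):

```python
import numpy as np
from math import comb
from scipy.stats import fatiguelife

# for a sample of size n: F_(r)(x) = sum_{j=r}^{n} C(n,j) F(x)^j (1 - F(x))^(n-j)
rng = np.random.default_rng(42)
alpha, beta = 0.8, 2.0
n, r, x0 = 5, 2, 0.4
samples = fatiguelife.rvs(alpha, scale=1/beta, size=(100_000, n), random_state=rng)
r_th = np.sort(samples, axis=1)[:, r - 1]          # r-th smallest in each row
F = fatiguelife.cdf(x0, alpha, scale=1/beta)
theory = sum(comb(n, j) * F**j * (1 - F)**(n - j) for j in range(r, n + 1))
empirical = np.mean(r_th <= x0)
assert abs(empirical - theory) < 0.01
```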

XI. Entropy

Entropy is considered one of the most popular measures of uncertainty. 54 introduced the differential entropy \(H(X)\) as follows:

$$H\left(X\right)=-{\int }_{-\infty }^{\infty }f\left(x\right) Log\left(f\left(x\right)\right)dx.$$

Rényi55 introduced the Rényi entropy, which has its source in information theory. He defined the Rényi entropy as follows:

$${I}_{\delta }\left(X\right)=\frac{1}{1-\delta } Log\left[{\int }_{-\infty }^{\infty }{\left(f\left(x\right)\right)}^{\delta } dx\right].$$

where \(\delta \ne 1\) and \(\delta >0.\)

Tsallis56 introduced the q-entropy, which comes from statistical physics. He defined the q-entropy as follows:

$$ H_{q} \left( X \right) = \frac{1}{1 - q}{ }\left( {1 - \mathop \smallint \limits_{ - \infty }^{\infty } \left( {f\left( x \right)} \right)^{q} { }dx} \right). $$

where \(q\ne 1\) and \(q>0.\) Now, the three entropies of \(NBS({\alpha }_{N},{\beta }_{N})\) are given as follows:

$$ \begin{aligned} H\left( {X_{N} } \right) & = - Log\left[ k \right] - \frac{3}{2} \left( {\left( {\left( {{\text{exp}}\left( {\left( {\beta_{N} - \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } } \right)/\alpha_{N}^{2} \beta_{N} } \right)\left( {1 + I_{N} } \right)\sqrt {\beta_{N} } } \right.} \right.} \right. \\ & \quad \left( {\sqrt {2\pi } \alpha_{N}^{2} \sqrt {\frac{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}{{\alpha_{N}^{4} \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}} \left( {\beta_{N} + \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } } \right) \left( {{\text{Log}}\left[ {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \right] + {\text{Log}}\left[ {\alpha_{N}^{2} \beta_{N} } \right]} \right)} \right. \\ & \quad + \left( {4 {\text{exp}}\left( {\sqrt {\alpha_{N}^{2} \beta_{N} } /\alpha_{N}^{4} \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)} \right)\beta_{N} {\text{BesselK}}^{{\left( {1,0} \right)}} \left[ { - \frac{1}{2},\frac{{\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right] + {\text{exp}}\left( {\sqrt {\alpha_{N}^{2} \beta_{N} } /\alpha_{N}^{4} \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} } \right)\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \\ & \quad \left. {\left. {\left. {\left. 
{\sqrt {\alpha_{N}^{2} \beta_{N} } {\text{BesselK}}^{{\left( {1,0} \right)}} \left[ {\frac{1}{2},\frac{{\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right]} \right)} \right)/\left( {4\sqrt {2\pi } \alpha_{N}^{3} \left( {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \right)^{5/4} \left( {\alpha_{N}^{2} \beta_{N} } \right)^{1/4} } \right)} \right)} \right) \\ & \quad - \frac{{\left( {1 + I_{N} } \right) {\text{exp}}\left( {\left( {\beta_{N} - \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } } \right)/\alpha_{N}^{2} \beta_{N} } \right)\left( {\frac{{\beta_{N} }}{{\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }} + \sqrt {\alpha_{N}^{2} \beta_{N} } } \right)}}{{4\alpha_{N} \sqrt {\beta_{N} } }} - \mathop \smallint \limits_{0}^{\infty } Log\left[ {1 + \beta_{N} x_{N} } \right]f\left( {x_{N} } \right)dx_{N} . \\ \end{aligned} $$

\(\begin{aligned} I_{\delta } \left( {X_{N} } \right) & = \frac{1}{1 - \delta }Log\left[ {k^{\delta } \mathop \sum \limits_{c = 0}^{\delta } \left( {\begin{array}{*{20}c} \delta \\ c \\ \end{array} } \right) \frac{{\beta_{N}^{c} }}{{\sqrt {2\pi } \alpha_{N} \sqrt {\beta_{N} } }}\left( {{\text{exp}}\left( {2/\alpha_{N}^{2} } \right)\left( {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \right)^{{\frac{1}{4}\left( { - 1 - 2c + 3\delta } \right)}} \left( {\alpha_{N}^{2} \beta_{N} } \right)^{{\frac{1}{4}\left( { - 1 - 2c + 3\delta } \right)}} } \right.} \right. \\ & \quad \left. {\left. {\left( {\beta_{N} {\text{BesselK}}\left[ {\frac{1}{2}\left( { - 1 - 2c + 3\delta } \right),\frac{{2\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right] + \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } {\text{BesselK}}\left[ {\frac{1}{2}\left( {1 - 2c + 3\delta } \right),\frac{{2\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)} \right] \\ \end{aligned}\)

where \(k=\frac{1+{I}_{N}}{2 {\alpha }_{N} \sqrt{2 {\beta }_{N} \pi }}.\)

$$ \begin{aligned} H_{q} \left( {X_{N} } \right) & = \frac{1}{1 - q} \left( {1 - \left( {k^{q} \mathop \sum \limits_{c = 0}^{q} \left( {\begin{array}{*{20}c} q \\ c \\ \end{array} } \right) \frac{{\beta_{N}^{c} }}{{\sqrt {2\pi } \alpha_{N} \sqrt {\beta_{N} } }}} \right.} \right. \\ & \quad \left( {{\text{exp}}\left( {2/\alpha_{N}^{2} } \right)\left( {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \right)^{{\frac{1}{4}\left( { - 1 - 2c + 3q} \right)}} \left( {\beta_{N} {\text{BesselK}}\left[ {\frac{1}{2}\left( { - 1 - 2c + 3q} \right),\frac{{2\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right]} \right.} \right. \\ & \quad \left. {\left. {\left. { + \sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} \sqrt {\alpha_{N}^{2} \beta_{N} } {\text{BesselK}}\left[ {\frac{1}{2}\left( {1 - 2c + 3q} \right),\frac{{2\sqrt {\frac{{\beta_{N} }}{{\alpha_{N}^{2} }}} }}{{\sqrt {\alpha_{N}^{2} \beta_{N} } }}} \right]} \right)\left( {1 + I_{N} } \right)} \right)} \right). \\ \end{aligned} $$

where, \(k=\frac{1+{I}_{N}}{2 {\alpha }_{N} \sqrt{2 {\beta }_{N} \pi }}.\)
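As a numerical sanity check of the differential entropy in the classical case \({I}_{N}=0\), one can integrate \(-f\log f\) directly and compare with SciPy's (also numeric) entropy for `fatiguelife` (illustrative Python sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import fatiguelife

alpha, beta = 0.5, 2.0
pdf = lambda t: fatiguelife.pdf(t, alpha, scale=1/beta)

def integrand(t):
    # -f log f, with a guard for the region near 0 where the pdf underflows
    p = pdf(t)
    return -p * np.log(p) if p > 0 else 0.0

H, _ = quad(integrand, 0, np.inf)
assert abs(H - fatiguelife.entropy(alpha, scale=1/beta)) < 1e-4
```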

Parameter estimation

In this section, maximum likelihood and Bayesian estimation methods are used to estimate the parameters of the new proposed neutrosophic Birnbaum–Saunders distribution.

Maximum likelihood estimation method

Let \({X}_{N1},\dots ,{X}_{Nn}\) be a random sample from \(NBS({\alpha }_{N},{\beta }_{N})\). Then the likelihood function is given by,

\(\begin{aligned} L\left( {x_{Ni} ;\alpha_{N} ,\beta_{N} } \right) & = \mathop \prod \limits_{i = 1}^{n} f\left( {x_{Ni} ;\alpha_{N} ,\beta_{N} } \right), \\ & = \mathop \prod \limits_{i = 1}^{n} \frac{{\left( {1 + I_{N} } \right)}}{{2 \alpha_{N} \sqrt {2 \pi \beta_{N} x_{Ni}^{3} } }} \left( {1 + \beta_{N} x_{Ni} } \right)\exp \left( {\frac{{ - \left( {\beta_{N} x_{Ni} - 1} \right)^{2} }}{{2 \alpha_{N}^{2} \beta_{N} x_{Ni} }}} \right), \\ \end{aligned}\)

and its corresponding log-likelihood function is given by,

$$ \begin{aligned} logL\left( {x_{Ni} ;\alpha_{N} ,\beta_{N} } \right) & = n\left( {\log \left( {1 + I_{N} } \right) - \log \left( 2 \right) - \frac{1}{2}\log \left( {2\pi } \right)} \right) - n\left( {\log \left( {\alpha_{N} } \right) + \frac{1}{2}\log \left( {\beta_{N} } \right)} \right) \\ & \quad - \frac{3}{2} \mathop \sum \limits_{i = 1}^{n} \log \left( {x_{Ni} } \right) + \mathop \sum \limits_{i = 1}^{n} \log \left( {1 + \beta_{N} x_{Ni} } \right) - \frac{1}{{2 \alpha_{N}^{2} \beta_{N} }} \mathop \sum \limits_{i = 1}^{n} \frac{{\left( {\beta_{N} x_{Ni} - 1} \right)^{2} }}{{x_{Ni} }}. \\ \end{aligned} $$

Now we differentiate the log-likelihood function with respect to \({\alpha }_{N}\) and \({\beta }_{N}\) and set the derivatives to zero to obtain the maximum likelihood estimates of \({\alpha }_{N}\) and \({\beta }_{N}\), denoted by \({\widehat{\alpha }}_{N}\) and \({\widehat{\beta }}_{N}\):

$$\frac{\partial logL\left({x}_{Ni};{\alpha }_{N},{\beta }_{N}\right)}{\partial {\alpha }_{N}}=-\frac{n}{{\alpha }_{N}}+\frac{1}{{{\alpha }_{N}}^{3}{\beta }_{N}}\sum_{i=1}^{n}\frac{{\left(-1+{\beta }_{N}{x}_{Ni}\right)}^{2}}{{x}_{Ni}},$$
$$ \frac{{\partial logL\left( {x_{Ni} ;\alpha_{N} ,\beta_{N} } \right)}}{{\partial \beta_{N} }} = - \frac{n}{{2\beta_{N} }} + \frac{{\mathop \sum \nolimits_{i = 1}^{n} \frac{{\left( { - 1 + \beta_{N} x_{Ni} } \right)^{2} }}{{x_{Ni} }}}}{{2\alpha_{N}^{2} \beta_{N}^{2} }} + \mathop \sum \limits_{i = 1}^{n} \frac{{x_{Ni} }}{{1 + \beta_{N} x_{Ni} }} - \frac{{\mathop \sum \nolimits_{i = 1}^{n} \frac{{ - 2x_{Ni} + 2\beta_{N} x_{Ni}^{2} }}{{x_{Ni} }}}}{{2\alpha_{N}^{2} \beta_{N} }}. $$

The normal equations cannot be solved analytically, so closed forms for \({\widehat{\alpha }}_{N}\) and \({\widehat{\beta }}_{N}\) are not available and a numerical method is used to solve them. Since the maximum likelihood estimates of the unknown parameters cannot be obtained in closed form, the exact distributions of these estimators cannot be derived, so we derive asymptotic confidence intervals instead. For large samples, with \({\alpha }_{N}>0\) and \({\beta }_{N}>0\), the estimators \({\widehat{\alpha }}_{N}\) and \({\widehat{\beta }}_{N}\) are approximately bivariate normally distributed with means \({\alpha }_{N}\) and \({\beta }_{N}\) and covariance matrix \({I}_{n}^{-1}\), where \({I}_{n}^{-1}\) is the inverse of the Fisher information matrix,

$${I}_{n}^{-1}=\left(\begin{array}{cc}Var({\widehat{\alpha }}_{N})& Cov({\widehat{\alpha }}_{N},{\widehat{\beta }}_{N})\\ Cov({\widehat{\alpha }}_{N},{\widehat{\beta }}_{N})& Var({\widehat{\beta }}_{N})\end{array}\right),$$

For more details see57. Now the \(100\left(1-\gamma \right)\%\) confidence intervals for the parameters \({\alpha }_{N}\) and \({\beta }_{N}\) are \({\widehat{\alpha }}_{N}\pm {z}_{\gamma /2}\sqrt{Var({\widehat{\alpha }}_{N})}\) and \({\widehat{\beta }}_{N}\pm {z}_{\gamma /2}\sqrt{Var({\widehat{\beta }}_{N})}\), respectively.
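Since the normal equations have no closed-form solution, the log-likelihood can be maximized numerically; an illustrative Python sketch (SciPy, data simulated at \({I}_{N}=0\); the sample size and true values are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import fatiguelife

rng = np.random.default_rng(7)
alpha_true, beta_true = 0.5, 2.0
x = fatiguelife.rvs(alpha_true, scale=1/beta_true, size=2000, random_state=rng)

def neg_loglik(theta):
    alpha, beta = theta
    if alpha <= 0 or beta <= 0:
        return np.inf
    # negative log-likelihood (terms constant in the parameters dropped)
    return -(np.log(1 + beta * x).sum()
             - ((beta * x - 1)**2 / x).sum() / (2 * alpha**2 * beta)
             - x.size * (np.log(alpha) + 0.5 * np.log(beta)))

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = res.x
```

With this sample size the estimates land close to the true values, illustrating the consistency of the MLEs.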

Bayesian estimation method

Bayes estimation using the MCMC technique is applied to estimate the unknown parameters of the new proposed distribution \(NBS({\alpha }_{N},{\beta }_{N})\). For more details of the MCMC technique using the Gibbs sampling procedure see58,59, and for more details of the MCMC technique using the Metropolis–Hastings (MH) method see60 and61. These methods generate samples from the posterior density function, which are used to compute point Bayes estimates for the unknown parameters and to construct credible intervals. To this end, we assume independent gamma prior distributions for the unknown parameters of the new proposed distribution \(NBS({\alpha }_{N},{\beta }_{N})\), as follows:

$$ \pi_{1} \left( {\alpha_{N} } \right) = \frac{{b^{a} }}{{{\Gamma }\left( a \right)}} \alpha_{N}^{a - 1} {\text{exp}}\left( { - b \alpha_{N} } \right)\quad a > 0, b > 0, \alpha_{N} > 0, $$
$$ \pi_{2} \left( {\beta_{N} } \right) = \frac{{d^{c} }}{{{\Gamma }\left( c \right)}}{ }\beta_{N}^{c - 1} {\text{ exp}}\left( { - d{ }\beta_{N} } \right)\quad c > 0,{ }d > 0,{ }\beta_{N} > 0. $$

In this case the joint prior distribution of \({\alpha }_{N}\) and \({\beta }_{N}\) is given by,

$$\pi \left({\alpha }_{N},{\beta }_{N}\right)={\pi }_{1}\left({\alpha }_{N}\right){\pi }_{2}\left({\beta }_{N}\right),$$

and the joint posterior density is given by,

$${\pi }^{*}\left({\alpha }_{N},{\beta }_{N}\left|{X}_{N}\right.\right)=\frac{L\left({\alpha }_{N},{\beta }_{N}\left|{X}_{N}\right.\right) \pi \left({\alpha }_{N},{\beta }_{N}\right)}{{\int }_{0}^{\infty }{\int }_{0}^{\infty }L\left({\alpha }_{N},{\beta }_{N}\left|{X}_{N}\right.\right) \pi \left({\alpha }_{N},{\beta }_{N}\right)d{\alpha }_{N}d{\beta }_{N}}.$$

Under the squared-error loss function, the Bayes estimates of \({\alpha }_{N}\) and \({\beta }_{N}\) are given by,

$${\alpha }_{N}^{B}={\int }_{0}^{\infty }{\int }_{0}^{\infty }{\alpha }_{N} {\pi }^{*}\left({\alpha }_{N},{\beta }_{N}\left|{X}_{N}\right.\right) d{\beta }_{N}d{\alpha }_{N},$$
$${\beta }_{N}^{B}={\int }_{0}^{\infty }{\int }_{0}^{\infty }{\beta }_{N} {\pi }^{*}\left({\alpha }_{N},{\beta }_{N}\left|{X}_{N}\right.\right) d{\alpha }_{N}d{\beta }_{N}.$$

These estimates cannot be computed analytically, so we use the MCMC method with the MH technique to obtain \({\alpha }_{N}^{B}\) and \({\beta }_{N}^{B}\) as follows:

  i. Choose initial values \({\alpha }_{N}^{(0)}\) and \({\beta }_{N}^{(0)}.\)

  ii. Denote the values of \({\alpha }_{N}\) and \({\beta }_{N}\) at the k-th step by \({\alpha }_{N}^{(k)}\) and \({\beta }_{N}^{(k)}.\)

  iii. Generate \({\alpha }_{N}^{(k)}\) from \({\pi }^{*}\left({\alpha }_{N}\left|{\beta }_{N}^{(k-1)},{X}_{N}\right.\right)\) and \({\beta }_{N}^{(k)}\) from \({\pi }^{*}\left({\beta }_{N}\left|{\alpha }_{N}^{(k)},{X}_{N}\right.\right)\).

  iv. Repeat step iii \(N\) times.

  v. Compute the Bayes estimates of \({\alpha }_{N}\) and \({\beta }_{N}\) as follows:

    $$ \alpha_{N}^{B} = \frac{1}{N - B} \mathop \sum \limits_{k = B + 1}^{N} \alpha_{N}^{\left( k \right)} ,\beta_{N}^{B} = \frac{1}{N - B} \mathop \sum \limits_{k = B + 1}^{N} \beta_{N}^{\left( k \right)} , $$

    where \(B\) is the burn-in period.

  vi. Compute \(100\left(1-\gamma \right)\%\) HPD credible intervals for \({\alpha }_{N}\) and \({\beta }_{N}\) as follows:

    $$ \left( {\alpha_{{N\left( {\frac{\gamma }{2}} \right)}} ,\alpha_{{N\left( {1 - \frac{\gamma }{2}} \right)}} } \right),\left( {\beta_{{N\left( {\frac{\gamma }{2}} \right)}} ,\beta_{{N\left( {1 - \frac{\gamma }{2}} \right)}} } \right). $$

Note that we use RStudio software to obtain the results in this section, using several packages such as nlme, MASS, coda, mcmc, distr, VGAM and Rcpp.
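The paper's computations are done in R, but the MH scheme above can be sketched in plain Python. Everything concrete below is an illustrative assumption rather than the paper's setup: the gamma-prior hyperparameters, the true parameter values used to simulate data, the random-walk step size and the chain length are all made up, and a joint random-walk proposal stands in for whatever proposal the authors used.

```python
import math
import random

random.seed(1)

# Assumed gamma-prior hyperparameters (a, b) for alpha and (c, d) for beta.
A_HYP, B_HYP, C_HYP, D_HYP = 2.0, 1.0, 2.0, 1.0

def bs_sample(alpha, beta, n):
    """Draw n values from BS(alpha, beta) via its normal representation."""
    xs = []
    for _ in range(n):
        t = alpha * random.gauss(0.0, 1.0) / 2.0
        xs.append(beta * (t + math.sqrt(t * t + 1.0)) ** 2)
    return xs

def log_post(alpha, beta, x):
    """Log joint posterior (up to an additive constant):
    BS log-likelihood plus the two gamma log-priors."""
    if alpha <= 0.0 or beta <= 0.0:
        return -math.inf
    lp = (A_HYP - 1.0) * math.log(alpha) - B_HYP * alpha
    lp += (C_HYP - 1.0) * math.log(beta) - D_HYP * beta
    for xi in x:
        lp += (math.log(xi + beta) - 1.5 * math.log(xi)
               - math.log(alpha) - 0.5 * math.log(beta)
               - (xi / beta + beta / xi - 2.0) / (2.0 * alpha * alpha))
    return lp

def mh_chain(x, n_iter=4000, burn=1000, step=0.05):
    a_cur, b_cur = 1.0, 1.0                      # step i: initial values
    lp_cur = log_post(a_cur, b_cur, x)
    a_draws, b_draws = [], []
    for _ in range(n_iter):                      # steps ii-iv: propose, accept/reject
        a_prop = a_cur + random.gauss(0.0, step)
        b_prop = b_cur + random.gauss(0.0, step)
        lp_prop = log_post(a_prop, b_prop, x)
        if math.log(random.random()) < lp_prop - lp_cur:
            a_cur, b_cur, lp_cur = a_prop, b_prop, lp_prop
        a_draws.append(a_cur)
        b_draws.append(b_cur)
    a_post, b_post = a_draws[burn:], b_draws[burn:]
    a_hat = sum(a_post) / len(a_post)            # step v: posterior means after burn-in
    b_hat = sum(b_post) / len(b_post)
    a_srt = sorted(a_post)                       # step vi: 95% equal-tail interval for alpha
    ci_a = (a_srt[int(0.025 * len(a_srt))], a_srt[int(0.975 * len(a_srt))])
    return a_hat, b_hat, ci_a

data = bs_sample(1.0, 2.0, 200)
a_hat, b_hat, ci_a = mh_chain(data)
```

In practice one would also check convergence diagnostics (trace plots, effective sample size), which the coda package handles on the R side.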

Simulation and comparative study

In this section, we perform a Monte Carlo simulation study to investigate the behavior of two different estimators of the parameters of the new proposed distribution \(NBS({\alpha }_{N},{\beta }_{N})\), the maximum likelihood estimators and the Bayesian estimators, for different sample sizes, different starting values of \({\alpha }_{N}\) and \({\beta }_{N}\) and different indeterminacy measures. We also introduce a comparative study with three aims: to compare the maximum likelihood estimates (MLEs) with the Bayesian estimates and select the better estimator of the parameters of \(NBS({\alpha }_{N},{\beta }_{N})\); to compare the classical and neutrosophic versions of the BS distribution and show the flexibility of the neutrosophic version; and to compare the Bayesian estimates under different prior distributions. For the comparative study, we use the bias and mean square error (MSE) to compare the different point estimators, the Akaike information criterion (AIC) to compare the maximum likelihood fits of the classical and neutrosophic versions, and the asymptotic confidence length (ACL) to compare the different interval estimators. We perform these studies according to the following steps:

  i. Choose different initial values \(\left({\alpha }_{N},{\beta }_{N}\right)=\left(1.25, 3\right), \left(0.5, 3\right), \left(1, 3\right).\)

  ii. For the Bayesian estimators, choose different values of the gamma prior hyperparameters \(\left(a,b\right)=\left(1, 2\right), \left(1, 1\right), \left(2, 1\right).\)

  iii. Use two indeterminacy measures \({I}_{N}=\left(0.2, 0.5\right), \left(0.6, 0.8\right).\) Note that when \({I}_{N}=0\), the classical version of the Birnbaum–Saunders distribution is obtained.

  iv. Generate samples of different sizes \(n=50, 100, 200, 500.\)

  v. Find the point estimators using the maximum likelihood and Bayesian estimation methods.

  vi. Calculate the asymptotic confidence intervals and the credible intervals.

  vii. Perform the comparative study: calculate the bias, MSE and AIC to compare the MLEs for the classical and neutrosophic versions of the BS distribution; calculate the bias and MSE to compare the MLEs with the Bayesian estimates for both versions; and use the ACL to compare the interval estimators.
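The bias/MSE part of these steps can be sketched in a few lines of Python. To keep the sketch closed-form and self-contained, it uses the classical modified moment estimators of \((\alpha ,\beta )\) (built from the sample's arithmetic and harmonic means) as a stand-in for the paper's MLE and Bayes procedures, and the parameter values and replication count are illustrative, not the paper's:

```python
import math
import random

random.seed(7)

def bs_sample(alpha, beta, n):
    """Draw n values from BS(alpha, beta) via its normal representation."""
    xs = []
    for _ in range(n):
        t = alpha * random.gauss(0.0, 1.0) / 2.0
        xs.append(beta * (t + math.sqrt(t * t + 1.0)) ** 2)
    return xs

def mm_estimates(x):
    """Closed-form modified moment estimates of (alpha, beta) from the
    arithmetic mean s and harmonic mean r of the sample (s >= r always)."""
    s = sum(x) / len(x)
    r = len(x) / sum(1.0 / xi for xi in x)
    return math.sqrt(2.0 * (math.sqrt(s / r) - 1.0)), math.sqrt(s * r)

def mc_bias_mse(alpha, beta, n, reps=300):
    """Monte Carlo bias and MSE of the estimators at sample size n."""
    a_hats, b_hats = [], []
    for _ in range(reps):
        a_h, b_h = mm_estimates(bs_sample(alpha, beta, n))
        a_hats.append(a_h)
        b_hats.append(b_h)
    bias = (sum(a_hats) / reps - alpha, sum(b_hats) / reps - beta)
    mse = (sum((v - alpha) ** 2 for v in a_hats) / reps,
           sum((v - beta) ** 2 for v in b_hats) / reps)
    return bias, mse

# MSE should shrink as n grows, mirroring the pattern reported in the tables.
_, mse_small = mc_bias_mse(1.25, 3.0, 50)
_, mse_large = mc_bias_mse(1.25, 3.0, 500)
```

The same loop, with the MLE or the MCMC estimator substituted for `mm_estimates`, reproduces the structure of the study described above.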

All calculations in this section were carried out in R; for more details about the R packages see62. The results of the simulation and of the comparative study between our new proposed neutrosophic Birnbaum–Saunders distribution and its classical version are shown in Tables 1, 2, 3, 4, 5 and 6, obtained using RStudio. From these tables we conclude the following. Tables 1 and 2 show the biases, MSEs and AICs of the MLEs for \({\text{BS}}(\alpha ,\beta )\) and \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\), respectively; \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) has smaller biases, MSEs and AICs than \({\text{BS}}\left(\alpha ,\beta \right)\), so we conclude that the neutrosophic version is better than the classical version. In the context of interval estimation, \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) also has smaller ACLs for the asymptotic confidence intervals than \({\text{BS}}(\alpha ,\beta )\). Hence, for the MLEs, \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) behaves better than \({\text{BS}}\left(\alpha ,\beta \right).\) For both versions, the biases and MSEs decrease as the sample size increases. The results for the Bayesian estimators are shown in Tables 3, 4, 5 and 6: again, \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) has smaller biases and MSEs than \({\text{BS}}\left(\alpha ,\beta \right)\), as shown in Tables 5 and 6, and for the credible intervals \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) has smaller ACLs than \({\text{BS}}\left(\alpha ,\beta \right).\) The behavior of the Bayesian estimators was obtained for the three different prior distributions. So, we again conclude that \({\text{NBS}}({\alpha }_{N},{\beta }_{N})\) behaves better than \({\text{BS}}\left(\alpha ,\beta \right)\).

Table 1 Biases, MSEs, AICs and ACLs for the MLEs for \(BS\left(\alpha ,\beta \right).\)
Table 2 Biases, MSEs, AICs and ACLs for the MLEs for \(NBS({\alpha }_{N},{\beta }_{N})\).
Table 3 The ACLs for the credible intervals for \(BS\left(\alpha ,\beta \right).\)
Table 4 The ACLs for the credible intervals for \(NBS({\alpha }_{N},{\beta }_{N})\).
Table 5 Biases and MSEs for the Bayesian estimation for \(BS(\alpha ,\beta )\).
Table 6 Biases and MSEs for the Bayesian estimation for \(NBS({\alpha }_{N},{\beta }_{N})\).

Comparative study using real application

The main aim of this section is a comparative study between \(NBS({\alpha }_{N},{\beta }_{N})\) and \(BS\left(\alpha ,\beta \right)\). We introduce two real applications as follows:

Application 1

This application is based on data on alloy melting points; for more details see63. An alloy is made up of a combination of material constituents, including at least one metal. In general, evaluating melting points is quite challenging; therefore, the observations are indeterministic and can be expressed as intervals as follows:

[563.3, 545.5], [529.4, 511.6], [523.1, 503.5], [470.1, 449.2], [506.7, 489.0], [495.6, 479.1], [495.3, 467.9], [520.9, 495.6], [496.9, 472.8], [542.9, 519.1], [505.4, 484.0], [550.7, 525.9], [517.7, 500.9], [499.2, 483.0], [500.6, 480.0], [516.8, 499.6], [535.0, 515.1], [489.3, 464.4].

Application 2

The data represent the lifetimes of batteries. The lifetime (in hundreds of hours) of 23 batteries is given as:

[2.9, 3.99], [5.24, 7.2], [6.56, 9.02], [7.14, 9.82], [11.6, 15.96], [12.14, 16.69], [12.65, 17.4], [13.24, 18.21], [13.67, 18.79], [13.88, 19.09], [15.64, 21.51], [17.05, 23.45], [17.4, 23.93], [17.8, 24.48], [19.01, 26.14], [19.34, 26.59], [23.13, 31.81], [23.34, 32.09], [26.07, 35.84], [30.29, 41.65], [43.97, 60.46], [48.09, 66.13], [73.48, 98.04]. For more details see64.
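As a rough sketch of how such interval-valued observations can be turned into interval-valued (neutrosophic) parameter estimates, one can fit each endpoint sample separately. The snippet below does this for the alloy data of Application 1; the closed-form modified moment estimator is used here as an illustrative stand-in for the paper's neutrosophic ML/Bayes procedures, and endpoint-wise fitting is one simple convention, not necessarily the authors':

```python
import math

# Alloy melting-point intervals from Application 1.
alloy = [(563.3, 545.5), (529.4, 511.6), (523.1, 503.5), (470.1, 449.2),
         (506.7, 489.0), (495.6, 479.1), (495.3, 467.9), (520.9, 495.6),
         (496.9, 472.8), (542.9, 519.1), (505.4, 484.0), (550.7, 525.9),
         (517.7, 500.9), (499.2, 483.0), (500.6, 480.0), (516.8, 499.6),
         (535.0, 515.1), (489.3, 464.4)]

def mm_estimates(x):
    """Closed-form modified moment estimates of (alpha, beta) from the
    arithmetic mean s and harmonic mean r of the sample."""
    s = sum(x) / len(x)
    r = len(x) / sum(1.0 / xi for xi in x)
    return math.sqrt(2.0 * (math.sqrt(s / r) - 1.0)), math.sqrt(s * r)

# Fit each endpoint sample separately -> interval-valued parameter estimates.
upper = [a for a, _ in alloy]
lower = [b for _, b in alloy]
a_u, b_u = mm_estimates(upper)
a_l, b_l = mm_estimates(lower)
alpha_hat = (min(a_l, a_u), max(a_l, a_u))   # [alpha_L, alpha_U]
beta_hat = (min(b_l, b_u), max(b_l, b_u))    # [beta_L, beta_U]
```

As expected for such tightly clustered measurements, both endpoint fits give a \(\beta\) estimate near the sample center and a small \(\alpha\) (low relative variability).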

Figure 7 shows the architectural diagram of the algorithm proposed in this section. To assess the performance of our new distribution \(NBS({\alpha }_{N},{\beta }_{N})\), we compare it with its classical version \(BS\left(\alpha ,\beta \right)\) using three statistical criteria: minus twice the log-likelihood (− 2LL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), where

$$ AIC = - 2 Log L\left( {\underline {{\Theta }}} \right) + 2k,\quad BIC = - 2 Log L\left( {\underline {{\Theta }}} \right) + kLog\left[ n \right]. $$

where \(\underset{\_}{\Theta }\) is the vector of distribution parameters, \(L\left(\underset{\_}{\Theta }\right)\) is the likelihood function, \(k\) is the number of estimated parameters and \(n\) is the data size. Smaller values of − 2LL, AIC and BIC indicate a better-fitting distribution. Table 7 shows the result of the comparison between our new distribution and its classical version. The results in Table 7 show that, for these data, our new distribution is better than its classical version in both applications, because all goodness-of-fit criteria have smaller values for the neutrosophic version than for the classical version.
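Both criteria follow directly from the formulas above once the maximized log-likelihood is available. A minimal helper (the log-likelihood values below are made up purely to illustrate the comparison, not taken from Table 7):

```python
import math

def aic_bic(loglik, k, n):
    """AIC and BIC from a maximized log-likelihood loglik, k estimated
    parameters and n observations; smaller values mean a better fit."""
    return -2.0 * loglik + 2.0 * k, -2.0 * loglik + k * math.log(n)

# Hypothetical log-likelihoods for two candidate two-parameter fits to n = 23 points.
aic1, bic1 = aic_bic(-80.0, 2, 23)
aic2, bic2 = aic_bic(-75.0, 2, 23)
# The second model attains the higher log-likelihood, hence the smaller
# AIC and BIC, and would be preferred under both criteria.
```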

Figure 7

Architectural Diagram of the Proposed Algorithm.

Table 7 The result of the comparison between our new distribution \(NBS({\alpha }_{N},{\beta }_{N})\) and \(BS\left(\alpha ,\beta \right)\).

Conclusion

We introduced a new distribution, called the neutrosophic Birnbaum–Saunders distribution and denoted by \(NBS({\alpha }_{N},{\beta }_{N})\). Some of its statistical properties were derived, such as the neutrosophic probability density function, neutrosophic cumulative distribution function, neutrosophic hazard function, neutrosophic mean, mode, median, variance, moments about the origin, moment generating function, characteristic function, quantile function, cumulant generating function, order statistics and entropies. The neutrosophic maximum likelihood estimators and neutrosophic Bayesian estimators were derived. A simulation study was performed to study the behavior of the different estimators for different parameter values and sample sizes, and the neutrosophic Birnbaum–Saunders distribution was compared with its classical version using statistical criteria such as the bias, MSE, AIC and ACL. Finally, real data on the melting points of alloys and on the lifetimes of batteries were used to show the validity of \(NBS({\alpha }_{N},{\beta }_{N})\) in real life, and the performance of \(NBS({\alpha }_{N},{\beta }_{N})\) and \(BS\left(\alpha ,\beta \right)\) on these data was compared using the AIC, BIC and − 2LL, which showed the better performance of \(NBS({\alpha }_{N},{\beta }_{N})\) over \(BS\left(\alpha ,\beta \right).\) The neutrosophic field still has many points that need further research; in future work, we will try to introduce more neutrosophic distributions to solve and describe more real-life applications, and to pursue further research on other topics in neutrosophic statistics.