Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2013, Article ID 364705, 7 pages
http://dx.doi.org/10.1155/2013/364705

Research Article
Bayesian and Non-Bayesian Inference for Survival Data Using Generalised Exponential Distribution

Chris Bambey Guure and Samuel Bosomprah
Department of Biostatistics, School of Public Health, University of Ghana, Legon, Accra, Ghana
Correspondence should be addressed to Chris Bambey Guure; chris@chrisguure.com
Received 29 April 2013; Revised 5 August 2013; Accepted 8 August 2013
Academic Editor: Zhidong Bai
Copyright © 2013 C. B. Guure and S. Bosomprah. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A two-parameter lifetime distribution known as the generalised exponential distribution was introduced by Kundu and Gupta. This distribution has been touted as an alternative to the well-known two-parameter Weibull and gamma distributions. We seek to determine the parameters and the survival function of this distribution. The survival function gives the probability that a unit under investigation will survive beyond a certain specified time, say x. We have employed different data sets to estimate the parameters and to see how well the distribution can be used to analyse survival data. A comparison is made of the estimators used in this study; standard errors of the estimators are determined and used for the comparisons. A simulation study is also carried out, and the mean squared errors and absolute biases are obtained for the purpose of comparison.

1. Introduction

As the need grows for conceptualization, formalization, and abstraction in biology, so too does mathematics' relevance to the field, according to Fagerström et al. [1]. Mathematics is particularly important for analysing and characterizing random variation, for example, the size and weight of individuals in populations, their sensitivity to chemicals, and time-to-event cases, such as the amount of time an individual needs to recover from illness. The frequency distribution of such data is a major factor determining the type of statistical analysis that can validly be carried out on any data set. Many widely used statistical methods, such as ANOVA (analysis of variance) and regression analysis, require that the data be normally distributed, but only rarely is the frequency distribution of data tested when these techniques are used (Limpert et al. [2]).

Gupta and Kundu [3] recently proposed the two-parameter generalised exponential (GE) distribution as an alternative to the lognormal, gamma, and Weibull distributions and studied some of its properties. Some references on the GE distribution are Raqab [4], Raqab and Ahsanullah [5], Zheng [6], and Kundu and Gupta [7]. According to Gupta and Kundu [8], the two-parameter GE(θ, p) can have increasing or decreasing failure rates depending on the shape parameter.

Some research has been done comparing the MLE to the Bayesian approach in estimating the survival function and the parameters of the Weibull distribution, which is similar to the generalised exponential distribution. Amongst others, Sinha [9] determined the Bayes estimates of the reliability function and the hazard rate of the Weibull failure time distribution by employing only the squared error loss function; Abdel-Wahid and Winterbottom [10] studied approximate Bayesian estimates for the Weibull reliability function and hazard rate from censored data by employing a new method with the potential of reducing the number of terms in the Lindley procedure; Guure and Ibrahim [11] studied Bayesian analysis of the survival function and failure rate of the Weibull distribution with censored data. Huang and Wu [12] considered Bayesian estimation and prediction for the Weibull model with progressive censoring. Similar work can be seen in Guure et al. [13], Zellner [14], Guure et al. [15], Al-Aboud [16], Al-Athari [17], and Pandey et al. [18].

The maximum likelihood method is one of the most popular estimation techniques for many distributions and, to the classical statistician, has some outstanding properties. In this paper we compare the maximum likelihood estimator with the Bayesian estimators to determine the best method for estimating the parameters of the generalised exponential distribution.

The remainder of the paper is arranged as follows. Section 2 contains the materials and methods, which deal with the derivation of the parameters under the maximum likelihood and Bayesian estimators. Section 3 deals with the analysis of some real data sets, followed by a simulation study in Section 4. Results and discussion are in Section 5, and Section 6 concludes.

2. Materials and Methods

Let (t_1, ..., t_n) be a set of n random lifetimes from the generalised exponential distribution with parameters θ and p, where θ is the scale parameter and p is the shape parameter. The cumulative distribution function (cdf), the probability density function (pdf), and the survival function S(t) are given, respectively, as

F(t; θ, p) = {1 − exp(−θt)}^p, θ, p, t > 0, (1)

f(t; θ, p) = pθ {1 − exp(−θt)}^(p−1) exp(−θt), (2)

S(t; θ, p) = 1 − {1 − exp(−θt)}^p. (3)

Let the GE distribution with shape parameter p and scale parameter θ be denoted by GE(θ, p).

2.1. Maximum Likelihood Estimation

Given the set (t_1, ..., t_n) of n random lifetimes from the generalised exponential distribution with parameters θ and p, the likelihood function is

L(t_i; θ, p) = ∏_{i=1}^{n} [pθ {1 − exp(−θt_i)}^(p−1) exp(−θt_i)]. (4)

To obtain the equations for the unknown parameters, we take the log of (4) and differentiate partially with respect to each unknown parameter. By employing an iterative procedure such as Newton-Raphson, the estimates of the scale and shape parameters can easily be obtained; readers can refer to Kundu and Gupta [7] for details. The survival function is then estimated as

Ŝ(t) = 1 − {1 − exp(−θ̂t)}^p̂, (5)

where θ̂ and p̂ are the maximum likelihood estimates of the parameters.
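For concreteness, the quantities in (1)-(5) can be sketched in Python; this is a minimal illustration of ours (the function names are our own, and NumPy is assumed), not code from the paper:

```python
import numpy as np

def ge_cdf(t, theta, p):
    # F(t; theta, p) = {1 - exp(-theta t)}^p, eq. (1)
    return (1.0 - np.exp(-theta * t)) ** p

def ge_pdf(t, theta, p):
    # f(t; theta, p) = p theta {1 - exp(-theta t)}^{p-1} exp(-theta t), eq. (2)
    return p * theta * (1.0 - np.exp(-theta * t)) ** (p - 1) * np.exp(-theta * t)

def ge_survival(t, theta, p):
    # S(t; theta, p) = 1 - F(t; theta, p), eq. (3)
    return 1.0 - ge_cdf(t, theta, p)

def ge_loglik(t, theta, p):
    # log of the likelihood (4) for a sample t_1, ..., t_n
    t = np.asarray(t, dtype=float)
    n = t.size
    return (n * np.log(p * theta)
            + (p - 1) * np.sum(np.log(1.0 - np.exp(-theta * t)))
            - theta * np.sum(t))
```

Note that with p = 1 the cdf reduces to the ordinary exponential cdf, which provides a quick sanity check on the formulas.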
2.2. Bayesian Inference

In Bayesian analysis, the parameter of interest is considered to be a random variable with a prior distribution, that is, the distribution of the parameter before any data are observed. The selection of a prior distribution is more often than not based on the type of prior information available to us. When we have little or no information about the parameter, a noninformative prior should be used; otherwise, an informative prior is appropriate. In analysing data from medical, engineering, or biological studies, it is possible to obtain information from similar studies in the past, and, if even that is unattainable, information from an expert can be modelled to fit an appropriate prior distribution; this is referred to as prior elicitation.

We let the two unknown parameters take gamma prior distributions, assuming that the hyperparameters are all known and greater than zero, that is, a, b, c, d > 0. The gamma prior is assumed for this distribution because both the scale and shape parameters are greater than zero:

v_1(θ) ∝ θ^(b−1) exp(−aθ), θ, a, b > 0,
v_2(p) ∝ p^(d−1) exp(−cp), p, c, d > 0. (6)

Bayesian inference is based on the posterior distribution, which is obtained by dividing the joint density function by the marginal distribution function, as given below:

π(θ, p | t_i) = v_1(θ) v_2(p) L(t_i; θ, p) / ∫_0^∞ ∫_0^∞ v_1(θ) v_2(p) L(t_i; θ, p) dθ dp. (7)

Due to the complex nature of the posterior distribution given in (7), the Lindley approximation is employed in order to estimate the unknown parameters.

The Bayes estimator is considered under two loss functions. Since, in drawing conclusions about the survival or duration of a living organism, an overestimation could be more detrimental than an underestimation, or vice versa, we consider both an asymmetric (general entropy) loss function and a symmetric (squared error) loss function.

2.2.1. Lindley Approximation

Lindley [19] proposed an approximation for a ratio of integrals of the form

∫ ω(α) exp{ℓ(α)} dα / ∫ v(α) exp{ℓ(α)} dα, (8)

where ℓ(α) is the log-likelihood, ω(α) and v(α) are arbitrary functions of α, v(α) is the prior distribution for α, and ω(α) = u(α) · v(α), with u(α) being some function of interest. The posterior expectation, according to Sinha [9], is

E{u(α) | x} = ∫ u(α) exp{ℓ(α) + ρ(α)} dα / ∫ exp{ℓ(α) + ρ(α)} dα, (9)

where ρ(α) = log{v(α)} and ℓ(α) represents the log-likelihood function. Considering the Bayesian estimator under the squared error loss function, which is the posterior mean, the posterior expectation can be approximated asymptotically with respect to the two parameters by

û_BS = u(θ, p) + (1/2)[(u_11 σ_11) + (u_22 σ_22)] + u_1 ρ_1 σ_11 + u_2 ρ_2 σ_22 + (1/2)[(ℓ_30 u_1 σ_11²) + (ℓ_03 u_2 σ_22²)], (10)

where, for the two parameters,

u = θ: u_1 = ∂u/∂θ = 1, u_11 = 0,
u = p: u_2 = ∂u/∂p = 1, u_22 = 0,
ρ = ln v_1(θ) + ln v_2(p),
ρ_1 = (b − 1)/θ − a, ρ_2 = (d − 1)/p − c,
σ_11 = (−ℓ_20)^(−1), σ_22 = (−ℓ_02)^(−1). (11)

The second and third derivatives of the log-likelihood with respect to the scale and shape parameters are

ℓ_20 = −n/θ² − Σ_{i=1}^{n} (p − 1) t_i² exp(−θt_i) / (1 − exp(−θt_i)) − Σ_{i=1}^{n} (p − 1) t_i² (exp(−θt_i))² / (1 − exp(−θt_i))²,

ℓ_30 = 2n/θ³ + Σ_{i=1}^{n} (p − 1) t_i³ exp(−θt_i) / (1 − exp(−θt_i)) + Σ_{i=1}^{n} 3(p − 1) t_i³ (exp(−θt_i))² / (1 − exp(−θt_i))² + Σ_{i=1}^{n} 2(p − 1) t_i³ (exp(−θt_i))³ / (1 − exp(−θt_i))³,

ℓ_02 = −n/p², ℓ_03 = 2n/p³. (12)

To estimate the survival function under the squared error loss function, we let

u(S) = 1 − {1 − exp(−θt)}^p, (13)

where

u_1 = ∂u(S)/∂θ, u_11 = ∂²u(S)/∂θ², u_2 = ∂u(S)/∂p, u_22 = ∂²u(S)/∂p². (14)

2.2.2. General Entropy Loss Function

The general entropy loss function is a generalisation of the entropy loss function. The Bayes estimator â_BG of α under general entropy loss is

â_BG = [E_α(α^(−k))]^(−1/k), (15)

provided E_α(·) exists and is finite. The posterior expectations for this loss function with respect to the parameters and the survival function are

E{θ^(−k), p^(−k)} = ∬ [θ^(−k), p^(−k)] v_1(θ) v_2(p) L(t_i; θ, p) dθ dp / ∬ v_1(θ) v_2(p) L(t_i; θ, p) dθ dp,

E{S^(−k)} = ∬ [1 − {1 − exp(−θt)}^p]^(−k) v_1(θ) v_2(p) L(t_i; θ, p) dθ dp / ∬ v_1(θ) v_2(p) L(t_i; θ, p) dθ dp. (16)

A Lindley approach similar to that for the squared error loss function is used for the general entropy loss function, with

u = θ^(−k): u_1 = ∂u/∂θ = −kθ^(−k−1), u_11 = ∂²u/∂θ² = k(k + 1)θ^(−k−2),
u = p^(−k): u_2 = ∂u/∂p = −kp^(−k−1), u_22 = ∂²u/∂p² = k(k + 1)p^(−k−2). (17)

For the general entropy loss function, the posterior expectation, according to Lindley [19], can then be approximated by

â_BG = {u(θ, p) + (1/2)[(u_11 σ_11) + (u_22 σ_22)] + u_1 ρ_1 σ_11 + u_2 ρ_2 σ_22 + (1/2)[(ℓ_30 u_1 σ_11²) + (ℓ_03 u_2 σ_22²)]}^(−1/k). (18)

The survival function is similarly obtained by substituting (14) into (18).

3. Real Data Analysis

3.1. Example 1

We analyse two data sets in this section, which we consider relatively small and moderate, for illustration and comparison purposes. The data are obtained from Lawless [20] and represent survival times for two groups of laboratory mice, all of which were exposed to a fixed dose of radiation at an age of 5 to 6 weeks. The first group of mice lived in a conventional lab environment, and the second group was kept in a germ-free environment. The cause of death for each mouse was assigned after autopsy as one of three things: thymic lymphoma (C_1), reticulum cell sarcoma (C_2), or other causes (C_3). All the mice died by the end of the experiment, so there is no censoring. For the purpose of our study, we consider the first data set, from the control group, which is relatively small with n = 22, and the third data set, from the germ-free group, which is relatively moderate with n = 38.

Using the iterative procedure suggested at the beginning of this paper, the MLEs of θ̂ and p̂ for the relatively small sample are 0.015024 and 36.935530, respectively, with corresponding standard errors 0.003203 and 8.374682. The relatively moderate sample has θ̂ = 0.003735 and p̂ = 8.624091, with corresponding standard errors 0.000606 and 1.999012.

Since we do not have any prior information on the hyperparameters, we assume a = b = c = d = 0.0001. This makes the priors on θ̂ and p̂ proper, and the corresponding posteriors are also proper.

When we compute the Bayes estimators under squared error loss of θ̂ and p̂ for the relatively small sample size, the parameter estimates and standard errors obtained are, respectively, 0.014605, 36.935530 and 0.003114, 8.374682. For the moderate sample, we have θ̂ = 0.003716 with a standard error of 0.000603 and p̂ = 8.624091 with a standard error of 1.999012.

Computing the Bayes estimates of θ̂ and p̂ and their corresponding standard errors under the general entropy loss function with loss parameter 0.6, we have 0.014270, 35.630480 and 0.003042, 8.096443. With the loss parameter −0.6, we have 0.014517, 36.600361 and 0.003095, 8.303224, respectively.

Considering the Bayes estimates of θ̂ and p̂ on the relatively moderate sample and their corresponding standard errors under the general entropy loss function with loss parameter 0.6, we have 0.003646, 8.445543 and 0.000591, 1.970048. With the loss parameter −0.6, we have 0.003698, 8.578749 and 0.000600, 1.991657, respectively.

Observably, the Bayes estimator under squared error loss for the shape parameter (p̂) has the same estimate and standard error as the classical maximum likelihood estimator, but for the scale parameter (θ̂), Bayes under squared error has a smaller standard error than MLE.

Comparing all the estimators, it is clear from the results that the Bayes estimator under the general entropy loss function with a loss parameter of +0.6 has the smallest standard error and estimate for both the shape parameter (p̂) and the scale parameter (θ̂).
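The numerical maximisation used in these examples can be sketched as follows; this is our own Python illustration, in which a simple ternary search on the profile log-likelihood stands in for the Newton-Raphson iteration described in Section 2.1, exploiting the fact that for fixed θ the shape MLE has the closed form p̂(θ) = −n / Σ ln(1 − exp(−θt_i)):

```python
import numpy as np

def fit_ge_mle(t, iters=200):
    """Maximum likelihood fit of GE(theta, p).

    For fixed theta the shape MLE is available in closed form, so the
    profile log-likelihood in theta is maximised by a ternary search
    (a simple stand-in for the Newton-Raphson iteration in the text).
    """
    t = np.asarray(t, dtype=float)
    n = t.size

    def profile(theta):
        s = np.log(1.0 - np.exp(-theta * t)).sum()   # sum of ln(1 - e^{-theta t_i})
        p = -n / s                                    # closed-form p-hat(theta)
        return n * np.log(p * theta) + (p - 1.0) * s - theta * t.sum()

    lo, hi = 0.01 / t.mean(), 100.0 / t.mean()        # wide bracket for theta
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if profile(m1) < profile(m2):
            lo = m1
        else:
            hi = m2
    theta_hat = (lo + hi) / 2.0
    p_hat = -n / np.log(1.0 - np.exp(-theta_hat * t)).sum()
    return theta_hat, p_hat
```

The plug-in survival estimate (5) then follows by evaluating 1 − {1 − exp(−θ̂t)}^p̂ at the fitted values.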
3.2. Example 2

In this section, we analyse another real data set, which we consider relatively large and complete, for illustrative purposes. The data originate from Bjerkedal [21] and represent the survival times of guinea pigs injected with different doses of tubercle bacilli. It is known that guinea pigs have high susceptibility to human tuberculosis, and this informs our decision to use the data in our study. Here, our primary concern is with the animals in the same cage that were under the same regimen. The regimen number is the common logarithm of the number of bacillary units in 0.5 mL of challenge solution; that is, regimen 6.6 corresponds to 4.4 × 10^6 bacillary units per 0.5 mL (log 4.4 × 10^6 = 6.6). Corresponding to regimen 6.6, there were 72 observations.

Table 1: Average parameter estimates and their corresponding standard errors (n = 72); standard errors in parentheses.

Parameter | MLE                 | BS                  | BG (k = 0.6)        | BG (k = −0.6)
θ̂         | 0.016962 (0.001999) | 0.016958 (0.001998) | 0.016773 (0.001977) | 0.016911 (0.001993)
p̂         | 2.474104 (0.491576) | 2.474104 (0.491576) | 2.446856 (0.488365) | 2.467235 (0.490766)

Table 2: Standard errors of the survival function.

Est           | n = 22   | n = 38   | n = 72
MLE           | 0.517012 | 0.467909 | 0.529970
BS            | 0.543902 | 0.471755 | 0.530072
BG (k = 0.6)  | 0.556815 | 0.480653 | 0.531440
BG (k = −0.6) | 0.547363 | 0.474025 | 0.530418

Observing from Table 1, it is evident that the estimator with the smallest parameter estimate, and a correspondingly smaller standard error, is the Bayesian estimator with the general entropy loss function. This occurred for both parameters with a positive loss parameter, that is, 0.6.

The importance of the survival function cannot be ignored; the correctness of its estimate is therefore crucial to both biological and medical studies. As clearly presented in Table 2, the estimator with the smallest standard error under all the samples is the classical maximum likelihood estimator; therefore, MLE is preferred as a better estimator of the survival function than the others.

4. Simulation Study

Since it is difficult to compare the performance of the estimators theoretically, and also to validate the real data employed in this paper, we have performed extensive simulations to compare the estimators through mean squared errors and absolute biases, employing different sample sizes with different parameter values.

We have considered in the simulation study sample sizes of n = 25, 50, and 100, representative of small, moderate, and large data sets. The following steps were employed to generate the data. The generation of GE(θ, p) variates is simple, as stated in Gupta and Kundu [8]: if U follows a uniform distribution on the interval [0, 1], then Y = −ln(1 − U^(1/p))/θ follows GE(θ, p). Consequently, with a very good uniform random number generator, the generation of GE random deviates is immediate. A lifetime T, representing the failure time of a product or unit, is generated from the GE distribution for each of the sample sizes indicated above. The values of the assumed actual shape parameter (p) of the GE distribution were taken to be 0.8, 1.6, and 2.2, and the scale parameter (θ) was taken to be one throughout.
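The inverse-transform step described above can be sketched as follows (a minimal Python illustration of ours; the helper name rge is our own):

```python
import numpy as np

def rge(size, theta, p, rng=None):
    """Draw GE(theta, p) lifetimes via the inverse transform
    Y = -ln(1 - U**(1/p)) / theta, with U ~ Uniform(0, 1)."""
    rng = np.random.default_rng(rng)
    u = rng.random(size)
    return -np.log(1.0 - u ** (1.0 / p)) / theta
```

With p = 1 this reduces to the ordinary exponential distribution, which gives a quick sanity check on the generator; for general p, the empirical distribution function should track (1 − exp(−θy))^p.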
We observed that the parameter estimates under the classical maximum likelihood method could not be obtained in closed form, and we therefore employed the Newton-Raphson iterative approach via the Hessian matrix. This can simply be implemented in the R programming language with the package maxLik.

To compute the Bayes estimates, θ and p are assumed to take Gamma(a, b) and Gamma(c, d) priors, respectively. We set the hyperparameters to 0.0001, that is, a = b = c = d = 0.0001, in order to obtain proper priors; this approach was suggested by Press and Tanur [22]. Note that at this point the posterior distribution is also proper. The values of the loss parameter for the general entropy loss function are k = ±0.6, which can be extended to other values of the loss parameter; readers are referred to Calabria and Pulcini [23] for the choice of the loss parameter values. These steps were iterated R = 1000 times, and the mean squared error and absolute bias values were determined and are presented in Tables 3, 4, and 5 for the purpose of comparison.

Table 3: Average mean squared errors and absolute biases of θ̂ (absolute biases in parentheses).

n   | Est           | p = 0.8             | p = 1.6             | p = 2.2
25  | ML            | 0.117259 (0.255219) | 0.077502 (0.206616) | 0.062024 (0.190155)
    | BS            | 0.117108 (0.255095) | 0.076553 (0.205513) | 0.060763 (0.188471)
    | BG (k = 0.6)  | 0.118176 (0.251456) | 0.075759 (0.208706) | 0.060024 (0.185077)
    | BG (k = −0.6) | 0.108193 (0.246358) | 0.075167 (0.203752) | 0.059216 (0.179142)
50  | ML            | 0.048628 (0.170036) | 0.031950 (0.136181) | 0.027519 (0.130563)
    | BS            | 0.048623 (0.170024) | 0.031869 (0.136036) | 0.027383 (0.130287)
    | BG (k = 0.6)  | 0.047058 (0.163792) | 0.032612 (0.139630) | 0.027723 (0.130121)
    | BG (k = −0.6) | 0.045906 (0.163408) | 0.033207 (0.159536) | 0.027145 (0.127761)
100 | ML            | 0.023911 (0.119557) | 0.015292 (0.097709) | 0.012631 (0.088584)
    | BS            | 0.023910 (0.119556) | 0.015283 (0.097685) | 0.012616 (0.088533)
    | BG (k = 0.6)  | 0.021514 (0.115652) | 0.014627 (0.094909) | 0.012650 (0.088935)
    | BG (k = −0.6) | 0.020102 (0.113236) | 0.014052 (0.092336) | 0.013054 (0.089552)

Table 4: Average mean squared errors and absolute biases of p̂ (absolute biases in parentheses).

n   | Est           | p = 0.8             | p = 1.6             | p = 2.2
25  | ML            | 0.076755 (0.191499) | 0.490147 (0.476051) | 0.743582 (0.661779)
    | BS            | 0.076755 (0.191499) | 0.490147 (0.476051) | 0.743582 (0.661779)
    | BG (k = 0.6)  | 0.065099 (0.188381) | 0.553240 (0.462777) | 0.910514 (0.683827)
    | BG (k = −0.6) | 0.062020 (0.182668) | 0.481165 (0.451018) | 0.898282 (0.647155)
50  | ML            | 0.025391 (0.118187) | 0.147209 (0.283414) | 0.364344 (0.445543)
    | BS            | 0.025391 (0.118187) | 0.147209 (0.283414) | 0.364344 (0.445543)
    | BG (k = 0.6)  | 0.027027 (0.123510) | 0.149940 (0.288091) | 0.368843 (0.438096)
    | BG (k = −0.6) | 0.026932 (0.126065) | 0.159536 (0.294393) | 0.364682 (0.435072)
100 | ML            | 0.011782 (0.085569) | 0.060777 (0.186684) | 0.130699 (0.278809)
    | BS            | 0.011782 (0.085569) | 0.060777 (0.186684) | 0.130699 (0.278809)
    | BG (k = 0.6)  | 0.012004 (0.084495) | 0.058056 (0.187923) | 0.141918 (0.286061)
    | BG (k = −0.6) | 0.011583 (0.084047) | 0.059122 (0.188084) | 0.126457 (0.277587)
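The Lindley adjustment of (10)-(12) for the two parameters under squared error loss can be sketched as follows. This is our own illustration, evaluated at supplied estimates θ̂ and p̂ (in the paper these would be the MLEs); the near-zero hyperparameter defaults mirror the noninformative choice a = b = c = d = 0.0001:

```python
import numpy as np

def lindley_bayes_sq_error(t, theta_hat, p_hat, a=1e-4, b=1e-4, c=1e-4, d=1e-4):
    """Lindley-approximated Bayes estimates of (theta, p) under squared
    error loss, following (10)-(12) with Gamma(a, b) and Gamma(c, d)
    priors and all hyperparameters assumed known."""
    t = np.asarray(t, dtype=float)
    n = t.size
    e = np.exp(-theta_hat * t)
    z = 1.0 - e

    # second and third log-likelihood derivatives, eq. (12)
    l20 = (-n / theta_hat**2
           - np.sum((p_hat - 1) * t**2 * e / z)
           - np.sum((p_hat - 1) * t**2 * e**2 / z**2))
    l30 = (2 * n / theta_hat**3
           + np.sum((p_hat - 1) * t**3 * e / z)
           + np.sum(3 * (p_hat - 1) * t**3 * e**2 / z**2)
           + np.sum(2 * (p_hat - 1) * t**3 * e**3 / z**3))
    l02 = -n / p_hat**2
    l03 = 2 * n / p_hat**3

    # prior terms and posterior variances, eq. (11)
    rho1 = (b - 1) / theta_hat - a
    rho2 = (d - 1) / p_hat - c
    s11 = -1.0 / l20
    s22 = -1.0 / l02

    # eq. (10) with u = theta (u1 = 1) and u = p (u2 = 1), respectively
    theta_bs = theta_hat + rho1 * s11 + 0.5 * l30 * s11**2
    p_bs = p_hat + rho2 * s22 + 0.5 * l03 * s22**2
    return theta_bs, p_bs
```

With these noninformative hyperparameters the adjustment is small, which is consistent with the near-identical MLE and BS columns in the tables.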
5. Results and Discussion

In this study, our objective is to obtain estimates of the parameters and to observe the performance of the methods used for estimation. To examine the estimates of the parameters, we obtain the absolute biases and mean squared errors of the estimates under the different methods of estimation.

From Table 3, it is very clear that the dominant estimator, with the smallest mean squared errors vis-à-vis the absolute biases for the scale parameter (θ̂), is the Bayesian estimator under the general entropy loss function, followed closely by Bayes under the squared error loss function. We also observe that, as the sample size increases, the mean squared errors of all the estimators decrease steadily; this is simply an indication of how good and reliable the estimators are.

When we consider Table 4, which contains the mean squared errors and absolute biases of the estimated shape parameter (p̂), we notice that the mean squared errors and absolute biases of two of the estimators, maximum likelihood and Bayes under the squared error loss function, have the same values for the estimated shape parameter. This is expected, in that the priors used for the Bayesian analysis are noninformative. With regard to the survival function, the Bayes estimator under the general entropy loss function gives a minimum bias with relatively small samples, while the maximum likelihood estimator is slightly ahead of the other estimators with respect to the mean squared error.

Table 5: Average mean squared errors and absolute biases of the survival function S(t) for n = 25, 50, and 100 (absolute biases in parentheses).

n   | Est           | p = 0.8             | p = 1.6             | p = 2.2
25  | ML            | 0.075982 (0.042809) | 0.082151 (0.045894) | 0.078309 (0.044532)
    | BS            | 0.075986 (0.042836) | 0.082597 (0.046023) | 0.078476 (0.044688)
    | BG (k = 0.6)  | 0.080981 (0.045294) | 0.082265 (0.045688) | 0.086399 (0.046789)
    | BG (k = −0.6) | 0.077210 (0.042326) | 0.083746 (0.044032) | 0.083479 (0.043191)
50  | ML            | 0.075706 (0.031059) | 0.080699 (0.031094) | 0.078253 (0.031479)
    | BS            | 0.075725 (0.031063) | 0.080711 (0.031107) | 0.078272 (0.031507)
    | BG (k = 0.6)  | 0.079284 (0.031347) | 0.082140 (0.032464) | 0.078831 (0.031191)
    | BG (k = −0.6) | 0.075028 (0.031119) | 0.078181 (0.031753) | 0.080356 (0.032732)
100 | ML            | 0.073820 (0.022000) | 0.075437 (0.022445) | 0.074257 (0.022533)
    | BS            | 0.073923 (0.022000) | 0.075437 (0.022447) | 0.077209 (0.022535)
    | BG (k = 0.6)  | 0.072898 (0.021704) | 0.078784 (0.022582) | 0.076435 (0.022308)
    | BG (k = −0.6) | 0.070723 (0.022449) | 0.075552 (0.022798) | 0.076741 (0.022954)
ML: maximum likelihood, BG: general entropy, and BS: squared error.

From Table 5, we observe that the classical maximum likelihood estimator (MLE), compared with the Bayes estimators under the squared error and general entropy loss functions, has the smallest mean squared error values as well as the minimum absolute bias for the estimated survival function of the generalised exponential distribution. This implies that the maximum likelihood estimator may be preferred to the others when estimating the survival function.

6. Conclusion

In this paper, we have considered Bayes estimation of the unknown parameters of the generalised exponential distribution. We have assumed a gamma prior on both parameters, and we provide the Bayes estimators under the assumptions of squared error and general entropy loss functions. We observed that the Bayes estimators cannot be obtained in explicit form, due to the complex nature of the posterior distribution from which Bayes inference is drawn; therefore, Lindley's numerical approximation procedure is used. It is also observed that the parameter estimates under the classical maximum likelihood method could not be obtained in closed form; we therefore employed the Newton-Raphson iterative approach via the Hessian matrix.

From the results and discussion above, it is evident that the Bayesian estimator under the general entropy loss function performed somewhat better than Bayes under the squared error loss function and the maximum likelihood estimator for estimating the scale parameter θ̂, in both MSE and absolute bias. In the case of the shape parameter (p̂), the Bayesian estimator under the squared error loss function and the maximum likelihood estimator are almost equivalent. For the survival function, maximum likelihood performed better than the other estimators.

References

[1] T. Fagerström, P. Jagers, P. Schuster, and E. Szathmary, "Biologists put on mathematical glasses," Science, vol. 274, no. 5295, pp. 2039-2040, 1996.
[2] E. Limpert, W. A. Stahel, and M. Abbt, "Log-normal distributions across the sciences: keys and clues," BioScience, vol. 51, no. 5, pp. 341-352, 2001.
[3] R. D. Gupta and D. Kundu, "Generalized exponential distributions," Australian & New Zealand Journal of Statistics, vol. 41, no. 2, pp. 173-188, 1999.
[4] M. Z. Raqab, "Inferences for generalized exponential distribution based on record statistics," Journal of Statistical Planning and Inference, vol. 104, no. 2, pp. 339-350, 2002.
[5] M. Z. Raqab and M. Ahsanullah, "Estimation of the location and scale parameters of generalized exponential distribution based on order statistics," Journal of Statistical Computation and Simulation, vol. 69, no. 2, pp. 109-123, 2001.
[6] G. Zheng, "On the Fisher information matrix in type II censored data from the exponentiated exponential family," Biometrical Journal, vol. 44, no. 3, pp. 353-357, 2002.
[7] D. Kundu and R. D. Gupta, "Generalized exponential distribution: Bayesian estimations," Computational Statistics & Data Analysis, vol. 52, no. 4, pp. 1873-1883, 2008.
[8] R. D. Gupta and D. Kundu, "Generalized exponential distribution: different method of estimations," Journal of Statistical Computation and Simulation, vol. 69, no. 4, pp. 315-337, 2001.
[9] S. K. Sinha, "Bayes estimation of the reliability function and hazard rate of a Weibull failure time distribution," Trabajos de Estadistica, vol. 1, no. 2, pp. 47-56, 1986.
[10] A. A. Abdel-Wahid and A. Winterbottom, "Approximate Bayesian estimates for the Weibull reliability function and hazard rate from censored data," Journal of Statistical Planning and Inference, vol. 16, no. 3, pp. 277-283, 1987.
[11] C. B. Guure and N. A. Ibrahim, "Bayesian analysis of the survival function and failure rate of Weibull distribution with censored data," Mathematical Problems in Engineering, vol. 2012, Article ID 329489, 18 pages, 2012.
[12] S. R. Huang and S. J. Wu, "Bayesian estimation and prediction for Weibull model with progressive censoring," Journal of Statistical Computation and Simulation, vol. 82, no. 11, pp. 1607-1620, 2012.
[13] C. B. Guure, N. A. Ibrahim, and M. Bakri Adam, "Bayesian inference of the Weibull model based on interval-censored survival data," Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 849520, 10 pages, 2013.
[14] A. Zellner, "Bayesian estimation and prediction using asymmetric loss functions," Journal of the American Statistical Association, vol. 81, no. 394, pp. 446-451, 1986.
[15] C. B. Guure, N. A. Ibrahim, M. B. Adam, S. Bosomprah, and A. O. M. Ahmed, "Bayesian parameter and reliability estimate of Weibull failure time distribution," Bulletin of the Malaysian Mathematical Sciences Society. In press.
[16] F. M. Al-Aboud, "Bayesian estimations for the extreme value distribution using progressive censored data and asymmetric loss," International Mathematical Forum, vol. 4, no. 33-36, pp. 1603-1622, 2009.
[17] F. M. Al-Athari, "Parameter estimation for the double-Pareto distribution," Journal of Mathematics and Statistics, vol. 7, no. 4, pp. 289-294, 2011.
[18] B. N. Pandey, N. Dwividi, and B. Pulastya, "Comparison between Bayesian and maximum likelihood estimation of the scale parameter in Weibull distribution with known shape under linex loss function," Journal of Scientometric Research, vol. 55, pp. 163-172, 2011.
[19] D. V. Lindley, "Approximate Bayesian methods," Trabajos de Estadistica, vol. 31, pp. 223-245, 1980.
[20] J. F. Lawless, Statistical Models and Methods for Lifetime Data, Wiley, New York, NY, USA, 2003.
[21] T. Bjerkedal, "Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli," American Journal of Epidemiology, vol. 72, no. 1, pp. 130-148, 1960.
[22] S. J. Press and J. M. Tanur, The Subjectivity of Scientists and the Bayesian Approach, Wiley, New York, NY, USA, 2001.
[23] R. Calabria and G. Pulcini, "Point estimation under asymmetric loss functions for left-truncated exponential samples," Communications in Statistics, vol. 25, no. 3, pp. 585-600, 1996.