้ข˜็›ฎ
้ข˜็›ฎ
ๅ•้กน้€‰ๆ‹ฉ้ข˜

Consider a population with mean μ and variance σ² < ∞. You are comparing two estimators μ̂₁ and μ̂₂ for the population mean μ, with the following expected values and variances: E(μ̂₁) = μ, V(μ̂₁) = 9; E(μ̂₂) = μ + 1, V(μ̂₂) = 1. We also know that the covariance between the two estimators is Cov(μ̂₁, μ̂₂) = −2. Now consider a new estimator that combines the two previous ones: μ̂₃ = (1/4)μ̂₁ + (3/4)μ̂₂. Then the variance V(μ̂₃) of μ̂₃ is

้€‰้กน
A.3
B.๐œŽ 2 -2
C.1.125
D.๐œŽ 2 + 1.125
E.2.25
F.0.375
ๆŸฅ็œ‹่งฃๆž

ๆŸฅ็œ‹่งฃๆž

ๆ ‡ๅ‡†็ญ”ๆกˆ
Please login to view
ๆ€่ทฏๅˆ†ๆž
We start by identifying the given quantities for the two estimators: V(μ̂₁) = 9, V(μ̂₂) = 1, Cov(μ̂₁, μ̂₂) = −2, and the new estimator μ̂₃ = (1/4)μ̂₁ + (3/4)μ̂₂. The variance of a linear combination is Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab·Cov(X, Y). Applying this:
- The weight for μ̂₁ is a = 1/4, so a²V(μ̂₁) = (1/16) × 9 = 9/16 = 0.5625.
- The weight for μ̂₂ is b = 3/4, so b²V(μ̂₂) = (9/16) × 1 = 9/16 = 0.5625.
- The cross term is 2ab·Cov(μ̂₁, μ̂₂) = 2 × (1/4) × (3/4) × (−2) = −0.75.
Summing: V(μ̂₃) = 0.5625 + 0.5625 − 0.75 = 0.375, which matches option F. Note that the bias of μ̂₂ (E(μ̂₂) = μ + 1) affects the expected value of μ̂₃, not its variance.
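The arithmetic above can be double-checked in a few lines of Python using exact fractions (standard library only):

```python
from fractions import Fraction

# Weights and moments given in the problem.
a, b = Fraction(1, 4), Fraction(3, 4)
var1, var2, cov12 = 9, 1, -2

# Var(a*X + b*Y) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
var3 = a**2 * var1 + b**2 * var2 + 2 * a * b * cov12
print(var3)          # 3/8
print(float(var3))   # 0.375
```

Working in `Fraction` avoids any floating-point rounding in the intermediate terms.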

็™ปๅฝ•ๅณๅฏๆŸฅ็œ‹ๅฎŒๆ•ด็ญ”ๆกˆ

ๆˆ‘ไปฌๆ”ถๅฝ•ไบ†ๅ…จ็ƒ่ถ…50000้“่€ƒ่ฏ•ๅŽŸ้ข˜ไธŽ่ฏฆ็ป†่งฃๆž,็Žฐๅœจ็™ปๅฝ•,็ซ‹ๅณ่Žทๅพ—็ญ”ๆกˆใ€‚

็ฑปไผผ้—ฎ้ข˜

Consider a population with mean μ and variance σ² < ∞. Assume the following two estimators μ̂₁ and μ̂₂ for the population mean μ, with the following expected values and variances: E(μ̂₁) = μ, V(μ̂₁) = 5; E(μ̂₂) = μ + 1, V(μ̂₂) = 2. We also know that the covariance between the two estimators is Cov(μ̂₁, μ̂₂) = −1. Now consider a new estimator that combines the two previous ones: μ̂₃ = (1/3)μ̂₁ + (2/3)μ̂₂. Then the variance V(μ̂₃) of μ̂₃ is
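This variant is solved with the same linear-combination formula, just with different weights and moments; a minimal sketch:

```python
from fractions import Fraction

# Weights and moments for this variant of the question.
a, b = Fraction(1, 3), Fraction(2, 3)
var1, var2, cov12 = 5, 2, -1

# Var(a*X + b*Y) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
var3 = a**2 * var1 + b**2 * var2 + 2 * a * b * cov12
print(var3)  # 1
```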

Consider the likelihood of an i.i.d. sample from a Bernoulli population with parameter p: L(x₁, …, x_T) = ∏ₜ₌₁ᵀ p^(xₜ) (1 − p)^(1−xₜ). If you estimate the parameter p using a Maximum Likelihood estimator, you obtain the point estimate p̂ = (1/T) ∑ₜ₌₁ᵀ xₜ, which corresponds to the sample mean. We know that for a Bernoulli random variable the expected value and the variance are E(xₜ) = p, V(xₜ) = p(1 − p). Using this information, what is the variance of the estimator, V(p̂)?
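Since p̂ is a sample mean of i.i.d. terms, the standard result V(p̂) = p(1 − p)/T applies; a quick Monte Carlo sanity check (the parameter values below are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, reps = 0.3, 50, 200_000

# Draw `reps` i.i.d. Bernoulli samples of size T; each row's mean is one p_hat.
p_hat = rng.binomial(1, p, size=(reps, T)).mean(axis=1)

empirical = p_hat.var()
theoretical = p * (1 - p) / T  # V(p_hat) = p(1-p)/T
print(empirical, theoretical)
```

The empirical variance of the simulated estimates should sit very close to p(1 − p)/T.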

ไฝ็ฝฎ2็š„้—ฎ้ข˜ The variance of the estimator for E[Y]E\left\lbrack Y\right\rbrack at a given point x0x_0 decreases as the sample size increases.The variance of the estimator for E[Y]E\left\lbrack Y\right\rbrack at a given point x0x_0 decreases as the sample size increases.TrueFalse้ข˜็›ฎ่งฃๆž


ๆ›ดๅคš็•™ๅญฆ็”Ÿๅฎž็”จๅทฅๅ…ท

ๅŠ ๅ…ฅๆˆ‘ไปฌ๏ผŒ็ซ‹ๅณ่งฃ้” ๆตท้‡็œŸ้ข˜ ไธŽ ็‹ฌๅฎถ่งฃๆž๏ผŒ่ฎฉๅคไน ๅฟซไบบไธ€ๆญฅ๏ผ