Question

BU.232.630.W4.SP25 sample_quiz_3

ๅ•้กน้€‰ๆ‹ฉ้ข˜

Consider the likelihood of an i.i.d. sample from a Bernoulli population with parameter $p$: $L(x_1,\dots,x_T) = \prod_{t=1}^{T} p^{x_t}(1-p)^{1-x_t}$. If you estimate the parameter $p$ using a Maximum Likelihood estimator, you obtain the point estimate $\hat{p} = \frac{1}{T}\sum_{t=1}^{T} x_t$, which corresponds to the sample mean. We know that for a Bernoulli random variable the expected value and the variance are $\mathbb{E}(x_t) = p$ and $\mathbb{V}(x_t) = p(1-p)$. Using this information, what is the variance of the estimator, $\mathbb{V}(\hat{p})$?

Options
A. The variance of $\hat{p}$ is $\mathbb{V}(\hat{p}) = p^2$
B. The variance of $\hat{p}$ is $\mathbb{V}(\hat{p}) = \frac{p(1-p)}{T}$
C. The variance of $\hat{p}$ is $\mathbb{V}(\hat{p}) = p(1-p)$
D. The variance of $\hat{p}$ is $\mathbb{V}(\hat{p}) = \mathbb{E}(p^2)$
E. All the answers are incorrect.

Standard answer
B. $\mathbb{V}(\hat{p}) = \frac{p(1-p)}{T}$
ๆ€่ทฏๅˆ†ๆž
We start from what we know about the Bernoulli($p$) population and the estimator $\hat{p}$. The data $x_1,\dots,x_T$ are i.i.d. Bernoulli($p$), and $\hat{p} = \frac{1}{T}\sum_{t=1}^{T} x_t$ is the sample mean. For a Bernoulli random variable, $\mathbb{E}(x_t) = p$ and $\mathbb{V}(x_t) = p(1-p)$. Since $\hat{p}$ is the average of $T$ independent copies, its variance is $\mathbb{V}(\hat{p}) = \mathbb{V}\!\left(\frac{1}{T}\sum_{t=1}^{T} x_t\right) = \frac{1}{T^2}\sum_{t=1}^{T}\mathbb{V}(x_t)$ by independence, which simplifies to $\frac{1}{T^2}\cdot T \cdot p(1-p) = \frac{p(1-p)}{T}$. This matches option B.
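As a quick sanity check on this result (not part of the original quiz), the following minimal Python sketch simulates many i.i.d. Bernoulli(p) samples of size T, computes the MLE for each, and compares the empirical variance of the estimates with p(1-p)/T; the values of p, T, and the number of replications are arbitrary illustrative choices. With these settings the theoretical value is 0.3 * 0.7 / 50 = 0.0042, and the empirical estimate should land close to it.

import numpy as np

rng = np.random.default_rng(0)
p, T, n_sims = 0.3, 50, 100_000          # illustrative values, not from the quiz

# Each row is one simulated i.i.d. Bernoulli(p) sample of size T.
samples = rng.binomial(1, p, size=(n_sims, T))

# MLE for each simulated sample: the sample mean.
p_hat = samples.mean(axis=1)

print("empirical  Var(p_hat):", p_hat.var())
print("theoretical p(1-p)/T :", p * (1 - p) / T)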

็™ปๅฝ•ๅณๅฏๆŸฅ็œ‹ๅฎŒๆ•ด็ญ”ๆกˆ

ๆˆ‘ไปฌๆ”ถๅฝ•ไบ†ๅ…จ็ƒ่ถ…50000้“่€ƒ่ฏ•ๅŽŸ้ข˜ไธŽ่ฏฆ็ป†่งฃๆž,็Žฐๅœจ็™ปๅฝ•,็ซ‹ๅณ่Žทๅพ—็ญ”ๆกˆใ€‚

Similar questions

Consider a population with mean $\mu$ and variance $\sigma^2 < \infty$. Assume the following two estimators $\hat{\mu}_1$ and $\hat{\mu}_2$ for the mean of the population $\mu$, with the following expected values and variances: $E(\hat{\mu}_1) = \mu$; $V(\hat{\mu}_1) = 5$; $E(\hat{\mu}_2) = \mu + 1$; $V(\hat{\mu}_2) = 2$. We also know that the covariance between the two estimators is $COV(\hat{\mu}_1, \hat{\mu}_2) = -1$. Now consider a new estimator that combines the two previous ones: $\hat{\mu}_3 = \frac{1}{3}\hat{\mu}_1 + \frac{2}{3}\hat{\mu}_2$. Then the variance $V(\hat{\mu}_3)$ of $\hat{\mu}_3$ is

Consider a population with mean $\mu$ and variance $\sigma^2 < \infty$. You are comparing two estimators $\hat{\mu}_1$ and $\hat{\mu}_2$ for the mean of the population $\mu$, with the following expected values and variances: $E(\hat{\mu}_1) = \mu$; $V(\hat{\mu}_1) = 9$; $E(\hat{\mu}_2) = \mu + 1$; $V(\hat{\mu}_2) = 1$. We also know that the covariance between the two estimators is $COV(\hat{\mu}_1, \hat{\mu}_2) = -2$. Now consider a new estimator that combines the two previous ones: $\hat{\mu}_3 = \frac{1}{4}\hat{\mu}_1 + \frac{3}{4}\hat{\mu}_2$. Then the variance $V(\hat{\mu}_3)$ of $\hat{\mu}_3$ is
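As a sketch of the computation these two variants call for, the variance of any linear combination $a\hat{\mu}_1 + b\hat{\mu}_2$ follows from the standard identity below; note that the bias of $\hat{\mu}_2$ (its expectation being $\mu + 1$) does not affect the variance, since adding a constant leaves variance unchanged.

\[
\mathbb{V}(a\hat{\mu}_1 + b\hat{\mu}_2) \;=\; a^2\,\mathbb{V}(\hat{\mu}_1) + b^2\,\mathbb{V}(\hat{\mu}_2) + 2ab\,\mathrm{COV}(\hat{\mu}_1,\hat{\mu}_2).
\]

Plugging in the first variant ($a=\tfrac{1}{3}$, $b=\tfrac{2}{3}$, variances 5 and 2, covariance $-1$) gives $\tfrac{5}{9} + \tfrac{8}{9} - \tfrac{4}{9} = 1$, while the second variant ($a=\tfrac{1}{4}$, $b=\tfrac{3}{4}$, variances 9 and 1, covariance $-2$) gives $\tfrac{9}{16} + \tfrac{9}{16} - \tfrac{12}{16} = \tfrac{3}{8}$.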

ไฝ็ฝฎ2็š„้—ฎ้ข˜ The variance of the estimator for E[Y]E\left\lbrack Y\right\rbrack at a given point x0x_0 decreases as the sample size increases.The variance of the estimator for E[Y]E\left\lbrack Y\right\rbrack at a given point x0x_0 decreases as the sample size increases.TrueFalse้ข˜็›ฎ่งฃๆž

