Fundamentals of Statistics
Homework 11
We want to model the rate of infection with an infectious disease as a function of the day t after the outbreak. Denote the recorded number of infections on day t by k_t. We model the distribution of k_t as a Poisson distribution with a time-varying parameter λ_t, which is a common assumption when handling count data.

First, recall the likelihood of a Poisson-distributed random variable Y in terms of the parameter λ:

$$P(Y = k) = e^{-\lambda}\,\frac{\lambda^{k}}{k!}.$$

Rewrite this in terms of an exponential family. In other words, write it in the form

$$P(Y = k) = h(k)\,\exp\big[\eta(\lambda)\,T(k) - B(\lambda)\big].$$

Since this representation is only unique up to re-scaling by constants, take the convention that T(k) = k.
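One way to see the factorization (a quick sketch using one common labeling of the pieces, not necessarily the exact graded format): pulling k into the exponent gives

$$P(Y = k) = \frac{1}{k!}\,\exp\big[k\ln\lambda - \lambda\big],$$

which matches the template with h(k) = 1/k!, η(λ) = ln λ, T(k) = k, and B(λ) = λ.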


Homework 10
Suppose we have n observations (X_i, Y_i), where Y_i ∈ R is the dependent variable, X_i ∈ R^p is the p×1 column vector of deterministic explanatory variables, and the relation between Y_i and X_i is given by

$$Y_i = X_i^T \beta + \epsilon_i, \qquad i = 1, \dots, n,$$

where the ϵ_i are i.i.d. N(0, σ^2). As usual, let X denote the n×p matrix whose rows are X_i^T. Unless otherwise stated, assume X^T X = τI and that τ, σ^2 are known constants.

Recall that under reasonable assumptions (which are certainly satisfied in linear regression with Gaussian noise), the Fisher information of a parameter θ given a family of distributions P_θ can be computed via the following formula:

$$I(\theta) = -\sum_{i=1}^n E\big[\mathbf{H}_\theta \ln f(Y_i;\theta)\big],$$

where H_θ denotes the Hessian differentiation operator with respect to θ. (Recall the definition in Lecture 9.)

In terms of X and σ^2, compute the Fisher information I(β) of β.
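A sketch of how the formula above applies in this Gaussian linear model: up to an additive constant not depending on β, the per-observation log-density is

$$\ln f(Y_i;\beta) = -\frac{(Y_i - X_i^T\beta)^2}{2\sigma^2} + \text{const}, \qquad \mathbf{H}_\beta \ln f(Y_i;\beta) = -\frac{X_i X_i^T}{\sigma^2},$$

so summing over i and taking expectations gives I(β) = (1/σ^2) ∑_i X_i X_i^T = X^T X / σ^2, which equals (τ/σ^2) I under the stated assumption X^T X = τI.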


Midterm Exam 2
For x∈R and θ∈(0,1), define
$$f_\theta(x) = \begin{cases} \theta^2 & \text{if } -1 \le x < 0 \\ 1 - \theta^2 & \text{if } 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$
Let X_1,…,X_n be i.i.d. random variables with density f_θ, for some unknown θ∈(0,1).
Let a be the number of X_i which are negative (X_i<0) and b be the number of X_i which are non-negative ( X_i≥0 ). (Note that the total number of samples is n=a+b and be careful not to mix up the roles of a and b.)
What is the maximum likelihood estimator $\hat{\theta}^{\text{MLE}}$ of θ?
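A sketch of the likelihood maximization under this setup (one standard route, not necessarily the expected answer format): each negative observation contributes a density factor θ^2 and each non-negative one a factor 1 − θ^2, so

$$L(\theta) = (\theta^2)^a (1-\theta^2)^b, \qquad \frac{d}{d\theta}\ln L(\theta) = \frac{2a}{\theta} - \frac{2b\,\theta}{1-\theta^2} = 0 \;\Longrightarrow\; \theta^2 = \frac{a}{a+b} = \frac{a}{n},$$

so the critical point on (0, 1) is $\hat{\theta} = \sqrt{a/n}$.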

Homework 9
Practice with Priors
One side concept introduced in the second Bayesian lecture is the conjugate prior. Simply put, a prior distribution π(θ) is called conjugate to the data model, given by the likelihood function L(X_i∣θ), if the posterior distribution π(θ∣X_1,X_2,…,X_n ) is part of the same distribution family as the prior.
This problem will give you some more practice on computing posterior distributions, where we make use of the proportionality notation. It would be helpful to try to think of computations in forms that are reduced as much as possible, as this will help with intuition towards assessing whether a prior is conjugate.
This problem makes use of the Gamma distribution (written Gamma(k, θ)), a probability distribution with parameters k > 0 and θ > 0, support on (0, ∞), and density given by $f(x) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\,\theta^k}$. Here, $\Gamma(k) = \int_0^\infty t^{k-1} e^{-t}\, dt$ is the Euler Gamma function.
(a)
Suppose we have the prior π(λ) ∼ Exp(a) (where a > 0), and conditional on λ, we have observations X_1, X_2, …, X_n that are i.i.d. Exp(λ). Compute the posterior distribution π(λ | X_1, X_2, …, X_n).
The posterior distribution for λ is a Gamma distribution. What are its parameters? Enter your answer in terms of a, n, and $\sum_{i=1}^n X_i$.
(Enter Sigma_i(X_i) for $\sum_{i=1}^n X_i$. Do not worry if the parser does not render properly; the grader works independently. If you wish to have proper rendering, enclose Sigma_i(X_i) in brackets.)
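A sketch of the proportionality computation (assuming Exp(a) denotes the rate-a exponential, i.e. a prior density proportional to e^{-aλ}; adjust if your convention differs):

$$\pi(\lambda \mid X_1,\dots,X_n) \;\propto\; \underbrace{e^{-a\lambda}}_{\text{prior}}\;\cdot\;\underbrace{\lambda^{n} e^{-\lambda \sum_{i=1}^n X_i}}_{\text{likelihood}} \;=\; \lambda^{n}\, e^{-\lambda\left(a + \sum_{i=1}^n X_i\right)},$$

which is the kernel of a Gamma density; matching it to the Gamma density stated above identifies the two parameters in terms of a, n, and $\sum_{i=1}^n X_i$.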


Homework 8
Let X_1, …, X_n be i.i.d. copies of X ∼ P for some unknown distribution P with continuous cdf F. Below we describe a χ^2 test for the null and alternative hypotheses
$$H_0: P \in \{N(\mu,\sigma^2)\}_{\mu\in\mathbb{R},\,\sigma^2>0}, \qquad H_1: P \notin \{N(\mu,\sigma^2)\}_{\mu\in\mathbb{R},\,\sigma^2>0}.$$
We divide the sample space into 5 disjoint subsets referred to as bins:
A_1=(-∞,-2),
A_2=(-2,-0.5),
A_3=(-0.5,0.5),
A_4=(0.5,2),
A_5=(2,∞)
Now, define discrete random variables Y_i as functions of X_i by
Y_i = k if X_i ∈ A_k.
For example, if X_i=0.1, then X_i∈A_3 and so Y_i=3. In other words, Y_i is the label of the bin that contains X_i.
By the definition above,
Y_1, …, Y_n are i.i.d. copies of Y,
and Y follows the multinomial distribution on {1, 2, 3, 4, 5} with (vector) parameter p = (p_1, p_2, p_3, p_4, p_5) ∈ Δ_5, where p_j denotes the probability that Y = j.
Assume the following special case of the null hypothesis holds:
X_1, …, X_n are i.i.d. N(0, 1).
What is the vector parameter p∈Δ_5 of the multinomial distribution followed by Y_i ? Fill in the first three entries p_1,p_2,p_3 below.
(Enter Phi(x) for the cdf Φ(x) of a standard normal distribution, e.g. type Phi(1) for Φ(1), or enter your answers accurate to 3 decimal places)
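Since this special case fixes the distribution to N(0, 1), each p_j is just the standard normal probability of the corresponding bin. A minimal numerical check (a sketch using scipy, not part of the graded answer; the bin edges below are taken from the definition above):

```python
from scipy.stats import norm

# Bin edges for A_1, ..., A_5 as defined above:
# (-inf, -2), (-2, -0.5), (-0.5, 0.5), (0.5, 2), (2, inf)
edges = [float("-inf"), -2.0, -0.5, 0.5, 2.0, float("inf")]

# Under X ~ N(0, 1), p_j = Phi(upper edge) - Phi(lower edge) for bin A_j
p = [norm.cdf(hi) - norm.cdf(lo) for lo, hi in zip(edges[:-1], edges[1:])]

for j, pj in enumerate(p, start=1):
    print(f"p_{j} = {pj:.3f}")
```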


Homework 7
Let X_1, …, X_n be i.i.d. copies of X ∼ N(μ_1, σ_1^2). Consider the null and alternative hypotheses
H_0:μ_1=5
H_1: μ_1 ≠ 5

Assume that μ_1 is not known, but σ_1^2 is known. The test statistic T_n′ for the likelihood ratio test associated to the above hypotheses can be expressed in terms of n, $\bar{X}_n$, and σ_1^2.
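For reference, a sketch of the computation behind such a statistic (the exact normalization and naming should follow the course's conventions): with σ_1^2 known, the unconstrained MLE of μ_1 is $\bar{X}_n$, and

$$-2\ln\Lambda_n \;=\; -2\ln\frac{\sup_{\mu_1 = 5} L_n(\mu_1)}{\sup_{\mu_1 \in \mathbb{R}} L_n(\mu_1)} \;=\; \frac{n(\bar{X}_n - 5)^2}{\sigma_1^2},$$

which indeed depends only on n, $\bar{X}_n$, and σ_1^2.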


Homework 6: Introduction to Hypothesis Testing
Assume you observe X = 1.32. What is the value of your test ψ_α with level α = 0.05?


Midterm Exam 1
The lifetime (in months) of a battery is modeled by a random variable X that has pdf
f_θ(x) = Kθ^x 1(x > 0), where K = ln(1/θ),
for an unknown parameter θ∈(0,1). (Here 1(x>0) is the indicator variable that takes value 1 when its argument is true, i.e. when x>0.)
Assume that we have n independent observations X_1,…,X_n of the lifetime of n batteries of the same type. We want to use these observations to estimate θ∈(0,1).
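As a quick sanity check on the model (a side computation, not one of the exam questions): the constant K = ln(1/θ) is exactly the normalization that makes f_θ integrate to 1, since for θ ∈ (0, 1),

$$\int_0^\infty \theta^x\, dx = \int_0^\infty e^{x\ln\theta}\, dx = \frac{-1}{\ln\theta} = \frac{1}{\ln(1/\theta)}.$$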

Homework 5: Maximum Likelihood Estimation Review, Method of Moments
For each of the following distributions, give the method of moments estimator in terms of the sample averages $\bar{X}_n$ and $\overline{X_n^2}$, assuming we have access to n i.i.d. observations X_1, …, X_n. In other words, express the parameters as functions of E[X_1] and E[X_1^2] and then apply these functions to $\bar{X}_n$ and $\overline{X_n^2}$.
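For instance (a generic illustration with a distribution chosen here for concreteness, not necessarily one of the distributions graded below): for N(μ, σ^2) one has E[X_1] = μ and E[X_1^2] = μ^2 + σ^2, so solving for the parameters and substituting the sample averages gives

$$\hat{\mu}^{\text{MM}} = \bar{X}_n, \qquad \widehat{\sigma^2}^{\text{MM}} = \overline{X_n^2} - \bar{X}_n^2.$$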


Homework 4: Maximum Likelihood Estimation, EM Algorithm (Univariate)
We want to compute the asymptotic variance of $\hat{\theta}$ via two methods.
In this problem, we apply the Central Limit Theorem and the 1-dimensional Delta Method. We will compare this with the approach using the Fisher information next week.
First, compute the limit and asymptotic variance of $\overline{X_n^2}$.
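A sketch of the general shape of this step (stated generically, since the specific distribution of the X_i is not reproduced in this excerpt): $\overline{X_n^2}$ is the sample average of the i.i.d. variables X_i^2, so by the law of large numbers and the Central Limit Theorem,

$$\overline{X_n^2} \xrightarrow{P} E[X_1^2], \qquad \sqrt{n}\left(\overline{X_n^2} - E[X_1^2]\right) \xrightarrow{(d)} N\big(0,\ \mathrm{Var}(X_1^2)\big),$$

after which the 1-dimensional Delta Method transfers this limit to $\hat{\theta}$ expressed as a function of $\overline{X_n^2}$.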

Homework 3: TV distance, KL-Divergence, and Introduction to MLE
What can we do when we have prior knowledge about the parameter? Imagine that an expert told you that the parameter θ lies between a and b. Would that additional knowledge change the MLE calculation? We will start by calculating just the standard MLE. We will then think about what we can do with this additional knowledge in part (c).
Let X_1,…,X_n be n i.i.d. random variables with probability density function
$$f_\theta(x) = \theta\, x^{-\theta - 1}, \qquad \theta > 0,\ x \ge 1.$$
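A sketch of the standard (unconstrained) MLE computation referred to above; the constrained analysis with θ ∈ [a, b] in part (c) builds on it:

$$\ell_n(\theta) = n\ln\theta - (\theta+1)\sum_{i=1}^n \ln X_i, \qquad \ell_n'(\theta) = \frac{n}{\theta} - \sum_{i=1}^n \ln X_i = 0 \;\Longrightarrow\; \hat{\theta} = \frac{n}{\sum_{i=1}^n \ln X_i}.$$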


Homework 2: Statistical Models, Estimation, and Confidence Intervals
Find an interval I_θ (that depends on θ) centered about $\bar{X}_n$ such that P(I_θ ∋ θ) = 0.9 for all n (i.e., not only for large n).
(Write barx_n for $\bar{X}_n$. Use the estimate q_0.05 ≈ 1.6448 for best results.)

Homework 1: Estimation, Confidence Interval
Let X_1,…,X_n be i.i.d. uniform random variables in [0,θ], for some θ>0. Denote by
$$M_n = \max_{i=1,\dots,n} X_i.$$
Compute the following probabilities:
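The specific probabilities are not reproduced here, but the standard starting point (a sketch) is the exact cdf of M_n: for x ∈ [0, θ], the events {X_i ≤ x} are independent, so

$$P(M_n \le x) = P(X_1 \le x, \dots, X_n \le x) = \prod_{i=1}^n P(X_i \le x) = \left(\frac{x}{\theta}\right)^{\!n}.$$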


Homework 0
Moments of Gaussian random variables
Let X be a Gaussian random variable with mean μ and variance δ². Compute the following moments. Remember that we use the terms Gaussian random variable and normal random variable interchangeably. (Enter your answers in terms of μ and δ.)
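A quick way to sanity-check hand-computed raw moments numerically (a sketch using scipy; the values of μ and δ below are arbitrary example inputs, with δ playing the role of the standard deviation):

```python
from scipy.stats import norm

mu, delta = 1.5, 2.0           # hypothetical example values for the mean and standard deviation
X = norm(loc=mu, scale=delta)  # X ~ N(mu, delta^2)

# Raw (non-central) moments E[X^k] for k = 1, ..., 4, computed by scipy
for k in range(1, 5):
    print(f"E[X^{k}] = {X.moment(k):.4f}")
```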


Want to know more ... | Can't wait to score? | Want to get admitted to MIT? | Subscribe to score!!!
Sign up now and get more than 50% off the regular price!
We won't simply tell you that we are your best tutor and the answer key to your exams and studies; theexamhelper has value beyond scoring well in your MITx work.
In fact, if you want to earn the credential, avoid wasting money on your path to admission into MIT's SCM program, and have the best learning experience possible, then you need to use theexamhelper to its full potential. That applies to the core materials as well as the supplemental materials, wherever theexamhelper's Solution Key provides explanations and solutions.
What Are the Benefits of Using theexamhelper's Solution Key?
There are several main benefits to following this process for completing and reviewing your work:
- Enhanced Understanding of the Concepts Covered
- Improved Self-teaching Skills
- Advanced Progress Tracking
- Get high scores for your exams
- Become a Super Learner
- Get admitted into MIT's Master of Applied Science in Supply Chain Management
Special offer, for a limited time: 50% off. Why wait? Pay now or pay later, you get the same solutions. Sign up now and enjoy 50% off while the offer lasts!