
Parameter Estimation

Fundamentals

Problem Statement

Suppose that the population distribution follows a parametric model $f(x|\theta)$. Given a random sample $X_1, X_2, \dots, X_n$ from the population, with $X_i \sim f(x|\theta)$, estimate the parameter of interest $\theta$.

The basic assumption in parametric estimation is that the population distribution follows some parametric model. Here, parametric models are those of the form:

$$\mathcal{F}=\{f(x|\theta) : \theta\in\Theta\}$$

where $\Theta\subset \mathbb{R}^k$ is the parameter space and $\theta$ is the parameter.

Example

  1. The normal distribution has two parameters: the mean $\mu$ and the standard deviation $\sigma$ (see the sketch below).
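To make the idea of a family indexed by $\theta$ concrete, here is a minimal sketch in Python (the function name `normal_pdf` is mine, not from the original notes): each choice of $\theta=(\mu,\sigma)$ selects one member of the normal family.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density f(x|mu, sigma) of the normal family; theta = (mu, sigma)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Two parameter values pick out two different distributions in the family.
print(normal_pdf(0.0, mu=0.0, sigma=1.0))  # N(0, 1) density at x = 0, ~0.3989
print(normal_pdf(0.0, mu=2.0, sigma=0.5))  # N(2, 0.25) density at x = 0, ~0.0003
```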

Terminology

  1. An estimator $\hat{\theta}$ is a rule for calculating an estimate of a given quantity (a model parameter) from observed data.
  2. An estimate is the fixed value of that estimator on a particular observed sample.
  3. A statistic is a function of the data, e.g. the sample mean.
  4. The population distribution is the distribution $f(x|\theta)$ from which the sample is drawn.
  5. The sampling distribution of a statistic is the distribution of the statistic's values over repeated samples from the population.

Example

  • A poll seeks to estimate the proportion $p$ of adult residents of a city who support building a new sports stadium. Suppose that $n$ is the sample size and $\hat{p}$ is the sample proportion. The rule that calculates the sample proportion is called the estimator of the population proportion, and the actual value of the sample proportion on the observed sample is called the estimate. The sample proportion is a statistic of the sample; note that a statistic need not be associated with any parameter of interest.
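The distinction between estimator and estimate is easy to see in code. Below is a minimal sketch with simulated poll data (the true proportion `p_true` and all names are assumptions for illustration): the function is the estimator, and its value on a particular sample is the estimate.

```python
import random

def sample_proportion(sample):
    """The estimator: a rule mapping any observed sample to a number."""
    return sum(sample) / len(sample)

# Simulated responses: 1 = supports the stadium, 0 = does not.
random.seed(0)
p_true = 0.6  # unknown in practice; assumed here only to generate data
sample = [1 if random.random() < p_true else 0 for _ in range(1000)]

p_hat = sample_proportion(sample)  # the estimate: a fixed value on this sample
print(f"estimate of p: {p_hat:.3f}")
```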

Point Estimation

Point estimation involves using sample data to calculate a single value which serves as the best estimate of an unknown population parameter.

Method of Moments

Let $X_1, X_2, \dots, X_n$ be iid random variables from a parametric model $f(x|\theta)$, where $\theta=(\theta_1,\theta_2, \dots, \theta_k)$ is a vector of $k$ parameters. We are interested in estimating $\theta$.

Moments

  • $\mu_k=E[(X-c)^k]$ is the $k$-th (theoretical) moment of the distribution around $c$, for $k=1,2,\dots$

  • $A_k=\frac{1}{n}\sum_{i=1}^n (X_i-c)^k$ is the $k$-th sample moment around $c$, for $k=1,2,\dots$

"Moments" without qualification usually refers to moments around zero ($c=0$).
For $k>1$ we also use $c=\mu$, which gives the central moments; the second central moment is the variance.
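These definitions translate directly into a small helper; here is a minimal sketch (the function name `sample_moment` is mine):

```python
def sample_moment(xs, k, c=0.0):
    """k-th sample moment of xs around c: (1/n) * sum((x_i - c)^k)."""
    return sum((x - c) ** k for x in xs) / len(xs)

xs = [1, 2, 3, 4, 5]
print(sample_moment(xs, 1))         # first moment around zero (mean): 3.0
print(sample_moment(xs, 2))         # second moment around zero: 11.0
print(sample_moment(xs, 2, c=3.0))  # second central moment (variance): 2.0
```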

Suppose that the first $K$ moments of the population exist. Equating the $K$ theoretical moments to the $K$ sample moments gives us $K$ equations in $K$ unknowns:

$$E(X^k)=\frac{1}{n}\sum_{i=1}^nX_i^k, \quad k=1,2,\dots,K$$

Solving these equations yields the method-of-moments estimators for the $K$ parameters of interest.

Example 1: Method-of-moments estimator for the uniform distribution

Assume that $X \sim U(a,b)$, where $a$ and $b$ are unknown. We obtain the sample $(1,2,3,4,5)$ from the uniform population; find the method-of-moments estimators for $a$ and $b$.

The density function is

\begin{equation} f(x)= \begin{cases} \frac{1}{b-a} & a \leq x \leq b\\ 0 & \text{otherwise} \end{cases} \nonumber \end{equation}

The first theoretical moment:

$$E(X)=\int_a^bxf(x)dx=\frac{x^2}{2(b-a)}\biggr|_a^b=\frac{a+b}{2}$$

The second theoretical moment follows from the variance:

$$E(X^2) = Var(X)+E(X)^2$$

$$Var(X)=\int_a^b\left(x-\frac{a+b}{2}\right)^2\cdot\frac{1}{b-a}\,dx=\frac{(b-a)^2}{12}$$

Equating the theoretical mean and variance to their sample counterparts,

$$\frac{\hat{a}+\hat{b}}{2}=\bar{X}, \qquad \frac{(\hat{b}-\hat{a})^2}{12}=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2$$

and solving gives

$$\hat{a}=\bar{X}-\sqrt{3\hat{\sigma}^2}, \qquad \hat{b}=\bar{X}+\sqrt{3\hat{\sigma}^2},$$

where $\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2$. For the sample $(1,2,3,4,5)$: $\bar{X}=3$ and $\hat{\sigma}^2=2$, so $\hat{a}=3-\sqrt{6}\approx 0.55$ and $\hat{b}=3+\sqrt{6}\approx 5.45$.
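As a numeric check of the result above, here is a minimal Python sketch of the method-of-moments estimator for $U(a,b)$ (the function name `mom_uniform` is mine):

```python
import math

def mom_uniform(xs):
    """Method-of-moments estimates (a_hat, b_hat) for X ~ U(a, b).

    Equates the sample mean to (a+b)/2 and the biased sample variance
    to (b-a)^2 / 12, then solves the two equations for a and b.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n  # second central sample moment
    half_width = math.sqrt(3 * var)             # equals (b_hat - a_hat) / 2
    return mean - half_width, mean + half_width

a_hat, b_hat = mom_uniform([1, 2, 3, 4, 5])
print(a_hat, b_hat)  # ~0.551 and ~5.449
```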
