
The first exercises

1. Counting theory:
We may all remember that, given n objects, the number of ways of ordering them is:
n! = n × (n-1) × (n-2) × ... × 3 × 2 × 1.

The binomial coefficient "n choose k" is the number of distinct ways of choosing k objects from n: C(n, k) = n! / (k! * (n-k)!).

For example:
If we want to form a group of 3 students out of 20 students, there are
20! / (3! * 17!) = 1140 possible groups.
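
These formulas are easy to check numerically; here is a small sketch using only the Python standard library:

from math import factorial, comb

print(factorial(5))                                     # 5! = 120 orderings of 5 objects
print(comb(20, 3))                                      # "20 choose 3" = 1140
print(factorial(20) // (factorial(3) * factorial(17)))  # same value from the formula: 1140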

2. Probability
- First problem: find the discrete probability distribution for the sum of two dice (a short enumeration sketch is given after this list).


- Second problem: Two people take turns trying to sink a basketball into a net. Person 1 succeeds with probability 1/3, person 2 with 1/4. What is the probability that person 1 succeeds before person 2?
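
For the first problem, a minimal sketch (assuming two fair six-sided dice) that enumerates the 36 equally likely outcomes and tallies the probability of each sum:

from fractions import Fraction
from collections import Counter

# Count how many of the 36 ordered outcomes produce each sum.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
for total in range(2, 13):
    print(total, Fraction(counts[total], 36))  # e.g. the sum 7 has probability 6/36 = 1/6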

Comments

  1. Solution for problem 2:

    Let E be the event that person 1 succeeds before person 2.

    Let Aj be the event that person 1 succeeds before person 2, with this first success happening in round j (both players miss in rounds 1 through j-1).

    Since the events Aj are disjoint, P(E) = P(A1) + P(A2) + P(A3) + ...

    P(A1): person 1 succeeds on his first attempt, so P(A1) = 1/3.
    P(A2): person 1 misses, person 2 misses, then person 1 succeeds, so P(A2) = 2/3 * 3/4 * 1/3 = 1/2 * 1/3.
    In general, both players miss for j-1 rounds, each such round having probability 2/3 * 3/4 = 1/2, and then person 1 scores:
    P(Aj) = (1/2)^(j-1) * 1/3

    Therefore,
    P(E) = sum over j >= 1 of (1/2)^(j-1) * 1/3 = (1/3) * 1/(1 - 1/2) = 2/3.

    Note:
    We used the geometric series sum_{j=k}^{infinity} r^j = r^k / (1 - r), valid for |r| < 1.
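
    As a sanity check (not part of the original solution), here is a small Monte Carlo sketch of the game, assuming person 1 shoots first and the two players alternate until someone scores:

    import random

    def person1_wins(p1=1/3, p2=1/4):
        # Simulate one game; return True if person 1 scores before person 2.
        while True:
            if random.random() < p1:   # person 1's attempt
                return True
            if random.random() < p2:   # person 2's attempt
                return False

    trials = 200_000
    wins = sum(person1_wins() for _ in range(trials))
    print(wins / trials)  # should be close to 2/3 ≈ 0.667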


