
Posts

Building a chatbot

Recent posts

Deep learning with Multi-GPUs

Distributed deep learning is a hot topic: depending on the problem and the data you deal with, it can substantially reduce training time compared to a single GPU. However, modifying code to turn a single-GPU program into a multi-GPU one is not always straightforward! In this post, I'm going to talk about training deep learning models on multiple GPUs: why it is challenging, which factors matter when making deep learning distributed, and finally which libraries/frameworks make this process easier. This post is based on the content of the course "Fundamentals of Deep Learning for Multi-GPUs", offered by NVIDIA, which I took recently. The GPU is the platform that makes deep learning accessible, and multiple GPUs are used to further speed up computation, or to handle cases where a single GPU's memory is not sufficient. The first question is: do you need multiple GPUs, or is it enough to train your deep learning models on a single GPU? It
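The core idea behind the most common multi-GPU approach, synchronous data parallelism, can be sketched without any GPU at all. In this hypothetical NumPy toy, each "GPU" computes a gradient on its own shard of the batch, the gradients are averaged (standing in for the all-reduce step), and a single identical weight update is applied everywhere; the problem, learning rate, and shard count are made up for illustration, not taken from the NVIDIA course.

```python
import numpy as np

# Toy linear regression y = 2x with MSE loss, trained by simulated
# synchronous data-parallel SGD across n_gpus replicas.
rng = np.random.default_rng(0)
X = rng.normal(size=64)
y = 2.0 * X
w = 0.0

def shard_gradient(w, xs, ys):
    """Gradient of the MSE loss 0.5*(w*x - y)^2 on one data shard."""
    return np.mean((w * xs - ys) * xs)

n_gpus = 4
shards_x = np.array_split(X, n_gpus)
shards_y = np.array_split(y, n_gpus)

lr = 0.1
for step in range(100):
    # Each replica computes its local gradient (in parallel on real hardware).
    grads = [shard_gradient(w, sx, sy) for sx, sy in zip(shards_x, shards_y)]
    # "All-reduce": average the gradients, then apply the same update
    # on every replica so the model weights stay synchronized.
    w -= lr * np.mean(grads)

print(w)  # converges toward the true weight 2.0
```

With equal-sized shards, averaging the shard gradients reproduces the full-batch gradient exactly, which is why the distributed run matches a single-GPU run on the same batch; the engineering difficulty in practice lies in the communication (all-reduce) cost, not in the math.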

Pytorch and Keras cheat sheets

Parameter estimation

Parameter Estimation

Fundamentals

Problem Statement: Suppose that the population distribution follows a parametric model $f(x|\theta)$. Given a random sample $X_1, X_2, \ldots, X_n$ from the population, $X_i \sim f(x|\theta)$, estimate the parameter of interest $\theta$.

The basic assumption in parametric estimation is that the population distribution follows some parametric model. Here, parametric models are those of the form: $$\mathcal{F}=\{f(x;\theta) : \theta\in\Theta\}$$ where $\Theta\subset \mathbb{R}^k$ is the parameter space and $\theta$ is the parameter.

Example: The normal distribution has two parameters, $\mu$ and $\sigma$.

Terminology: An estimator $\hat{\theta}$ is a rule for calculating an estimate of a given quantity (a model parameter) based on observed data. An estimate is the fixed value of that estimator for a particular observed sample. Statistic