Summarizing the Neural Networks course in 6 points

I completed the Neural Networks and Deep Learning course by Andrew Ng on Coursera.

My Goal: I took the course to understand what neural networks are, what deep learning is, and to get hands-on experience building a neural network.

I wrote this blog post to concretize my understanding of the neural network concepts I learnt and to share some insights with people like you who are planning to learn neural networks.

By reading this post, you will:

  1. Be able to decide whether you should enroll in the course. Of course, this post should not be the sole reason for your decision.
  2. Understand the basics of a neural network: the 6 points mentioned in this post are key elements of any neural network. Even if you do not enroll in the course, reading this will give you a basic understanding of neural networks.

So, here are the 6 things I learnt about neural networks in the course.

  1. What is a neural network

It is a methodology used in machine learning and AI to overcome the limitations of simple algorithms.
Data science is all about finding patterns in data and creating mathematical models of those patterns. In a neural network, the basic idea is to stack multiple layers together so that the algorithm can better learn the pattern that defines the underlying object in the training dataset.
Each layer is a mathematical function: it processes its input and produces an output, which becomes the input to the next layer, and so on.
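The idea of layers as composed functions can be sketched in a few lines of numpy. This is a minimal illustration, not code from the course; the shapes and the choice of ReLU as the nonlinearity are my own assumptions.

```python
import numpy as np

def layer(x, W, b):
    """One layer: a linear map (W @ x + b) followed by a nonlinearity (ReLU)."""
    return np.maximum(0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                            # input: 4 features
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)     # layer 1 parameters
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)     # layer 2 parameters

h = layer(x, W1, b1)   # the output of layer 1...
y = layer(h, W2, b2)   # ...is the input to layer 2
print(y.shape)         # (2,)
```

Each `layer` call is just a function; stacking more of them gives the network more capacity to fit complex patterns.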

  2. What is the difference between using a neural network and a plain algorithm

  • The training set is processed by more than one layer.
  • In a neural network the algorithm tries to learn on its own, so training has two steps: forward propagation and backward propagation. This reminded me of the feedback amplifier I learnt about in Physics at school.
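The two steps can be seen in the simplest possible case: logistic regression trained with gradient descent. This is a hedged sketch with made-up toy data, not the course's assignment code; the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data (hypothetical, purely for illustration): 3 features, 5 examples.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 5))
Y = (rng.random(5) > 0.5).astype(float)

w, b, lr = np.zeros(3), 0.0, 0.1
m = X.shape[1]
losses = []

for _ in range(100):
    # Forward propagation: compute predictions and the cross-entropy loss.
    A = sigmoid(w @ X + b)
    losses.append(-(Y * np.log(A) + (1 - Y) * np.log(1 - A)).mean())
    # Backward propagation: gradients of the loss with respect to w and b.
    dZ = A - Y
    dw = X @ dZ / m
    db = dZ.mean()
    # Update the parameters -- the "feedback" step.
    w -= lr * dw
    b -= lr * db
```

Forward propagation produces predictions; backward propagation feeds the error back to adjust the parameters, which is what makes the comparison to a feedback amplifier apt.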

  3. What is deep learning

It is a marketing term for a neural network with more than one hidden layer.

  4. What is vectorization

It is all about using pre-compiled C code to operate on an entire array at once, with high performance. Neural networks use numpy arrays, and vectorization is one of the things that makes training neural networks in a reasonable time possible.
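The difference is easy to demonstrate. The sketch below (my own, not from the course) computes the same dot product twice: once with an explicit Python loop and once with a single vectorized call into numpy's compiled code.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Explicit Python loop: interpreted, one element at a time.
t0 = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
loop_time = time.perf_counter() - t0

# Vectorized: one call into numpy's pre-compiled C code.
t0 = time.perf_counter()
total_vec = np.dot(a, b)
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")
```

On typical hardware the vectorized version is orders of magnitude faster, which is exactly why the course assignments push you to replace every explicit loop with an array operation.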

  5. The math behind neural networks

In the lectures and the practice assignments, you get to go through and understand the math used in designing neural networks. It is very exciting to see the math formulae converted into Python code to build the neural network, and it actually works :) The math used in the class includes calculus and matrix calculations, plus basics like slope calculation. A matrix is represented as an array of arrays using numpy.

  6. Broadcasting in Python

This was the most difficult concept to understand in the entire course; the second would be the calculus. Broadcasting is essentially a set of rules numpy uses to operate on arrays whose shapes do not match.
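A small example helps. This sketch (my own, echoing the kind of exercise the course uses) divides a (2, 3) matrix by a (3,) row of column sums; numpy "stretches" the smaller array across the rows so the shapes line up.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # shape (2, 3)
col_sums = A.sum(axis=0)          # shape (3,): [5., 7., 9.]

# Broadcasting: the (3,) array is applied to every row of the (2, 3) matrix,
# so each column entry is divided by that column's own sum.
percentages = 100 * A / col_sums
print(percentages)
```

After the division, every column sums to 100: broadcasting did the per-column normalization without any explicit loop.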

I hope you enjoyed reading the post and learnt the basics of neural networks.
Do share your feedback in the comments section.
