Working With Big Data
When we work with Big Data we need some way to speed up our algorithms. The common options are to use a lighter algorithm, to use on-line learning, or to distribute the workload among several computers or processors.
A Light Algorithm to Work With Big Data
One way to speed up our application is to use stochastic gradient descent instead of batch gradient descent. It is important to say that stochastic gradient descent brings no advantage on small datasets; it only works better on large ones. The steps we need to follow are:
- Randomly shuffle the dataset
- For i = 1…m
- Update the parameters using only the i-th example
This algorithm does not always reach the global optimum, but it usually gets very close.
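The steps above can be sketched in Python. This is a minimal, illustrative implementation for 1-D linear regression; the function name, learning rate, and epoch count are assumptions, not part of the original notes:

```python
import random

def sgd(data, lr=0.01, epochs=1):
    """Stochastic gradient descent for 1-D linear regression (illustrative sketch)."""
    theta0, theta1 = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)                 # step 1: randomly shuffle the dataset
        for x, y in data:                    # step 2: for i = 1..m
            error = theta0 + theta1 * x - y
            theta0 -= lr * error             # step 3: update using only this example
            theta1 -= lr * error * x
    return theta0, theta1
```

The key contrast with batch gradient descent is inside the inner loop: the parameters move after every single example, instead of after a full pass over all m examples.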
On-line Learning to Work With Big Data
The intuition behind on-line learning is that each new example we receive is used to tune the algorithm. The steps are as follows:
- Repeat forever
- Compute the new parameters based on the new example (x, y)
- Discard the example
On-line learning consumes little memory, because each example is discarded after it is used, and it adapts to changes in user behaviour over time.
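A single on-line update can be sketched as below, here using logistic regression as the model; the function name and the learning rate are illustrative assumptions:

```python
import math

def online_update(theta, x, y, lr=0.1):
    """One on-line learning step for logistic regression (illustrative sketch)."""
    z = sum(t * xi for t, xi in zip(theta, x))
    h = 1.0 / (1.0 + math.exp(-z))  # current prediction for this example
    # update the parameters from this single example; the example is then discarded
    return [t - lr * (h - y) * xi for t, xi in zip(theta, x)]

# the "repeat forever" loop over a stream would look like:
# theta = [0.0, 0.0]
# for x, y in stream:
#     theta = online_update(theta, x, y)
```

Nothing is stored except the parameter vector itself, which is why the memory cost stays constant no matter how many examples arrive.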
Map-Reduce to Work With Big Data
Sometimes we really need all the data, so the only option is a distributed approach known as map-reduce. The intuition is as follows:
1: divide the data into n distinct parts
2: process each part on a different computer
3: merge the partial results
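The three steps above can be sketched with Python's standard ThreadPoolExecutor, where worker threads stand in for the separate computers of a real cluster; the function names and the sum-of-squares workload are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def process_part(chunk):
    # step 2 ("map"): each worker processes its own part of the data;
    # a partial sum of squares stands in for the real workload
    return sum(x * x for x in chunk)

def map_reduce_sum_of_squares(data, n=4):
    # step 1: divide the data into n distinct parts
    parts = [data[i::n] for i in range(n)]
    # each part runs on a different worker (a different machine in a real cluster)
    with ThreadPoolExecutor(max_workers=n) as pool:
        partials = list(pool.map(process_part, parts))
    # step 3 ("reduce"): merge the partial results
    return sum(partials)
```

The approach works whenever the computation can be expressed as independent per-part results that combine with an associative operation, such as a sum.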