Machine learning in layman’s terms
Imagine you had an infinite database containing all facts about the world. It could answer every possible question, if only it were real. Alas, nobody knows all the facts, and computers have limited memory anyway.
However, we can aim for an approximation of such a database. Things in the world are connected and transition gradually from one to another. If we know the answer to some specific question, the answer to a similar question will also be similar. This thinking is based on empirical evidence, and it seems to work fine when modelling the universe at large scales. At small scales, down in quantum physics, there are indications that it might not hold (hence all the "free will" debate).
Imagine the world is a pond of water. We have measured the temperature at certain points. Knowing that temperature changes gradually from point to point, we can make reasonable guesses about the temperature at unmeasured points.
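To make the pond guess concrete, here is a toy sketch in Python (my own illustration with made-up coordinates and temperatures, not any standard library routine): inverse-distance weighting, where the guess at an unmeasured point is a weighted average of the measured points, and nearer points count more.

```python
import math

# (x, y) coordinates of measured points in the pond, and the
# temperature read at each one (all values are made up for the example)
measurements = [((0.0, 0.0), 18.0), ((4.0, 0.0), 22.0), ((0.0, 3.0), 20.0)]

def guess_temperature(x, y):
    """Guess temperature at (x, y) by inverse-distance weighting."""
    weights = []
    for (mx, my), temp in measurements:
        d = math.hypot(x - mx, y - my)
        if d == 0:
            return temp  # exactly on a measured point: no guessing needed
        weights.append((1.0 / d, temp))
    total = sum(w for w, _ in weights)
    return sum(w * t for w, t in weights) / total

# The guess always lands somewhere between the measured extremes
print(guess_temperature(2.0, 1.0))
```

Nothing deep is happening here: the closer a measured point is, the more it pulls the guess toward its own value, which is exactly the "temperature changes gradually" assumption turned into arithmetic.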
That’s basically what ML does; in mathematics it’s called interpolation.
Digging a bit deeper, there are many interpolation algorithms, and which one works best depends on your model. These algorithms are very simple, and in general, if you are modelling a nontrivial problem, they won’t solve anything unless you have enough data points.
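The "enough data points" caveat can be shown with a few lines of plain Python (a sketch with values I chose for illustration): piecewise-linear interpolation of sin(x) works well when the samples are dense, and misses the shape entirely when they are sparse.

```python
import math

def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation between sorted sample points."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x is outside the sampled range")

# Dense sampling of sin(x): the interpolated value is very close to the truth
dense_x = [i * 0.1 for i in range(64)]
dense_y = [math.sin(x) for x in dense_x]
print(abs(linear_interp(dense_x, dense_y, 1.57) - math.sin(1.57)))

# Sparse sampling at multiples of pi: every sample is ~0, so the
# interpolant is flat and misses the peaks of sin(x) completely
sparse_x = [i * math.pi for i in range(7)]
sparse_y = [math.sin(x) for x in sparse_x]
print(linear_interp(sparse_x, sparse_y, 1.57))
```

The algorithm is identical in both runs; only the density of data changes, and that alone decides whether the model is useful.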
In a sense, ML is glorified interpolation. Real-world ML algorithms usually don’t take your data points as-is: they compress the data into internal structures (neural nets, matrices, etc.), which causes some loss of accuracy but makes the model a lot smaller. That compression step is what’s called “learning”.
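As a minimal illustration of that compression idea (a deliberately simple stand-in for a neural net, with synthetic data I generated for the example), here is a least-squares line fit: a thousand noisy points get compressed into just two numbers, a slope and an intercept, at some cost in accuracy.

```python
import random

random.seed(0)
# 1000 noisy data points roughly following y = 2x + 1
xs = [i / 100 for i in range(1000)]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]

# "Learning": compress 1000 points into 2 parameters via least squares
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Close to 2 and 1, but not exactly: the compression is lossy
print(slope, intercept)
```

The fitted line can then answer questions about x values it never saw, which is the interpolation part, while storing two floats instead of a thousand points, which is the compression part.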
So here it is, the secret about ML. My hope is that this description helps those interested in understanding what ML can solve in general. If it felt too short, feel free to re-read it 5x in a row.