Thu Jan 28, 2016
Today I saw a post from Mark Zuckerberg about his project to automate his home. It is an impressive project that could well change the future of human living. Along with the post were arguments that common sense may be obtained through unsupervised learning. Personally, I do not buy these arguments, at least not for unsupervised learning in its current form.
The intuition that a model can obtain common sense through an unsupervised process is not straightforward to me. If that intuition comes from how we human beings obtain common sense, then a large portion of our common sense comes from a well-designed and strongly supervised form of teaching and learning in school. Even prior to school, we learn via a feedback process with the world we interact with, and at the same time under strongly supervised 'protection' from our parents. There are many, many different feedback signals available to us.
On the other hand, if we think about the entire knowledge of the human race, common sense seems to arise unsupervisedly, in the sense that we generated it over thousands of years of evolution. But that relies on the fact that the human race is not a single individual: it consists of many, many individuals, and only through this complicated dynamic system do we form what we today call common sense. It therefore seems to miss the point to ask a single unsupervised model to generate all of common sense.
However, I also feel that this seemingly unsupervised process of evolution is not feedback-free. Any living creature has to strive to maintain its low-entropy state (a structured state consisting of cells and their internal orderly organization), and to release high-entropy waste into the environment via metabolism. Massive amounts of energy are consumed by such a creature in the process. It is through the drive to better maintain this low-entropy state that we evolve. Of course, machines do not have to do this; they just have to make 'serving humans' their ultimate goal.
But the question is, how do we design a race of machines such that they evolve to serve us better? And if this is possible, will the evolving machine population produce something similar to the phenomenon of 'common sense'?
P.S. Ultimately I feel it is meaningless to argue about whether something is supervised or unsupervised. In the current form of machine learning one always needs to do optimization, and therefore one needs to design an objective. We can always argue that the objective is a kind of supervision signal. The mindset of labeling something as 'unsupervised' more or less misleads us into thinking less about the mechanism by which some intelligent phenomena really happen.