Lately, in the media coverage and commentary around AlphaGo, I keep seeing remarks about what artificial intelligence can or will do, but I keep feeling that the way machine learning works today is not the same thing as AI in its original definition (or in the sense the term carries in popular culture).
I first became aware of this distinction here: http://dustycloud.org/blog/sussman-on-ai/
At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI-driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court." (I know, that definitely sounds like out-there science fiction, bear with me... keeping that frame of mind is useful for the rest of this.)
Speaking at a high level of abstraction, you could say Deep Learning mimics how humans learn, but it is still fundamentally based on probability and statistics. A more apt description is learning the "what" without the "why". More importantly, continuing down this road can only improve the ability to "learn"; it will not develop the capacity for logical reasoning.
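Sussman's "accountable software" contrast can be sketched in a few lines of Python (a hypothetical toy for illustration, not anyone's actual system): a statistical model returns only an answer, while a rule-based system can also return the rule that produced the answer.

```python
# Toy contrast between the two paradigms discussed above.
# All names and numbers here are made up for illustration.

def statistical_classifier(features, weights):
    """A statistical 'black box': it returns a decision derived from
    learned correlations, but it cannot say *why* it decided."""
    score = sum(f * w for f, w in zip(features, weights))
    return "brake" if score > 0.5 else "continue"

def symbolic_classifier(obstacle_ahead, speed):
    """An 'accountable' system: every decision carries the rule that
    produced it, so it can explain itself after the fact."""
    if obstacle_ahead:
        return "brake", "rule: obstacle detected ahead, so brake"
    if speed > 120:
        return "brake", "rule: speed over the limit, so brake"
    return "continue", "rule: no braking condition matched"

# The black box answers, but offers no account of its reasoning:
print(statistical_classifier([0.9, 0.2], [0.8, 0.1]))  # -> brake

# The symbolic system answers *and* reports the rule that fired:
decision, reason = symbolic_classifier(obstacle_ahead=True, speed=60)
print(decision, "|", reason)
```

The point of the sketch: you could "take the symbolic system to court" because it keeps a trace of its reasoning, whereas the statistical one only has weights.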
1
bdbai 2016-03-13 00:45:49 +08:00 via iPhone
A lot of people start worrying needlessly the moment they hear that AlphaGo can "learn". What they are describing is strong AI, and that has not yet been achieved.
2
laoyuan 2016-03-13 11:01:52 +08:00
Why can't it? For example, have Deep Learning write programs: have it write a program that knows what it itself is learning.
To understand the world is to understand yourself, because you yourself are part of the world.