The Missing Link in China's Cloud Computing: GPU Parallel Computing

2016-05-03 Dr. Qi Haijiang, Intelligent Investment Research Institute

Qi Haijiang: Technical Director at Qingdao Wumaiquan Information Co., Ltd.; Ph.D., University of Pennsylvania; M.S., Nanjing University. He has spent many years researching algorithms in graphics and imaging, 3D vision, neural computing, and machine learning.

[IT Times Weekly editor's note] Cloud computing's distinctive advantages and enormous commercial prospects have made it one of the hottest terms in IT in recent years. This is closely tied to the boom in China's mobile internet, which needs correspondingly capable cloud computing services as support. But the author, Qi Haijiang, drawing on his own experience observing China's current cloud services, argues that most domestic cloud providers run an overly simple, extensive "remote server room + big portable hard drive" model that cannot meet the computational demands of parallel graphics processing. They should "recognize the technological trend, integrate cutting-edge computing tools, push cloud GPU parallel computing services forward as quickly as possible, and raise the overall technical level of China's mobile internet." Just how important are cloud GPU parallel computing services? The author offers an accessible, in-depth explanation, and between the lines one also glimpses both the hype around domestic cloud services and how far behind they actually are.

For a long time now, "cloud computing" has been a buzzword. So what exactly is cloud computing? At its core, it is a way of sharing society's intellectual resources: by packaging technology in the cloud, it lowers the barrier to entry, allowing far more users to adopt technologies that were once "very hard and very advanced."

Where can this be applied? China's new mobile internet economy is booming, and it needs cloud computing services of matching technical caliber as its backbone. Yet most of China's cloud providers today use an overly simple, extensive "remote server room + big portable hard drive" model that cannot satisfy the computational demands of parallel graphics processing. Judging by where computing technology is heading — video, audio, and images + 3D + large-scale machine learning + big data analytics => compute-intensive workloads => cloud GPU parallel computing — providers should recognize this trend, integrate cutting-edge computing tools, and roll out cloud GPU parallel computing services as soon as possible. Here is why:

1. Today's graphics, image, and 3D computation is widely used in video games, the film industry, industrial design, medical imaging, space exploration, telecommunications, and more.

As computing technology has advanced, expectations for graphics and image processing have risen steadily. The rise of 3D technology in particular has carried graphics, imaging, and 3D computation into video games, film, medical imaging, space exploration, telecommunications, and many other fields.

Popular large-scale 3D games such as Call of Duty and Need for Speed feature lifelike visuals and intense 3D effects, placing very high demands on a computer's graphics processing power. Avatar, which hit theaters in 2010, pioneered 3D filmmaking in which animated characters stand in for actors, using stereoscopic 3D to create strikingly lifelike, gorgeous imagery. In industrial design, widely known 3D packages include AutoCAD, Maya, and SolidWorks. In medical imaging, 3D/4D volumetric techniques let clinicians capture information that conventional flat displays cannot, reading image data from every angle; this provides richer, more precise material for clinical diagnosis, sharply reduces missed lesions, and improves the quality of care — it is poised to spark a technical revolution in medical image processing.

With the spread of the internet and handheld devices, the volume of data to be processed has exploded, and 3D games are now appearing on phones as well. All of this creates still more demand for graphics, imaging, and 3D computation.

Clearly, today's enormous demand for graphics, imaging, and 3D computation requires computers with strong 3D modeling capability, yet the serial processing power of CPUs falls far short of what efficient image processing and 3D computation require. Parallel computing is therefore being used ever more widely.

2. GPU parallel computing, exemplified by the CUDA toolkit for NVIDIA graphics cards, has become a standard component of workstations, servers, and personal computers.

A GPU is the microprocessor on a graphics card that performs image computation. NVIDIA, the well-known graphics card company, designed a dedicated GPU parallel computing toolkit for its mainstream cards, called CUDA (Compute Unified Device Architecture).

Take the GeForce 8800 GTX as an example: its core contains 128 stream processors. With CUDA, these processors can be harnessed together as thread processors to tackle data-intensive computation, exchanging, synchronizing, and sharing data among themselves. These capabilities are accessed through NVIDIA's C compiler and driver, and the same units can serve as stream processors for application computation. A GeForce 8800 GTX delivers up to 520 GFLOPS of compute, and an SLI configuration can reach 1 TFLOPS.
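To make the thread-processor model above concrete, here is a minimal data-parallel sketch using the Numba CUDA bindings for Python (the library choice, kernel, and array sizes are this editor's illustration, not taken from the article): each GPU thread computes exactly one element of the output, so hundreds of thousands of additions proceed in parallel across the card's processors.

    import numpy as np
    from numba import cuda  # assumes Numba and a CUDA-capable NVIDIA GPU are installed

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)          # this thread's global index
        if i < x.size:            # guard: the grid may be larger than the data
            out[i] = x[i] + y[i]  # one thread handles one element

    n = 1_000_000
    x = np.ones(n, dtype=np.float32)
    y = np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads_per_block = 128
    blocks = (n + threads_per_block - 1) // threads_per_block
    add_kernel[blocks, threads_per_block](x, y, out)  # launch across many GPU threads
    print(out[:5])  # [2. 2. 2. 2. 2.]

The same elementwise pattern, scaled up to pixels, vertices, or matrix tiles, is what lets a GPU's hundreds of processors outrun a serial CPU on graphics and 3D workloads.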

One software vendor has used CUDA to build an Adobe Premiere Pro plugin that lets users offload H.264/MPEG-4 AVC encoding to the graphics core, achieving roughly seven times the speed of CPU-only software encoding.

NVIDIA supports CUDA on all consumer and professional GPUs and compute modules based on the G80 architecture and later, with overall compute throughput seven times or more that of a CPU alone. Tesla GPUs are accelerators for workstations and servers. Compared with consumer and professional graphics cards, they offer full double-precision floating-point performance, dual DMA engines for bidirectional PCIe transfers, and up to 12 GB of onboard memory (Tesla K40 GPU). They come with dedicated Linux patches, InfiniBand drivers, and CUDA drivers; the CUDA driver for Windows delivers higher performance, and the TCC driver reduces the system overhead of CUDA kernels while supporting Windows Remote Desktop and Windows services.

3. GPU parallel computing, with CUDA as its flagship, already plays an important role across many fields.

  • In research, CUDA is widely used. For example, CUDA now accelerates AMBER, a molecular dynamics simulation package used by more than 60,000 researchers in academia and pharmaceutical companies worldwide to speed up drug discovery.
  • In financial markets, Numerix and CompatibL announced CUDA support in a new counterparty-risk application and achieved an 18x speedup. Numerix is used by nearly 400 financial institutions.
  • In the consumer market, nearly every major consumer video application has adopted, or will soon adopt, CUDA acceleration, including products from Elemental Technologies, MotionDSP, and LoiLo.

4. NVIDIA attaches great importance to grafting GPU parallel computing onto cloud servers, and several U.S. cloud providers already offer GPU-parallel cloud computing services.

  • On October 20, 2009, NVIDIA and mental images jointly launched RealityServer, a high-end cloud-based server platform.
  • On May 17, 2012, NVIDIA unveiled GPU-accelerated cloud computing technology.
  • On October 17, 2012, NVIDIA launched the first virtualized GPU acceleration platform for cloud computing, the VGX K2.
  • At GTC 2013, NVIDIA introduced its latest cloud product, the NVIDIA GRID server platform.

In the years since, several U.S. server vendors have rolled out their own GPU-parallel cloud platforms. Providers offering GPU cloud computing today include Amazon, Nimbix, Peer 1 Hosting, SoftLayer, and Penguin Computing.

5. Puzzlingly, China's major cloud providers (Aliyun, Grand Cloud, HiChina, and the like) appear to have made no move whatsoever toward GPU parallel computing.

Ever since the concept of cloud computing emerged, it has been a hot topic in China's IT industry, with cloud providers large and small springing up like mushrooms. The major providers each tout their own distinctive cloud service bundles: Aliyun's "Feitian" (Apsara) platform; Baidu's BAE platform; Inspur's HPC/IDC, media cloud, and education cloud; Huawei's FusionCloud elastic cloud strategy; Tencent's cloud ecosystem; and the managed PaaS platform from Huayun Data.

Yet beneath all these labels, the offerings are highly homogeneous: essentially network storage plus rental of virtual CPU time. Providers have done too little to understand and probe users' real computing needs; they mostly just move shallow PC functions to the cloud, while offering no virtualized solutions for computation that is genuinely complex and hard to maintain. In other words, for anything users can already do comfortably on a PC (office software, say), providers tirelessly lobby them to move it to the cloud; but for the technically demanding computation that small and mid-sized businesses struggle with and truly need help on, the providers draw a blank.

Recently, an application my company was developing for a client involved heavy data processing and required parallel computation. We approached several cloud providers, and none of them offered GPU parallel computing. It is a baffling situation. When we phoned the providers' staff, the typical responses were:

(Customer service) "We're not really sure about that; let me transfer you to a technician."

(Technical support) "Haven't really heard of it. I don't think anyone in China offers that yet?"

(Technical manager) "I'm not sure whether our servers can host GPU parallel computing, and I don't really know whether there's market demand for it."

High-performance parallel computing mainly uses a heterogeneous CPU + GPU architecture, and this architecture has already been successfully virtualized on cloud servers. Yet bafflingly, there is no trace of GPU parallel computing on the websites of China's major cloud providers, and the technical and sales staff we contacted at these companies seemed to have no concept of it at all. Below we take up several questions in turn to explore how this awkward situation came about:

(1) Is there really no domestic market for GPU parallel computing?

(2) Have technical difficulties blocked the domestic rollout of virtualized GPU parallel computing?

(3) Do the management teams of the major cloud companies lack understanding of computing needs and sensitivity to high-performance technology trends?

(4) Or are business decision-makers disconnected from the advanced technology community?

On point (1), the market: as described above, the broad adoption of graphics and imaging, animation and video, 3D computation, and big data analytics creates strong demand for GPU parallel computing. But mastering such cutting-edge computing is beyond most small and mid-sized firms, which lack standing technical teams of the caliber needed for system building, development, and maintenance. They therefore badly need cloud providers to step in, lower the barrier to these technologies, and offer a full suite of shared rental services spanning IaaS, PaaS, and SaaS. Domestic market demand is thus very strong.

On point (2), technical feasibility: the technology for applying virtualized GPUs to cloud services has long been mature. As described above, NVIDIA's CUDA stack already integrates cleanly with cloud servers, and on that basis Amazon, Google, Joyent, and others in the U.S. already offer commercial cloud rental services.

In January 2014, Sugon, NVIDIA, and Citrix jointly launched "Cloud View" (W760-G10), which has GPU hardware virtualization capability. Although no explicit cloud rental service has yet appeared around it, this shows that technical implementation is not the obstacle.

On point (3), management: China's star companies of recent years, such as Tencent and Alibaba, all went through explosive growth in a very short time, and many early employees rose naturally into senior management. Yet many of those early hires had serious gaps in their knowledge base and capacity to learn; junior-college graduates interviewing candidates with bachelor's or graduate degrees was a common sight. As these companies' businesses expanded, their thin technical foundations began to show: management lacks understanding of, and sensitivity to, technology.

On point (4): China's IT and internet industries long practiced technological "copyism," devoting their energy to finding profit models suited to Chinese soil. Chinese companies have been remarkably sensitive to application-level markets, but they have consistently ignored the deeper sources of technology. Business decision-makers must rely on the advice of technical managers, but those managers either never reached the technical frontier or left it long ago. Chinese universities and research institutes, with a research culture oriented toward raw paper counts, produce neither graduates who are both cutting-edge and practical enough for industry, nor many academic experts who put real effort into advising companies. For all these reasons, business decision-makers and the advanced technology community are disconnected.

We therefore conclude that neither market demand for GPU-parallel cloud computing nor its technical implementation is the problem. Chinese companies serious about cloud computing need to raise the technical competence of their technical management and the technical awareness of their business decision-makers.

[IT Times Weekly annotation] While the global cloud computing market keeps growing steadily, the state of the cloud industry, market, and services differs enormously from country to country. In China, beyond the technology and service issues of cloud providers described above, traditional network security cannot be ignored. Experts have noted that domestic cloud providers' network-security defenses remain weak: of the security incidents that have occurred at typical domestic cloud service companies, more than half — some 53% — were caused by traditional network attacks, and as China's cloud user base keeps growing, security problems will multiply rapidly.

6. Cloud providers should stay grounded and focus on their essential value: helping legions of small and mid-sized companies use cutting-edge computing tools, raising the technical caliber of China's new mobile internet economy.

China's economic miracle of the past thirty-odd years has been a rapid transformation from nothing to something, from low-end to high-end. In an era of fast-climbing economic tiers, yesterday's successes and today's trendsetters inevitably remain shaped by past experience and habits of mind. Out of habit, whenever a business venture starts, the focus falls on fighting for sales channels and manufacturing buzz, while digging into a product's intrinsic value is neglected. This way of doing business has undeniably produced great successes, but we also see the same pattern whenever a new fashionable concept appears: everyone chants in unison, piles in, copies crudely, and converges on sameness. Hype comes easily; solid work and deep engagement with a concept's substance do not — nanotech, IoT, robotics, and 3D printing are all cases in point. However good the concept, without making it real, hard capability does not grow.

Cloud computing services move tedious technical maintenance to the cloud, package up "very hard, very advanced" technical functions, lower users' technical barrier, and thereby share society's intellectual resources. Yet today's Chinese cloud providers, large and small, amount to a "remote server room + big portable hard drive" model of decidedly limited value. With China's economy nearly alone in shining worldwide and its famous companies riding high, why is its cloud computing stuck at such a low level? At root, the business mindset has not shed the historical imprint of crude competition in low-end markets; it is unaccustomed to building commercial advantage by digging into a technology's substance and bringing out its inherent value.

Cloud computing is a "very real, very technical" thing, and its customers are commercial application companies that make decisions rationally. Smoke-and-mirrors marketing in the style of Naobaijin (the health tonic famous for saturation advertising) may have been perfected elsewhere, but it is unlikely to work here. On the application side, China's new mobile internet economy is thriving and bursting with energy, among the best in the world; its cloud computing backbone cannot afford to lag behind. Video, audio, and images + 3D + large-scale machine learning + big data analytics => compute-intensive workloads => cloud GPU parallel computing: the chain of reasoning is plain and simple. Why hesitate and tangle over it?

[IT Times Weekly postscript] This article points straight at the gap between China's cloud computing and that of leading countries. In the U.S., IT giants such as Microsoft, Google, and Amazon keep consolidating their lead in cloud services while expanding into overseas markets. By one count, more than 80 of the world's top 100 cloud computing companies are American. By comparison, although China's cloud market keeps growing in size, its providers' technology and services need improvement, and usability, security, and data migration and allocation all have great room to improve. Making those adjustments will first require changing a market structure still centered on hardware procurement.

This course, which most schools don't yet teach, is becoming ever more "fashionable" — would you let your child take it?

2016-05-01 Wang Qing et al., Xiaohuasheng

Linzi: I recently came across a blog post by Dr. Wang Qing, "What Is the Most Fashionable Course for American Kids?" Before opening it, I guessed the answer would be something STEM-related, but I wasn't quite right: by Dr. Wang's observation, the answer is computer programming. He shares his child's encounter with coding at school in the U.S., and it's quite interesting. Here is his account:

Those familiar with American education probably know the concept of STEM, an acronym for Science, Technology, Engineering, and Math. STEM courses, centered on science and technology, are billed as the curriculum of the 21st century and have a whiff of the high-end about them.

Now, though, everyone in America talks about STEAM, where the added A stands for Art. Art appears here not to turn every child into Picasso, but to make art a tool for children to understand the world and express themselves.

And the most "in" course in America right now turns out to be tightly bound to every one of STEAM's five strands. Its advocates say outright that this is a skill every child will need to get by in tomorrow's society.

There is also a five-minute promotional film about it, "What Most Schools Don't Teach," which has been viewed more than ten million times on YouTube, with figures like Bill Gates and Mark Zuckerberg appearing on camera to champion it.

This course — poised to touch every facet of future social life, and the sharpest new thing among American kids — is computer programming (coding).


"Everybody in this country should learn how to program, because it teaches you how to think."

The hit promotional film mentioned above opens with this line from the late Apple founder Steve Jobs.

So at what age can kids start learning to program? The common practice in the U.S. is for STEAM-type courses to appear around fourth grade. Because the principal of my son's school is especially keen on computing, the school decided to pilot the course in third grade.

That pilot caused quite a stir: it caught the attention of our district's state legislator, who used his connections to stage an open-house demonstration at my son's elementary school. With a legislator involved, media of every stripe followed, staff from surrounding school districts turned out, and, most importantly, the regional manager in charge of outreach at Google came too; the occasion became very formal. Fortunately the school had a computer lab fully equipped with Apple machines, presentable enough, and the kids performed heroically — no stage fright at all, eager to show off their work.

In fact, the kids' programming material isn't hard at all. It's rather like assembling electronic building blocks: they manipulate different modules, covering not just shape and color but also motion, speed, and sound. The photo below gives a rough view of what's on a child's screen; it looks rather like playing a video game. Indeed, after a few weeks of class my third-grade son was fully immersed in "designing" video games, using these block-stacking techniques to imagine and build his own game protagonist, from looks to voice to superpowers, exercising his creativity to the hilt.

The school's coding course has since expanded down to second grade, and reports online mention even younger students.

Anyone who has worked with children's programming knows one thing: for kids, coding comes almost like a native ability; it really isn't a strain. We call computer programs "languages," and they indeed share much with human languages. A child's nature is to express itself through every medium available — the mother tongue, the mother tongue mixed with a foreign one, drawing and singing, body language. Expressing themselves in a computer language comes just as naturally; it is an absolute native advantage that adults who haven't looked closely can scarcely imagine. The industry leaders in the film, Zuckerberg included, all stress that you don't need to master all of computer science before you start programming.

 

Indeed, Zuckerberg founded Facebook at 20; he first encountered programming in elementary school, and by then had accumulated nearly a decade of experience programming and building products. Chinese students, by contrast, often finish a computer science degree with little hands-on development experience.

In short, this hit film and Dr. Wang's experience tell us that computer programming now inhabits every pore of modern society. Whether you're founding a startup, completing a project, or solving a concrete problem, you can't do without it; the skill has become a kind of basic literacy, like knowing arithmetic. The trend is for more and more children to learn programming not merely as a hobby but as a fundamental skill.

We've previously introduced some of the resources children around the world use to learn programming, and they're collected below for reference!

The mainstream apps and websites for teaching kids to code

These apps and sites are all well regarded and share common traits: they're easy to pick up, fun, and productive — kids can make animations, effects, games, websites, and apps — which keeps motivation high. Many are also free.

Make little animations: first steps in coding

1. Daisy the Dinosaur

This iPad app is usable even by kindergartners. It teaches basic programming logic: kids simply set up and order modules such as roll, jump, or grow, then press play, and a little animation shows the dinosaur performing those commands. It's very approachable, with almost no difficulty, and little ones get hooked on the animations they create. Ages: 4-8.

2. Hopscotch: Coding for kids, a visual programming language

From the same developer as Daisy the Dinosaur, this iPad app has won many technology awards and is like an upgraded Daisy, with many more modules and parameter settings. Operation remains simple — no typing required: as if stacking blocks, you drop modules in one by one and press play to watch cartoon characters act out your commands on screen. It exercises a child's logical thinking, requiring them to handle time and space and to assign different tasks to different characters. It lets a child independently produce a short animated piece, which is very satisfying. Ages: 8-12.

Start learning some programming in earnest

3. Scratch

Search online for how to teach kids programming and you'll invariably land on this site. Its reputation is excellent, with over a million children using it worldwide. Its visual language and interface were created by the MIT Media Lab, so children can program easily without knowing a complex programming language. Kids can use it to build interactive stories, animations, and even games, then share them with friends around the world. Ages: 8 and up.

4. Alice

Boys and girls presumably think somewhat differently, so there are even coding tools designed separately for each — how considerate. In the U.S., Alice and Scratch are the two best-known tools for teaching kids to code. Alice, aimed at girls, was developed at the University of Virginia, takes its name from Alice in Wonderland, and mainly teaches 3D programming. In Alice, kids drag and drop virtual blocks and watch 3D sprites change in real time in a virtual world, testing as they play. The developers emphasize that the software's focus is drawing young girls into programming. Best for: girls 10 and up. Platform: computer.

5. Codea

This iPad app is a resource-rich development tool for taking kids into programming, and it has won an app-of-the-year award. Children abroad have used it to build their own game apps. Older kids with some logical thinking and comprehension can follow along. A clean interface and ease of learning are its greatest strengths, and importantly it has a Chinese version, so language is no barrier. Ages: 8 and up.

6. RoboMind

RoboMind's main purpose is to program a robot to carry out a series of tasks, giving kids a basic feel for artificial intelligence along the way. It's an especially good fit for children taking LEGO robotics courses: an export function connects your program to LEGO MINDSTORMS NXT 2.0. Ages: 10 and up. Platform: computer.

Programming for older kids

7. Codecademy

Codecademy is regarded as a browser-based application that can guide anyone in learning to code, including children under 13. Unlike other children's apps, it has no cartoon sprites or brightly colored interface, yet it remains friendly and easy to learn.

Through Codecademy, children 12 and up can learn programming languages such as Python, Ruby, PHP, HTML, and JavaScript, and even APIs. The service is also broadening its user base, courting younger programmers and encouraging students and educators to join the coding clubs it runs in schools.

And a book for teaching kids to code

This book, published in Chinese as "Learning Programming Together with Your Child," is for families where a parent is interested in the subject or knows it a bit; they can use it to teach their child to code, or to learn alongside them.

We've described the book's strengths before:

1. It uses plenty of illustrations drawn from children's lives; any slightly complex concept is explained with the help of cartoon-style analogies wherever possible.

2. Pure numeric computation isn't very interesting to kids, whereas making something they can actually see brings a real sense of accomplishment; the book strives to deliver exactly that.

3. Chapters are short; small learning units reduce the pressure of learning something new and help sustain interest.

4. Concepts are explained very simply, and wherever terminology comes up, it is described in approachable language.

A teen coder's TED talk

If you're serious about getting your child started early, consider watching, together, the TED talk by a 12-year-old who grew up with programming: "How I Started Programming." He describes his path: playing games, building games, discovering Apple's developer platform, learning to use online resources to develop his own games, and founding a club where kids build apps of all kinds... The talk has real style.


Source: The first part of this article is adapted, with simplification and editing, from a Sina blog post by Dr. Wang Qing, reposted with his permission by Xiaohuasheng; the resources section was written by Xiaohuasheng. Media reprints require authorization.

Machine Learning: An In-Depth, Non-Technical Guide (Part 5)

By Alex Castrounis

Source: http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide-part-5/

Chapters

  1. Overview, goals, learning types, and algorithms
  2. Data selection, preparation, and modeling
  3. Model evaluation, validation, complexity, and improvement
  4. Model performance and error analysis
  5. Unsupervised learning, related fields, and machine learning in practice

Introduction

Welcome to the fifth and final chapter in a five-part series about machine learning.

In this final chapter, we will revisit unsupervised learning in greater depth, briefly discuss other fields related to machine learning, and finish the series with some examples of real-world machine learning applications.

Unsupervised Learning

Recall that unsupervised learning involves learning from data, but without the goal of prediction. This is because the data is either not given with a target response variable (label), or one chooses not to designate a response. It can also be used as a pre-processing step for supervised learning.

In the unsupervised case, the goal is to discover patterns, deep insights, understand variation, find unknown subgroups (amongst the variables or observations), and so on in the data. Unsupervised learning can be quite subjective compared to supervised learning.

The two most commonly used techniques in unsupervised learning are principal component analysis (PCA) and clustering. PCA is one approach to learning what is called a latent variable model, and is a particular version of a blind signal separation technique. Other notable latent variable modeling approaches include the expectation-maximization algorithm (EM) and the method of moments [3].

PCA

PCA produces a low-dimensional representation of a dataset by finding a sequence of linear combinations of the variables that have maximal variance, and are mutually uncorrelated [8]. Another way to describe PCA is that it is a transformation of possibly correlated variables into a set of linearly uncorrelated variables known as principal components [13].

Each of the components is mathematically determined and ordered by the amount of variability or variance that it is able to explain from the data. Given that, the first principal component accounts for the largest amount of variance, the second principal component the next largest, and so on.

Each component is also orthogonal to all the others, which is just a fancy way of saying that they're mutually perpendicular. Think of the X and Y axes in a two-dimensional plot: both axes are perpendicular to each other, and are therefore orthogonal. While it isn't easy to visualize, having many principal components is like having many axes that are all perpendicular to each other.

While much of the above description of principal component analysis may be a bit technical sounding, it is actually a relatively simple concept from a high level. Think of having a bunch of data in any amount of dimensions, although you may want to picture two or three dimensions for ease of understanding.

Each principal component can be thought of as an axis of an ellipse that is being built (think cloud) to contain the data (aka fit to the data), like a net catching butterflies. The first few principal components should be able to explain (capture) most of the data, with the addition of more principal components eventually leading to diminishing returns.

One of the tricks of PCA is knowing how many components are needed to summarize the data, which involves estimating when most of the variance is explained by a given number of components. Another consideration is that PCA is sensitive to feature scaling, which was discussed earlier in this series.
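As a small illustration of choosing the number of components, here is a sketch using scikit-learn (the library and the iris dataset are this editor's assumptions, not part of the original text): the cumulative explained-variance ratio shows how many components are needed to capture most of the variance, and the data is standardized first because PCA is sensitive to feature scaling.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X = load_iris().data                             # 150 observations, 4 features
    X_scaled = StandardScaler().fit_transform(X)     # PCA is sensitive to feature scaling

    pca = PCA().fit(X_scaled)
    print(pca.explained_variance_ratio_)             # variance explained per component, descending
    print(np.cumsum(pca.explained_variance_ratio_))  # pick the count where this curve flattens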

PCA is also used for exploratory data analysis and data visualization. Exploratory data analysis involves summarizing a dataset through specific types of analysis, including data visualization, and is often an initial step in analytics that leads to predictive modeling, data mining, and so on.

Further discussion of PCA and similar techniques is out of scope of this series, but the reader is encouraged to refer to external sources for more information.

Clustering

Clustering refers to a set of techniques and algorithms used to find clusters (subgroups) in a dataset, and involves partitioning the data into groups of similar observations. The concept of ‘similar observations’ is a bit relative and subjective, but it essentially means that the data points in a given group are more similar to each other than they are to data points in a different group.

Similarity between observations is a domain specific problem and must be addressed accordingly. A clustering example involving the NFL’s Chicago Bears (go Bears!) was given in chapter 1 of this series.

Clustering is not a technique limited only to machine learning. It is a widely used technique in data mining, statistical analysis, pattern recognition, image analysis, and so on. Given the subjective and unsupervised nature of clustering, often data preprocessing, model/algorithm selection, and model tuning are the best tools to use to achieve the desired results and/or solution to a problem.

There are many types of clustering algorithms and models, which all use their own technique of dividing the data into a certain number of groups of similar data. Due to the significant difference in these approaches, the results can be largely affected, and therefore one must understand these different algorithms to some extent to choose the most applicable approach to use.

K-means and hierarchical clustering are two widely used unsupervised clustering techniques. The difference is that for k-means, a predetermined number of clusters (k) is used to partition the observations, whereas the number of clusters in hierarchical clustering is not known in advance.

Hierarchical clustering helps address the potential disadvantage of having to know or pre-determine k in the case of k-means. There are two primary types of hierarchical clustering: agglomerative (bottom-up) and divisive (top-down) [8].

Here is a visualization, courtesy of Wikipedia, of the results of running the k-means clustering algorithm on a set of data with k equal to three. Note the lines, which represent the boundaries between the groups of data.

https://commons.wikimedia.org/wiki/File:KMeans-Gaussian-data.svg
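A minimal k-means sketch in the same spirit as the visualization above, using scikit-learn on synthetic data (both are this editor's choices, purely for illustration): we ask for k = 3 clusters and read back each observation's hard cluster assignment and the fitted centroids.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # synthetic data drawn from three natural groups (illustrative only)
    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

    km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
    print(km.labels_[:10])       # hard cluster assignment for the first ten observations
    print(km.cluster_centers_)   # coordinates of the three cluster centers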

There are two types of clustering, which define the degree of grouping or containment of data. The first is called hard clustering, where every data point belongs to only one cluster and not the others. Soft clustering, or fuzzy clustering, on the other hand refers to the case where a data point belongs to a cluster to a certain degree, or is assigned a likelihood (probability) of belonging to a certain cluster.

Method comparison and general considerations

What is the difference then between PCA and clustering? As mentioned, PCA looks for a low-dimensional representation of the observations that explains a good fraction of the variance, while clustering looks for homogeneous subgroups among the observations [8].

An interesting point to note is that in the absence of a target response, there is no way to evaluate solution performance or errors as one does in the supervised case. In other words, there is no objective way to determine if you’ve found a solution. This is a significant differentiator between supervised and unsupervised learning methods.

Predictive Analytics, Artificial Intelligence, and Data Mining, Oh My!

Machine learning is often interchanged with terms like predictive analytics, artificial intelligence, data mining, and so on. While machine learning is certainly related to these fields, there are some notable differences.

Predictive analytics is a subcategory of a broader field known as analytics in general. Analytics is usually broken into three sub-categories: descriptive, predictive, and prescriptive.

Descriptive analytics involves analytics applied to understanding and describing data. Predictive analytics deals with modeling, and making predictions or assigning classifications from data observations. Prescriptive analytics deals with making data-driven, actionable recommendations or decisions.

Artificial intelligence (AI) is a super exciting field, and machine learning is essentially a sub-field of AI due to the automated nature of the learning algorithms involved. According to Wikipedia, AI has been defined as "the science and engineering of making intelligent machines," but also as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

Statistical learning has been popularized by Stanford's related online course and its associated books: An Introduction to Statistical Learning and The Elements of Statistical Learning.

Machine learning arose as a subfield of artificial intelligence, while statistical learning arose as a subfield of statistics. The two fields are very similar, overlap in many ways, and the distinction between them is becoming less clear over time. They differ in that machine learning places greater emphasis on prediction accuracy and large-scale applications, whereas statistical learning emphasizes models and their related interpretability, precision, and uncertainty [8].

Lastly, data mining is a field that’s also often confused with machine learning. Data mining leverages machine learning algorithms and techniques, but also spans many other fields such as data science, AI, statistics, and so on.

The overall goal of the data mining process is to extract patterns and knowledge from a data set, and transform it into an understandable structure for further use [26]. Data mining often deals with large amounts of data, or big data.

Machine Learning in Practice

As discussed throughout this series, machine learning can be used to create predictive models, assign classifications, make recommendations, and find patterns and insights in an unlabeled dataset. All of these tasks can be done without requiring explicit programming.

Machine learning has been successfully used in the following non-exhaustive example applications [1]:

  • Spam filtering
  • Optical character recognition (OCR)
  • Search engines
  • Computer vision
  • Recommendation engines, such as those used by Netflix and Amazon
  • Classifying DNA sequences
  • Detecting fraud, e.g., credit card and internet
  • Medical diagnosis
  • Natural language processing
  • Speech and handwriting recognition
  • Economics and finance
  • Virtually anything else you can think of that involves data

In order to apply machine learning to solve a given problem, the following steps (or a variation of them) should be taken, drawing on the machine learning elements discussed throughout this series.

  1. Define the problem to be solved and the project’s objective. Ask lots of questions along the way!
  2. Determine the type of problem and type of solution required.
  3. Collect and prepare the data.
  4. Create, validate, tune, test, assess, and improve your model and/or solution. This process should be driven by a combination of technical (stats, math, programming), domain, and business expertise.
  5. Discover any other insights and patterns as applicable.
  6. Deploy your solution for real-world use.
  7. Report on and/or present results.

If you encounter a situation where you or your company can benefit from a machine learning-based solution, simply approach it using these steps and see what you come up with. You may very well wind up with a super powerful and scalable solution!

Summary

Congratulations to those that have read all five chapters in full! I would like to thank you very much for spending your precious time joining me on this machine learning adventure.

This series took me a significant amount of time to write, so I hope that this time has been translated into something useful for as many people as possible.

At this point, we have covered virtually all major aspects of the entire machine learning process at a high level, and at times even went a little deeper.

If you were able to understand and retain the content in this series, then you should have absolutely no problem participating in any conversation involving machine learning and its applications. You may even have some very good opinions and suggestions about different applications, methods, and so on.

Despite all of the information covered in this series, and the details that were out of scope, machine learning and its related fields are in practice also somewhat of an art. There are many decisions to make along the way, customized techniques to employ, and creative strategies to use in order to best solve a given problem.

A high quality practitioner should also have a strong business acumen and expert-level domain knowledge. Problems involving machine learning are just as much about asking questions as they are about finding solutions. If the question is wrong, then the solution will be as well.

Thank you again, and happy learning (with machines)!


About the Author: Alex Castrounis founded InnoArchiTech. Sign up for the InnoArchiTech newsletter and follow InnoArchiTech on Twitter at @innoarchitech for the latest content updates.


References

  1. Wikipedia: Machine Learning
  2. Wikipedia: Supervised Learning
  3. Wikipedia: Unsupervised Learning
  4. Wikipedia: List of machine learning concepts
  5. 3 Ways to Test the Accuracy of Your Predictive Models
  6. Practical Machine Learning Online Course – Johns Hopkins University
  7. Machine Learning Online Course – Stanford University
  8. Statistical Learning Online Course – Stanford University
  9. Latent variable model
  10. Wikipedia: Cluster analysis
  11. Wikipedia: Expectation maximization algorithm
  12. Wikipedia: Method of moments
  13. Wikipedia: Principal component analysis
  14. Wikipedia: Exploratory data analysis

Machine Learning: An In-Depth, Non-Technical Guide (Part 4)

By Alex Castrounis

Source: http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide-part-4/

Chapters

  1. Overview, goals, learning types, and algorithms
  2. Data selection, preparation, and modeling
  3. Model evaluation, validation, complexity, and improvement
  4. Model performance and error analysis
  5. Unsupervised learning, related fields, and machine learning in practice

Introduction

Welcome to the fourth chapter in a five-part series about machine learning.

In this chapter, we will take a deeper dive into model evaluation and performance metrics, and potential prediction-related errors that one may encounter.

Residuals and Classification Results

Before digging deeper into model performance and error types, we must first discuss the concept of residuals and errors for regression, positive and negative classifications for classification problems, and in-sample versus out-of-sample measurements.

Any reference to models, metrics, or errors computed with respect to the data used to train, validate, or tune a predictive model (i.e., data you have) is called in-sample. Conversely, reference to test data metrics and errors, or new data in general is called out-of-sample (i.e., data you don’t have).

Recall that regression involves predicting a continuous valued output (response) based on some set of input variables (features/predictors). The difference between the model’s predicted response value and the actual observed response value from the in-sample data is called the residual for each point, and residuals refers collectively to all of the differences between all predicted and actual values. Each out-of-sample (new/test data) difference is called a prediction error instead of residual.

For the classification case, and for simplicity, we will only discuss binary classification (two classes). Prior to performing classification on data observations, one must define what is a positive classification and what is a negative classification. In the case of spam or ham (i.e., not spam), spam may be the positive designation and ham is the negative.

If a model predicts an incoming email as being spam, and it really is spam, then that’s considered a true positive. Positive since the model predicted spam (the positive class), and true because the actual class matched the prediction. Conversely, if an incoming email is labeled spam when it’s actually not spam, it is considered a false positive.

Given this, we can see that the results of a classification model on new data can fall into four potential buckets. These include: true positives, false positives (type 1 error), true negatives, and false negatives (type 2 error). In all four cases, true or false refers to whether the actual class matched the predicted class, and positive or negative refers to which classification was assigned to an observation by the model.

Note that false is synonymous with error in this case since the model failed to predict correctly.

Model Performance Overview

Now that we’ve covered residuals and classification result types, we will begin the discussion of model performance metrics that are based on these concepts.

Here is a non-exhaustive list of model evaluation methods, visualizations, and performance metrics that are used in machine learning and predictive analytics. They are categorized by their most common use case, but some may apply to more than one category (e.g., accuracy).

In addition to model evaluation, many of these can also be used for model comparison, selection, and tuning. Many of these are very powerful when combined with the cross-validation technique described earlier in this series.

  • Regression performance
    • R2 and adjusted R2 (aka explained variance)
    • Mean squared error (MSE), or root mean squared error (RMSE)
    • Mean error, or mean absolute error
    • Median error, or median absolute error
  • Classification performance
    • Confusion matrix
    • Precision
    • Recall (aka sensitivity)
    • Specificity
    • Accuracy
    • Lift
    • Area under the ROC curve (AUC)
    • F-score
    • Log-loss
    • Average precision
    • Precision/recall break-even point
    • Root mean squared error (RMSE)
    • Mean cross entropy
    • Probability calibration
  • Bias variance tradeoff and model complexity
    • Validation curve
    • Learning curve
    • Residual sum of squares
    • Goodness-of-fit metrics
  • Model validation and selection
    • Mallows’s Cp
    • Akaike information criterion (AIC)
    • Bayesian information criterion (BIC)

Performance metrics should be chosen based on the problem domain, project goals, and the business objectives. Unfortunately there isn’t a one-size-fits-all approach, and often there are tradeoffs to consider.

While a discussion of all of these methods and metrics is out of scope for this series, we will cover a few key ones next.

Model Performance Evaluation Metrics

Regression

There are many metrics for determining model performance for regression problems, but the most commonly used is the mean squared error (MSE), or a variation called the root mean squared error (RMSE), which is calculated by taking the square root of the mean squared error. The root mean squared error is typically preferred since taking the square root puts the error measurement in the same units as, and proportional to, the response variable's units.

The error in this case is the difference in value between a given model prediction and its actual value for an out-of-sample observation. The mean squared error is therefore the average of all of the squared errors across all new observations, which is the same as adding all of the squared errors (sum of squares) and dividing by the number of observations.
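As a tiny worked example (the numbers below are invented purely for illustration), MSE and RMSE can be computed directly from predictions and actual values:

    import numpy as np

    y_actual = np.array([3.0, 5.0, 7.5, 10.0])   # hypothetical out-of-sample observations
    y_pred   = np.array([2.8, 5.4, 7.0, 10.5])   # the model's predictions for them

    mse  = np.mean((y_pred - y_actual) ** 2)     # average of the squared errors
    rmse = np.sqrt(mse)                          # back in the response variable's units
    print(mse, rmse)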

In addition to being used as a stand-alone performance metric, mean squared error (or RMSE) can also be used for model selection, controlling model complexity, and model tuning. Often many models are created and evaluated (e.g., cross-validation), and then MSE (or similar metric) is plotted on the y-axis, with the tuning or validation parameter given on the x-axis.

The tuning or validation parameter is changed in each model creation and evaluation step, and the plot described above can help determine the ideal tuning parameter value. The number of predictors is a great example of a potential tuning parameter in this case.

Before moving on to classification, it is worth mentioning R2 briefly. R2 is often thought of as a measure of model performance, but it's actually not. R2 is a measure of the amount of variance explained by the model, and is given as a number between 0 and 1. A value of 1 means the model explains the data perfectly, but when computed on training data, a high R2 is more an indication of potential overfitting than of high predictive performance.

As discussed earlier, the more complex the model, the more the model tends to fit the data better and potentially overfit, or contribute to additional model variance. Given this, adjusted R2 is a more robust and reliable metric in that it adjusts for any increases in model complexity (e.g., adding more predictors), so that one can better gauge underlying model improvement despite the increased complexity.

Classification

Recall the different results from a binary classifier, which are true positives, true negatives, false positives, and false negatives. These are often shown in a confusion matrix. Here is a very generalized and comprehensive example of one from Wikipedia, and note that the graphic is shown with concepts and metrics, and not actual data.

And here is an example from Wikipedia with the values filled in [30] for different classifier models evaluated against 200 observations. Note the calculation and variation of the metrics across the different models.

A confusion matrix is conceptually the basis of many classification performance metrics as shown. We will discuss a few of the more popular ones associated with machine learning here.

Accuracy is a key measure of performance, and is more specifically the rate at which the model is able to predict the correct value (classification or regression) for a given data point or observation. In other words, accuracy is the proportion of correct predictions out of all predictions made.

The other two metrics from the confusion matrix worth discussing are precision and recall. Precision (positive predictive value) is the ratio of true positives to the total amount of positive predictions made (i.e., true or false). Said another way, precision measures the proportion of accurate positive predictions out of all positive predictions made.

Recall on the other hand, or true positive rate, is the ratio of true positives to the total amount of actual positives, whether predicted correctly or not. So in other words, recall measures the proportion of accurate positive predictions out of all actual positive observations.

A metric that is associated with precision and recall is called the F-score (also called F1 score), which combines them mathematically, and somewhat like a weighted average, in order to produce a single measure of performance based on the simultaneous values of both. Its values range from 0 (worst) to 1 (best).
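A short sketch of these metrics, using scikit-learn on a handful of invented spam (1) vs. ham (0) labels, purely for illustration:

    from sklearn.metrics import (accuracy_score, confusion_matrix,
                                 precision_score, recall_score, f1_score)

    y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical actual classes
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # hypothetical model predictions

    print(confusion_matrix(y_true, y_pred))   # rows: actual class, columns: predicted class
    print(accuracy_score(y_true, y_pred))     # correct predictions / all predictions
    print(precision_score(y_true, y_pred))    # TP / (TP + FP)
    print(recall_score(y_true, y_pred))       # TP / (TP + FN)
    print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall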

Another important concept to know about is the receiver operating characteristic, which when plotted, results in what’s known as an ROC curve (shown below, image courtesy of BOR at the English language Wikipedia).

An ROC curve is a two-dimensional plot of sensitivity (recall, or true positive rate) versus 1 − specificity (the false positive rate). The area under the curve is referred to as the AUC, and is a numeric metric used to represent the quality and performance of the classifier (model).

By BOR at the English language Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10714489

An AUC of 0.5 is essentially the same as random guessing without a model, whereas an AUC of 1.0 is considered a perfect classifier. Generally, the higher the AUC value the better, and an AUC above 0.8 is considered quite good.

The higher the AUC value, the closer the curve gets to the upper left corner of the plot. One can easily see from the ROC curves then that the goal is to find and tune a model that maximizes the true positive rate, while simultaneously minimizing the false positive rate. Said another way, the goal as shown by the ROC curve is to correctly predict as many of the actual positives as possible, while also predicting as many of the actual negatives as possible, and therefore minimize errors (incorrect classifications) for both.
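Here is a minimal AUC computation, again with scikit-learn on synthetic data (an editor's illustration rather than anything from the original article): the classifier's predicted probabilities for the positive class are scored against the held-out labels.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, roc_curve
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]    # predicted probability of the positive class

    fpr, tpr, _ = roc_curve(y_te, scores)     # points along the ROC curve
    print(roc_auc_score(y_te, scores))        # 0.5 ~ random guessing, 1.0 ~ perfect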

As mentioned previously in this series, model performance can be measured in many ways, and the method used should be chosen based on project goals, business domain considerations, and so on.

It is also worth noting that according to many experts, different performance metrics are thought to be biased for varying reasons. Given the breadth and complexity of this topic, the reader is encouraged to refer to external resources for further information on performance evaluation and the tradeoffs involved.

Error Analysis and Tradeoffs

There are multiple types of errors associated with machine learning and predictive analytics. The primary types are in-sample and out-of-sample errors. In-sample errors (aka resubstitution errors) are the error rate found from the training data, i.e., the data used to build predictive models.

Out-of-sample errors (aka generalization errors) are the error rates found on a new data set, and are the most important since they represent the potential performance of a given predictive model on new and unseen data.

In-sample error rates may be very low and seem to be indicative of a high-performing model, but one must be careful, as this may be due to overfitting as mentioned, which would result in a model that is unable to generalize well to new data.

Training and validation data is used to build, validate, and tune a model, but test data is used to evaluate model performance and generalization capability. One very important point to note is that prediction performance and error analysis should only be done on test data, when evaluating a model for use on non-training or new data (out-of-sample).

Generally speaking, model performance on training data tends to be optimistic, and therefore training-data errors will be smaller than those on test data. There are tradeoffs between the types of errors that a machine learning practitioner must consider and often choose to accept.

For binary classification problems, there are two primary types of errors. Type 1 errors (false positives) and Type 2 errors (false negatives). It’s often possible through model selection and tuning to increase one while decreasing the other, and often one must choose which error type is more acceptable. This can be a major tradeoff consideration depending on the situation.

A typical example of this tradeoff dilemma involves cancer diagnosis, where the positive diagnosis of having cancer is based on some test. In this case, a false positive means that someone is told they have cancer when they do not. Conversely, the false negative case is when someone is told that they do not have cancer when they actually do.

If no model is perfect, then in the example above, which is the more acceptable error type? In other words, of which one can we accept to a greater degree?

Telling someone they have cancer when they don’t can result in tremendous emotional distress, stress, additional tests and medical costs, and so on. On the other hand, failing to detect cancer in someone that actually has it can mean the difference between life and death.

In the spam or ham case, neither error type is nearly as serious as the cancer case, but typically email vendors err slightly more on the side of letting some spam get into your inbox as opposed to you missing a very important email because the spam classifier is too aggressive.

Summary

In this chapter, we have discussed many concepts and metrics associated with model evaluation, performance, and error analysis.

The fifth and final chapter of this series will revisit unsupervised learning in greater detail, followed by an overview of similar and highly related fields to machine learning. This series will conclude with an overview of machine learning as used in real world applications.

Stay tuned!


About the Author: Alex Castrounis founded InnoArchiTech. Sign up for the InnoArchiTech newsletter and follow InnoArchiTech on Twitter at @innoarchitech for the latest content updates.


References

  1. Wikipedia: Machine Learning
  2. Wikipedia: Supervised Learning
  3. Wikipedia: Unsupervised Learning
  4. Wikipedia: List of machine learning concepts
  5. 3 Ways to Test the Accuracy of Your Predictive Models
  6. Practical Machine Learning Online Course – Johns Hopkins University
  7. Machine Learning Online Course – Stanford University
  8. Statistical Learning Online Course – Stanford University
  9. Wikipedia: Type I and type II errors
  10. Wikipedia: Accuracy Paradox
  11. Wikipedia: Errors and Residuals
  12. Wikipedia: Information Retrieval
  13. Data Mining in Metric Space: An Empirical Analysis of Supervised Learning Performance Criteria
  14. Wikipedia: Sensitivity and Specificity
  15. Wikipedia: Accuracy and precision
  16. Wikipedia: Precision and recall
  17. Wikipedia: F1 score
  18. Wikipedia: Residual sum of squares
  19. Wikipedia: Cohen’s kappa
  20. Wikipedia: Learning Curve
  21. Wikipedia: Coefficient of determination, aka R2
  22. Wikipedia: Mallows’s Cp
  23. Wikipedia: Bayesian information criterion
  24. Wikipedia: Akaike information criterion
  25. Wikipedia: Root-mean-square deviation
  26. Wikipedia: Knowledge Extraction
  27. Wikipedia: Data Mining
  28. Wikipedia: Confusion Matrix
  29. Simple guide to confusion matrix terminology
  30. Wikipedia: Receiver operating characteristic

Machine Learning: An In-Depth, Non-Technical Guide (Part 3)

By Alex Castrounis

Source: http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide-part-3/

Chapters

  1. Overview, goals, learning types, and algorithms
  2. Data selection, preparation, and modeling
  3. Model evaluation, validation, complexity, and improvement
  4. Model performance and error analysis
  5. Unsupervised learning, related fields, and machine learning in practice

 

Introduction

Welcome to the third chapter in a five-part series about machine learning.

In this chapter, we’ll continue our machine learning discussion, and focus on problems associated with overfitting data, as well as controlling model complexity, a model evaluation and errors introduction, model validation and tuning, and improving model performance.

Overfitting

Overfitting is one of the greatest concerns in predictive analytics and machine learning. Overfitting refers to a situation where the model chosen to fit the training data fits too well, and essentially captures all of the noise, outliers, and so on.

The consequence of this is that the model will fit the training data very well, but will not accurately predict cases not represented by the training data, and therefore will not generalize well to unseen data. This means that the model performance will be better with the training data than with the test data.

A model is said to have high variance when it leans more towards overfitting, and conversely has high bias when it doesn’t fit the data well enough. A high variance model will tend to be quite flexible and overly complex, while a high bias model will tend to be very opinionated and overly simplified. A good example of a high bias model is fitting a straight line to very nonlinear data.

In both cases, the model will not make very accurate predictions on new data. The ideal situation is to find a model that is not overly biased, nor does it have a high variance. Finding this balance is one of the key skills of a data scientist.

Overfitting can occur for many reasons. A common one is that the training data consists of many features relative to the number of observations or data points. In this case, the data is relatively wide as compared to long.

To address this problem, reducing the number of features can help, or finding more data if possible. The downside to reducing features is that you lose potentially valuable information.

Another option is to use a technique called regularization, which will be discussed later in this series.

Controlling Model Complexity

Model complexity can be characterized by many things, and is a bit subjective. In machine learning, model complexity often refers to the number of features or terms included in a given predictive model, as well as whether the chosen model is linear, nonlinear, and so on. It can also refer to the algorithmic learning complexity or computational complexity.

Overly complex models are less easily interpreted, at greater risk of overfitting, and will likely be more computationally expensive.

There are some really sophisticated and automated methods by which to control, and ultimately reduce model complexity, as well as help prevent overfitting. Some of them are able to help with feature and model selection as well.

These methods include linear model and subset selection, shrinkage methods (including regularization), and dimensionality reduction.

Regularization essentially keeps all features, but reduces (or penalizes) the effect of some features on the model's predicted values. The reduction comes from shrinking the magnitude, and therefore the effect, of the coefficients on some of the model's terms.

The two most popular regularization methods are ridge regression and lasso. Both methods involve adding a tuning parameter (Greek lambda) to the model, which is designed to impose a penalty on each term’s coefficient based on its size, or effect on the model.

The larger the term’s coefficient size, the larger the penalty, which basically means the more the tuning parameter forces the coefficient to be closer to zero. Choosing the value to use for the tuning parameter is critical and can be done using a technique such as cross-validation.

The lasso technique works in a very similar way to ridge regression, but can also be used for feature selection as well. This is due to the fact that the penalty term for each predictor is calculated slightly differently, and can result in certain terms becoming zero since their coefficients can become zero. This essentially removes those terms from the model, and is therefore a form of automatic feature selection.

Ridge regression or lasso techniques may work better for a given situation. Often the lasso works better for data where the response is best modeled as a function of a small number of the predictors, but this isn’t guaranteed. Cross-validation is a great technique for evaluating one technique versus the other.
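To make the comparison concrete, here is a sketch using scikit-learn's cross-validated ridge and lasso estimators on synthetic data where only a few predictors truly matter (the data-generating choices are this editor's, for illustration); note how the lasso drives some coefficients exactly to zero, performing automatic feature selection.

    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV, RidgeCV

    # 20 predictors, only 5 of which actually drive the response (illustrative)
    X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                           noise=10, random_state=1)

    ridge = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)  # alpha plays the role of lambda
    lasso = LassoCV(cv=5).fit(X, y)                     # lambda chosen by 5-fold cross-validation

    print(ridge.alpha_)            # the tuning parameter cross-validation selected
    print(sum(lasso.coef_ == 0))   # coefficients driven exactly to zero by the lasso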

Given a certain number of predictors (features), there is a calculable number of possible models that can be created with only a subset of the total predictors. An example is when you have 10 predictors, but want to find all possible models using only 2 of the 10 predictors.

Doing this, and then selecting one of the models based on the smallest test error, is known as subset selection, or sometimes as best subset selection. Note that a very useful plot for subset selection is when plotting the residual sum of squares (discussed later) for each model against the number of predictors.

When the number of predictors gets large enough, best subset selection becomes unable to deal with the huge number of possible model combinations for a given subset of predictors. In this case, another method known as stepwise selection can be used. There are two primary versions, forward and backward stepwise selection.

In forward stepwise selection, predictors are added to the model one at a time starting at zero predictors, until all of the predictors are included. Backwards stepwise selection is the opposite, and involves starting with a model including all predictors, and then removing a single predictor at each step.

The model performance is evaluated at each step in both cases. In both subset selection and stepwise selection, the test error is used to determine the best model. There are many ways to estimate test errors, which will be discussed later in this series.
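Forward stepwise selection can be sketched with scikit-learn's SequentialFeatureSelector (the estimator, dataset, and target feature count below are this editor's assumptions): predictors are added one at a time, keeping at each step the addition that scores best under cross-validation.

    from sklearn.datasets import load_diabetes
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LinearRegression

    X, y = load_diabetes(return_X_y=True)   # 10 candidate predictors

    sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=4,
                                    direction="forward", cv=5)  # "backward" also works
    sfs.fit(X, y)
    print(sfs.get_support())                # boolean mask over the selected predictors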

There is a concept that deals with high-dimensional data (i.e., a large number of features) known as the curse of dimensionality. The curse of dimensionality refers to the fact that the computational speed and memory required increase exponentially as the number of data dimensions (features) increases.

This can manifest itself as a problem where a machine learning algorithm does not scale well to higher dimensional data [11]. One way to deal with this issue is to choose a different algorithm that can scale better with the data. The other is a technique known as dimensionality reduction.

Dimensionality Reduction

Dimensionality reduction is a technique used to reduce the number of features included in the machine learning process. It can help reduce complexity, reduce computational cost, and increase machine learning algorithm computational speed. It can be thought of as a technique that transforms the original predictors to a new, smaller set of predictors, which are then used to fit a model.

Principal component analysis (PCA) was discussed previously in the context of feature selection, but is also a widely-used dimensionality reduction technique as well. It helps reduce the number of features (i.e., dimensions) by finding, separating out, and sorting the features that explain the most variance in the data in descending order. Cross-validation is a great way to determine the number of principal components to include in the model.

An example of this would be a dataset where each observation is described by ten features, but only three of the features can describe the majority of the data's variance, and are therefore adequate for building a model and generating accurate predictions.

Note that people sometimes use PCA to prevent overfitting, since fewer features implies that the model is less likely to overfit. While PCA may work in this context, it is not a good approach and is therefore not recommended. Regularization should be used to address overfitting concerns instead [8].

Model Evaluation and Performance

Assuming you are working with high-quality, unbiased, and representative data, the next most important aspects of predictive analytics and machine learning are measuring model performance, improving it if needed, and understanding potential errors that are often encountered.

We will have an introductory discussion here about model performance, improvement, and errors, but will continue with much greater detail on these topics in the next chapter.

Model performance is typically used to describe how well a model is able to make predictions on unseen data (e.g., test, but NOT training data), and there are multiple methods and metrics used to assess and gauge model performance. A key measure of model performance is to estimate the model’s test error.

The test error can be estimated either indirectly or directly. It can be estimated and adjusted indirectly by making changes that affect the training error, since the training error is a measure of overfitting (bias and/or variance) to some extent.

Recall that the more the model overfits the data (high variance), the less well the model will generalize to unseen data. Given that, the assumption is that reducing variance should improve the test error as well.

The test error can also be estimated directly by testing the model with the held out test data, and usually works best in conjunction with a resampling method such as cross-validation, which we’ll discuss later.

Estimating a model’s test error not only helps determine a model’s performance and accuracy, but is also a very powerful way to select a model too.

Improving Model Performance and Ensemble Learning

There are many ways to improve a model's performance. The quality and quantity of data used have a huge, if not the biggest, impact on model performance, but sometimes these two can't easily be changed.

Other major influencers on model performance include algorithm tuning, feature engineering, cross-validation, and ensemble methods.

Algorithm tuning refers to the process of tweaking certain values that effectively initialize and control how a machine learning algorithm learns and generates predictive models. This tuning can be used to improve performance on a separate validation data set, with performance later tested against the test dataset.

Since most algorithm tuning parameters are algorithm-specific and sometimes very complex, a detailed discussion is out of scope for this article, but note that the lambda parameter described for regularization is one such tuning parameter.

Ensemble learning, as mentioned in an earlier post, deals with combining or averaging (regression) the results from multiple learning models in order to improve predictive performance. In some cases (classification), ensemble methods can be thought of as a voting process where the majority vote wins.

Two of the most common ensemble methods are bagging (aka bootstrap aggregating) and boosting. Both are helpful with improving model performance and in reducing variance (overfitting) and bias (underfitting).

Bagging is a technique by which the training data is sampled with replacement multiple times. Each time, a new training data set is created and a model is fitted to the sampled data. The models are then combined to produce the overall model output, which can be used to measure model performance.

Boosting is a technique designed to transform a set of so-called weak learners into a single strong learner. In plain English, think of a weak learner as a model that predicts only slightly better than random guessing, and a strong learner as a model that can predict to a certain degree of accuracy better than random guessing.

While complicated, boosting basically works by iteratively creating weak models and adding them to the single strong learner. While this process happens, model accuracy is tested and then weightings are applied so that future learners focus on improving model performance for cases that were previously not well predicted.

Another very popular ensemble method is known as random forests. Random forests are essentially the combination of decision trees and bagging.
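A compact comparison of the three ensemble ideas just described, sketched with scikit-learn on synthetic data (the dataset and default settings are illustrative choices, not from the original):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, random_state=0)

    for model in (BaggingClassifier(random_state=0),           # bootstrap aggregating over trees
                  GradientBoostingClassifier(random_state=0),  # boosting: weak learners added iteratively
                  RandomForestClassifier(random_state=0)):     # bagging + randomized decision trees
        print(type(model).__name__,
              cross_val_score(model, X, y, cv=5).mean())       # cross-validated accuracy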

Kaggle is arguably the world’s most prestigious data science competition platform, and features competitions that are created and sponsored by most of the notable Silicon Valley tech companies, as well as by other very well-known corporations. Ensemble methods such as random forests and boosting have enjoyed very high success rates in winning these competitions.

Model Validation and Resampling Methods

Model validation is a very important part of the machine learning process. Validation methods consist of creating models and testing them on a validation dataset.

The resulting validation-set error provides an estimate of the test error, and is typically assessed using mean squared error (MSE) in the case of a quantitative response, and the misclassification rate in the case of a qualitative (discrete) response.

Many validation techniques are categorized as resampling methods, which involve refitting models to different samples formed from a set of training data.

Probably the most popular and noteworthy technique is called cross-validation. The key idea of cross-validation is that the model’s accuracy on the training set is optimistic, and that a better estimate comes from the model’s accuracy on the test set. The idea then is to estimate the test set accuracy while in the model training stage.

The process involves repeated splitting of the data into different training and test sets, building the model on the training set, and then evaluating it on the test set, and finally repeating and averaging the estimated errors.

In addition to model validation and helping to prevent overfitting, cross-validation can be used for feature selection, model selection, model parameter tuning, and comparing different predictors.

A popular special case of cross-validation is known as k-fold cross-validation. This technique involves selecting a number k, which represents the number of partitions of equal size that the original data is divided into. Once divided, a single partition is designated as a validation dataset (i.e., for testing the model), and the remaining k-1 data partitions are used as training data.

Note that typically the larger the chosen k, the less bias, but more variance, and vice versa. In the case of cross-validation, random sampling is done without replacement.

There is another technique that involves random sampling with replacement that is known as the bootstrap. The bootstrap technique tends to underestimate the error more than cross-validation.

Another special case is when k=n, i.e., when k equals the number of observations. In this case, the technique is known as leave-one-out cross-validation (LOOCV).
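Both k-fold cross-validation and LOOCV are one-liners in scikit-learn; a sketch (the model and dataset are this editor's picks for illustration):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    kfold = KFold(n_splits=5, shuffle=True, random_state=0)   # k = 5 partitions
    print(cross_val_score(model, X, y, cv=kfold).mean())      # average estimated accuracy

    loo = LeaveOneOut()                                       # the k = n special case
    print(cross_val_score(model, X, y, cv=loo).mean())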

Summary

In this chapter, we have discussed many concepts and techniques associated with model evaluation, validation, complexity, and improvement.

Chapter four of this series will provide a much deeper dive into concepts and metrics related to model performance evaluation and error analysis.

Stay tuned!


About the Author: Alex Castrounis founded InnoArchiTech. Sign up for the InnoArchiTech newsletter and follow InnoArchiTech on Twitter at @innoarchitech for the latest content updates.

References

  1. Wikipedia: Machine Learning
  2. Wikipedia: Supervised Learning
  3. Wikipedia: Unsupervised Learning
  4. Wikipedia: List of machine learning concepts
  5. Wikipedia: Feature Selection
  6. Wikipedia: Cross-validation
  7. Practical Machine Learning Online Course – Johns Hopkins University
  8. Machine Learning Online Course – Stanford University
  9. Statistical Learning Online Course – Stanford University
  10. Wikipedia: Regularization
  11. Wikipedia: Curse of dimensionality
  12. Wikipedia: Bagging, aka Bootstrap Aggregating
  13. Wikipedia: Boosting

Machine Learning: An In-Depth, Non-Technical Guide (Part 2)

By Alex Castrounis

Source: http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide-part-2/

Chapters

  1. Overview, goals, learning types, and algorithms
  2. Data selection, preparation, and modeling
  3. Model evaluation, validation, complexity, and improvement
  4. Model performance and error analysis
  5. Unsupervised learning, related fields, and machine learning in practice

Introduction

Welcome to the second chapter in a five-part series about machine learning.

In this chapter, we will briefly introduce model performance concepts, and then focus on the following parts of the machine learning process: data selection, preprocessing, feature selection, model selection, and model tradeoff considerations.

Model Performance Introduction

Model performance can be defined in many ways, but in general, it refers to how effectively the model is able to achieve the solution goals for a given problem (e.g., prediction, classification, anomaly detection, recommendation).

Since the goals can differ for each problem, the measure of performance can differ as well. Some common performance measures include accuracy, precision, recall, receiver operating characteristic (ROC), and so on. These will be discussed in much greater detail throughout the rest of this series.

Data Selection and Preprocessing

Some say that garbage in equals garbage out, and this is definitely the case: a predictive model counts for little if the data used to build it is non-representative, low quality, or error ridden. The quality, amount, preparation, and selection of data are critical to the success of a machine learning solution.

The first step to ensure success is to avoid selection bias. Selection bias occurs when the samples used to produce the model are not fully representative of cases that the model may be used for in the future, particularly with new and unseen data.

Data is typically messy and often contains missing values, useless values (e.g., NA), outliers, and so on. Prior to modeling and analysis, raw data needs to be parsed, cleaned, transformed, and pre-processed. This is typically referred to as data munging or data wrangling.

Missing data is often imputed, i.e., filled in or substituted with estimated values, a technique conceptually very similar to interpolation.
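For illustration, here is one simple way to impute missing values with pandas, filling each column's NaNs with that column's mean (the toy data frame is made up for the example):

    # Mean imputation with pandas (one simple option among many).
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                       "income": [50, 64, np.nan, 48]})
    print(df.fillna(df.mean()))  # NaNs replaced by each column's mean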

In addition, sometimes feature values are scaled (feature scaling) and/or standardized (normalized). The most typical method of standardizing feature data is to subtract the mean across a given feature’s values from each individual observation value, and then divide by the standard deviation of that feature’s values.

Feature scaling is used to bring the value ranges of different features onto a similar scale, both to prevent certain features from dominating models and predictions, and to avoid computational problems (speed, convergence, etc.) when running machine learning optimization algorithms.
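The standardization recipe described above, subtract the feature mean and divide by the feature standard deviation, can be sketched in a few lines of NumPy:

    # Z-score standardization: per-feature mean 0 and standard deviation 1.
    import numpy as np

    X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    print(X_std)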

Another preprocessing technique is to create dummy variables, i.e., binary indicator variables that convert qualitative variables into quantitative ones. An example is taking a color feature (e.g., green, red, and blue) and transforming it into three 0/1 columns, one per color, each indicating whether an observation has that color. This makes it possible to perform regression with qualitative features.
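A minimal sketch of this dummy coding with pandas, using a made-up color feature:

    # One-hot dummy variables for a qualitative feature.
    import pandas as pd

    df = pd.DataFrame({"color": ["green", "red", "blue", "red"]})
    print(pd.get_dummies(df, columns=["color"]))
    # -> binary columns color_blue, color_green, color_red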

Data Splitting

Recall from chapter 1 that the data used for machine learning should be split into training and test datasets, as well as an optional third validation dataset for model validation and tuning.

Choosing the size of each data set can be somewhat subjective and dependent on the overall sample size, and a full discussion is out of scope for this series. As an example, however, given only a training and test dataset, a common split is 80% training and 20% testing.

In general, more training data results in a better model and potentially higher performance, while more testing data yields a more reliable estimate of model performance and overall generalization capability.
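A minimal sketch of the 80/20 split mentioned above, using scikit-learn's train_test_split (the iris data is an illustrative stand-in):

    # 80/20 train/test split.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    print(X_train.shape, X_test.shape)  # 120 vs. 30 observations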

Feature Selection and Feature Engineering

Once you have a representative, unbiased, cleaned, and fully prepared dataset, typical next steps include feature selection and feature engineering of the training data. Note that although discussed here, both of these techniques can also be used later in the process for improving model performance.

Feature selection is the process of selecting a subset of features from which to build a predictive regression model or classifier. This is usually done for model simplification and increased interpretability, reducing training times and computational cost, and to help reduce the risk of overfitting, and thus improve model generalization.

Basic techniques for feature selection, particularly for regression problems, involve estimates of model parameters (i.e., model coefficients) and their significance, and correlation estimates amongst features. This will be discussed further in a section about parametric models.

Some advanced techniques used for feature selection (strictly speaking, feature extraction and dimensionality reduction) are principal component analysis (PCA), singular value decomposition (SVD), and linear discriminant analysis (LDA).

Principal component analysis is a statistical technique that determines which linear combinations of features (the principal components) explain, in order, the most to the least variance in the data. Singular value decomposition is a lower-level linear algebra algorithm that is used by PCA.

Linear discriminant analysis is closely related to PCA in that they’re both linear transformation techniques. PCA however is more general and is not concerned with class labels (unsupervised), whereas LDA is more specific and is concerned with class labels (supervised).
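As a small illustration of PCA in this role, here is a sketch with scikit-learn; the iris data and the choice of two components are assumptions for the example:

    # PCA: keep the components that explain the most variance.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)
    pca = PCA(n_components=2).fit(X)
    print(pca.explained_variance_ratio_)  # variance explained, most to least
    X_reduced = pca.transform(X)          # project onto the top 2 components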

Feature engineering includes feature selection as a sub-category, but also involves other aspects such as creating new features, transforming raw data into domain-specific and interpretable features, and so on.

Parametric Models and Feature Selection

Many machine learning models are a type of parametric model. A good example is the equation describing a line (i.e., a linear model), y = α + βx + ε9, which includes the slope (β), the intercept coefficient (α), and an error term (ε).

With parametric models, the coefficients of the terms are called the parameters, and are usually designated by the Greek letter beta and a subscript (e.g., β1 … βn). In regression problems, the parameters are called regression coefficients.

Many models also include an error term, indicated by the Greek letter epsilon. Simply stated, this error term is meant to account for the difference between the model’s predicted value and the actual observed value for a given set of input values.

Understanding the concept of model parameters is very important for supervised learning because machine learning differs from other techniques, in that it learns model parameters automatically. It does this by estimating the optimal set of model parameters that best explains the relationship between the response variable and the independent feature variables through optimization techniques, as discussed in chapter one.

In regression problems, a p-value is assigned to each of the estimated model parameters (regression coefficients), and this value is used to indicate the potential predictive influence that each coefficient has on the response.

Coefficients with a p-value greater than some chosen threshold, typically 0.05 or 0.10, are often not included in the model since they will most likely not help explain (predict) the response. This is one key way to perform feature selection with parametric models.

Another technique involves estimating the correlation of the features with respect to the response, and removing redundant and highly correlated features. The idea is that including only one of a pair of correlated features (the most significant) should be enough to explain the impact of both of the correlated features on the response.
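To sketch the p-value idea, the snippet below fits an ordinary least squares model with statsmodels, reads off the coefficient p-values, and keeps features below a 0.05 threshold; the synthetic data (with x3 as pure noise) is an assumption for the example:

    # p-value-based feature screening (illustrative synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    X = pd.DataFrame(rng.randn(100, 3), columns=["x1", "x2", "x3"])
    y = 2 * X["x1"] + 0.5 * X["x2"] + rng.randn(100)  # x3 carries no signal

    fit = sm.OLS(y, sm.add_constant(X)).fit()
    pvals = fit.pvalues.drop("const")
    print(list(pvals[pvals < 0.05].index))  # likely ['x1', 'x2']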

Model Selection

While the algorithm or model that you choose may not matter as much as other things discussed in this series (e.g., amount of data, feature selection, etc.), here is a list of things to take into account when choosing a model.

  • Interpretability
  • Simplicity (aka parsimony)
  • Accuracy
  • Speed (training, testing, and real-time processing)
  • Scalability

A good approach is to start with simple models and then increase model complexity as needed, and only when necessary. Generally, simplicity should be preferred unless you can achieve major accuracy gains through model selection.

Relatively simple models include simple and multiple linear regression for regression problems, and logistic and multinomial regression for classification problems.

A basic early model selection choice for supervised learning is whether to use a linear or nonlinear model. Nonlinear models best describe and predict situations where the effect of certain feature values, or of their combination, on the response is nonlinear. In practice, however, relationships are rarely truly linear.

Beyond basic linear models, variations in the response variable can also be due to interaction effects, which means that the response is dependent not only on certain individual features (main effects), but also on the combination of certain features (interaction effects). This combination of features in a model is represented by multiplying the feature values for each interaction term in the model (e.g., βx1x2) with a term coefficient.

Once interaction terms are included, the significance of the interactions in explaining the response, and whether to include them, can be determined through the usual methods such as p-value estimation. Note that there is a concept known as the hierarchy principle, which basically says that if an interaction is included in a model, the associated main effects should also be included.
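A quick sketch of generating such interaction terms mechanically, using scikit-learn's PolynomialFeatures:

    # Add the x1*x2 interaction column alongside the main effects.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    X = np.array([[1.0, 2.0], [3.0, 4.0]])
    inter = PolynomialFeatures(degree=2, interaction_only=True,
                               include_bias=False)
    print(inter.fit_transform(X))  # columns: x1, x2, x1*x2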

While linear assumptions are often good enough and can produce adequate results, most real life feature/response relationships are nonlinear, and sometimes nonlinear models are required to get an acceptable level of accuracy. In this case, there are a wide variety of models to choose from.

Nonlinear models can include different degree polynomials, step functions, piecewise polynomials, splines, local regression (aka LOESS models), and generalized additive models (GAM). Due to the technical nature of nonlinear modeling, familiarity with the above model approaches by name should suffice for the purpose of this series.

Other notable model choices include decision trees, support vector machines (SVM), and artificial neural networks (modeled after biological neural networks, an interconnected system of neurons). Decision trees can be highly interpretable, while the latter two are complex black-box methods. Decision trees involve creating a series of splits based on logical decisions, starting from the most important top-level node; visually, a decision tree looks like an upside-down tree.

Here is an example of a decision tree created by Stephen Milborrow, which shows survival of passengers on board the Titanic. The term 'sibsp' is the number of spouses or siblings aboard, and the numbers under each leaf give the probability of survival and the percentage of the total observations (i.e., people on board) that the leaf covers. So the upper right leaf indicates that females had a 73% chance of survival and represented 36% of those on board.

By Stephen Milborrow (Own work) CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html), via Wikimedia Commons
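For readers who want to try this, here is a minimal sketch of fitting and printing a small decision tree with scikit-learn; the iris data is a stand-in, and export_text requires a recent scikit-learn version:

    # Fit a shallow decision tree and print its splits.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree))  # the top split is the most important decision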

The final model selection decision discussed here is whether to leverage ensemble methods for additional performance gains. These methods combine models to produce a single consensus prediction or classification, and do so through averaging or voting techniques.

Some very common ensemble methods are bagging, boosting, and random forests. Random forests are essentially bagging applied to decision trees, with the additional element of random feature subset selection. Further discussion of these methods is out of scope for this series.
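As a closing sketch for this section, here is a random forest in scikit-learn, i.e., bagged trees plus random feature subsets, scored with cross-validation (the data and settings are illustrative):

    # A 100-tree random forest evaluated by 5-fold cross-validation.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(forest, X, y, cv=5).mean())  # consensus of 100 trees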

Model Tradeoffs

Model accuracy is determined in many ways, and will be discussed in detail later in this series. The primary measure of model accuracy comes from estimating the test error for a given model. The accuracy improvement goal of model selection is therefore to reduce the estimated test error.

It is important to note that the goal isn't to find the absolute minimal error, but rather to find the simplest model that performs well enough. There are usually diminishing returns in trying to squeeze out the very last bit of performance. Given this, your choice of modeling approach won't always be based on the one that results in the greatest degree of accuracy. Sometimes there are other important factors that must be taken into account as well, including interpretability, simplicity, speed, and scalability.

Often, it’s a tradeoff choosing whether prediction accuracy or model interpretability is more important for a given application. Artificial neural networks, support vector machines, and some ensemble methods can be used to create very accurate predictive models, but are very much of a black box except to highly specialized and technical individuals.

Black box algorithms may be preferred when predictive performance is the most important goal, and it’s not necessary to explain how the model works and makes predictions. In some cases however, model interpretability is preferred, and sometimes legally mandatory.

Here is an interpretability-driven example often seen in the financial industry. Suppose a machine learning algorithm is used to accept or reject an individual’s credit card application. If the applicant is rejected and decides to file a complaint or take legal action, the financial institution will need to explain how that decision was made. While that can be nearly impossible for a neural network or SVM system, it’s relatively straightforward for decision tree-based algorithms.

In terms of training, testing, processing, and prediction speed, some algorithms and model types take more time, and require greater computing power and memory than others. In some applications, speed and scalability are critical factors, particularly in any widely used, near real-time application (e.g., eCommerce site) where a model needs to be updated fairly regularly, and that performs predictions and/or classifications at scale on the fly.

Lastly, and as previously mentioned, model simplicity (or parsimony) should always be preferred unless there is a significant and justifiable gain in performance accuracy. Simplicity usually results in quicker, more scalable, and easier to interpret models and results.

Summary

We’ve now had a solid overview of the machine learning process from selecting data and features, through selecting appropriate models for a given problem type.

Chapter three of this series will continue with the machine learning process, and in particular will focus on model evaluation, performance, improvement, complexity, validation, and more.

Stay tuned!


About the Author: Alex Castrounis founded InnoArchiTech. Sign up for the InnoArchiTech newsletter and follow InnoArchiTech on Twitter at @innoarchitech for the latest content updates.

References

  1. Wikipedia: Machine Learning
  2. Wikipedia: Supervised Learning
  3. Wikipedia: Unsupervised Learning
  4. Wikipedia: List of machine learning concepts
  5. Wikipedia: Feature Selection
  6. Practical Machine Learning Online Course – Johns Hopkins University
  7. Machine Learning Online Course – Stanford University
  8. Statistical Learning Online Course – Stanford University
  9. Wikipedia: Simple Linear Regression
  10. Stephen Milborrow (Own work)

Machine Learning: An In-Depth Non-Technical Guide (Part 1)

Source: http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/

By Alex Castrounis

Chapters

  1. Overview, goals, learning types, and algorithms
  2. Data selection, preparation, and modeling
  3. Model evaluation, validation, complexity, and improvement
  4. Model performance and error analysis
  5. Unsupervised learning, related fields, and machine learning in practice

Introduction

Welcome! This is the first chapter of a five-part series about machine learning.

Machine learning is a very hot topic for many key reasons, chief among them that it provides the ability to automatically obtain deep insights, recognize unknown patterns, and create high performing predictive models from data, all without requiring explicit programming instructions.

Despite the popularity of the subject, machine learning’s true purpose and details are not well understood, except by very technical folks and/or data scientists.

This series is intended to be a comprehensive, in-depth, and non-technical guide to machine learning, and should be useful to everyone from business executives to machine learning practitioners. It covers virtually all aspects of machine learning (and many related fields) at a high level, and should serve as a sufficient introduction or reference to the terminology, concepts, tools, considerations, and techniques of the field.

This high level understanding is critical if ever involved in a decision-making process surrounding the usage of machine learning, how it can help achieve business and project goals, which machine learning techniques to use, potential pitfalls, and how to interpret the results.

Note that most of the topics discussed in this series are also directly applicable to fields such as predictive analytics, data mining, statistical learning, artificial intelligence, and so on.

Machine Learning Defined

The oft-quoted and widely accepted formal definition of machine learning, as stated by field pioneer Tom M. Mitchell, is:

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E

The following is my less formal way to describe machine learning.

Machine learning is a subfield of computer science, but is often also referred to as predictive analytics, or predictive modeling1. Its goal and usage is to build new and/or leverage existing algorithms to learn from data, in order to build generalizable models that give accurate predictions, or to find patterns, particularly with new and unseen similar data.

Machine Learning Process Overview

Imagine a dataset as a table, where the rows are each observation (aka measurement, data point, etc), and the columns for each observation represent the features of that observation and their values.

At the outset of a machine learning project, a dataset is usually split into two or three subsets. The minimum subsets are the training and test datasets, and often an optional third validation dataset is created as well.

Once these data subsets are created from the primary dataset, a predictive model or classifier is trained using the training data, and then the model’s predictive accuracy is determined using the test data.

As mentioned, machine learning leverages algorithms to automatically model and find patterns in data, usually with the goal of predicting some target output or response. These algorithms are heavily based on statistics and mathematical optimization.

Optimization is the process of finding the smallest or largest value (minima or maxima) of a function, often referred to as a loss or cost function in the minimization case10. One of the most popular optimization algorithms used in machine learning is called gradient descent, and another is known as the normal equation.
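To make gradient descent tangible, here is a tiny sketch minimizing the one-parameter loss f(w) = (w - 3)^2; the loss and step size are made up purely for illustration:

    # Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
    def gradient(w):
        return 2 * (w - 3)  # derivative of the loss

    w, learning_rate = 0.0, 0.1
    for _ in range(100):
        w -= learning_rate * gradient(w)  # step against the gradient
    print(round(w, 4))  # approaches 3.0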

In a nutshell, machine learning is all about automatically learning a highly accurate predictive or classifier model, or finding unknown patterns in data, by leveraging learning algorithms and optimization techniques.

Types of Learning

The primary categories of machine learning are supervised, unsupervised, and semi-supervised learning. We will focus on the first two in this article.

In supervised learning, the data contains the response variable (label) being modeled, and the goal is to predict the value or class of unseen data. Unsupervised learning involves learning from a dataset that has no label or response variable, and is therefore more about finding patterns than prediction.

As I'm a huge NFL and Chicago Bears fan, my team will help exemplify these types of learning! Suppose you have a ton of Chicago Bears data and stats dating from when the team became a chartered member of the NFL (1920) until the present (2016).

Imagine that each row of the data is essentially a team snapshot (or observation) of relevant statistics for every game since 1920. The columns in this case, and the data contained in each, represent the features (values) of the data, and may include feature data such as game date, game opponent, season wins, season losses, season ending divisional position, post-season berth (Y/N), post-season stats, and perhaps stats specific to the three phases of the game: offense, defense, and special teams.

In the supervised case, your goal may be to use this data to predict if the Bears will win or lose against a certain team during a given game, and at a given field (home or away). Keep in mind that anything can happen in football in terms of pre-game and game-time injuries, weather conditions, bad referee calls, and so on, so take this simply as an example of an application of supervised learning with a yes or no response (prediction), as opposed to determining the probability or likelihood of 'Da Bears' getting the win.

Since you have historic data of wins and losses (the response) against certain teams at certain football fields, you can leverage supervised learning to create a model to make that prediction.

Now suppose that your goal is to find patterns in the historic data and learn something that you don’t already know, or group the team in certain ways throughout history. To do so, you run an unsupervised machine learning algorithm that clusters (groups) the data automatically, and then analyze the clustering results.

With a bit of analysis, one may find that these automatically generated clusters seemingly group the team into the following example categories over time:

  • Strong defense, weak running offense, strong passing offense, weak special teams, playoff berth
  • Strong defense, strong running offense, weak passing offense, average special teams, playoff berth
  • Weak defense, strong all-around offense, strong special teams, missed the playoffs
  • and so on

An example of unsupervised cluster analysis would be to look for a potential reason why the team missed the playoffs in the third cluster above. Perhaps due to the weak defense? The Bears have traditionally been a strong defensive team, and some say that defense wins championships. Just saying…

In either case, each of the above classifications may be found to relate to a certain time frame, which one would expect. Perhaps the team was characterized by one of these groupings more than once throughout their history, and for differing periods of time.

To characterize the team in this way without machine learning techniques, one would have to pore over all historic data and stats, manually find the patterns and assign the classifications (clusters) for every year taking all data into account, and compile the information. That would definitely not be a quick and easy task.

Alternatively, you could write an explicitly coded program to pore over the data, but it would have to know what team stats to consider, what thresholds to take into account for each stat, and so forth. It would take a substantial amount of time to write the code, and a different program would need to be written for every problem needing an answer.

Or… you can employ a machine learning algorithm to do all of this automatically for you in a few seconds.

Machine Learning Goals and Outputs

Machine learning algorithms are used primarily for the following types of output:

  • Clustering (Unsupervised)
  • Two-class and multi-class classification (Supervised)
  • Regression: Univariate, Multivariate, etc. (Supervised)
  • Anomaly detection (Unsupervised and Supervised)
  • Recommendation systems (aka recommendation engine)

Specific algorithms that are used for each output type are discussed in the next section, but first, let's give a general overview of each of the above output, or problem, types.

As discussed, clustering is an unsupervised technique for discovering the composition and structure of a given set of data. It is a process of clumping data into clusters to see what groupings emerge, if any. Each cluster is characterized by a contained set of data points, and a cluster centroid. The cluster centroid is basically the mean (average) of all of the data points that the cluster contains, across all features.
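A minimal sketch of clustering and centroids with scikit-learn's k-means, on four made-up points:

    # k-means: cluster assignments and per-cluster centroids.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)           # cluster assignment per observation
    print(km.cluster_centers_)  # the mean of each cluster's points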

Classification problems involve placing a data point (aka observation) into a pre-defined class or category. Sometimes classification problems simply assign a class to an observation, and in other cases the goal is to estimate the probabilities that an observation belongs to each of the given classes.

A great example of a two-class classification is assigning the class of Spam or Ham to an incoming email, where ham just means ‘not spam’. Multi-class classification just means more than two possible classes. So in the spam example, perhaps a third class would be ‘Unknown’.

Regression is just a fancy word for saying that a model will assign a continuous value (response) to a data observation, as opposed to a discrete class. A great example of this would be predicting the closing price of the Dow Jones Industrial Average on any given day. This value could be any number, and would therefore be a perfect candidate for regression.

Note that sometimes the word regression is used in the name of an algorithm that is actually used for classification problems, or to predict a discrete categorical response (e.g., spam or ham). A good example is logistic regression, which predicts probabilities of a given discrete value.

Another problem type is anomaly detection. While we’d love to think that data is well behaved and sensible, unfortunately this is often not the case. Sometimes there are erroneous data points due to malfunctions or errors in measurement, or sometimes due to fraud. Other times it could be that anomalous measurements are indicative of a failing piece of hardware or electronics.

Sometimes anomalies are indicative of a real problem and are not easily explained, such as a manufacturing defect, and in this case, detecting anomalies provides a measure of quality control, as well as insight into whether steps taken to reduce defects have worked or not. In either case, there are times where it is beneficial to find these anomalous values, and certain machine learning algorithms can be used to do just that.

The final type of problem is addressed with a recommendation system, also called a recommendation engine. Recommendation systems are a type of information filtering system, and are intended to make recommendations in many applications, including movies, music, books, restaurants, articles, products, and so on. The two most common approaches are content-based and collaborative filtering.

Two great examples of popular recommendation engines are those offered by Netflix and Amazon. Netflix makes recommendations in order to keep viewers engaged and supplied with plenty of content to watch. In other words, to keep people using Netflix. They do this with their “Because you watched …”, “Top Picks for Alex”, and “Suggestions for you” recommendations.

Amazon does a similar thing in order to increase sales through up-selling, maintain sales through user engagement, and so on. They do this through their “Customers Who Bought This Item Also Bought”, “Recommendations for You, Alex”, “Related to Items You Viewed”, and “More Items to Consider” recommendations.

Machine Learning Algorithms

We’ve now covered the machine learning problem types and desired outputs. Now we will give a high level overview of relevant machine learning algorithms.

Here is a list of algorithms, both supervised and unsupervised, that are very popular and worth knowing about at a high level. Note that some of these algorithms will be discussed in greater depth later in this series.

Supervised Regression

  • Simple and multiple linear regression
  • Decision tree or forest regression
  • Artificial Neural networks
  • Ordinal regression
  • Poisson regression
  • Nearest neighbor methods (e.g., k-NN or k-Nearest Neighbors)

Supervised Two-class & Multi-class Classification

  • Logistic regression and multinomial regression
  • Artificial Neural networks
  • Decision trees, forests, and jungles
  • SVM (support vector machine)
  • Perceptron methods
  • Bayesian classifiers (e.g., Naive Bayes)
  • Nearest neighbor methods (e.g., k-NN or k-Nearest Neighbors)
  • One versus all multiclass

Unsupervised

  • K-means clustering
  • Hierarchical clustering

Anomaly Detection

  • Support vector machine (one class)
  • PCA (Principal component analysis)

Note that a technique that’s often used to improve model performance is to combine the results of multiple models. This approach leverages what’s known as ensemble methods, and random forests are a great example (discussed later).

If nothing else, it’s a good idea to at least familiarize yourself with the names of these popular algorithms, and have a basic idea as to the type of machine learning problem and output that they may be well suited for.

Summary

Machine learning, predictive analytics, and other related topics are very exciting and powerful fields.

While these topics can be very technical, many of the concepts involved are relatively simple to understand at a high level. In many cases, a simple understanding is all that’s required to have discussions based on machine learning problems, projects, techniques, and so on.

Chapter two of this series will provide an introduction to model performance, cover the machine learning process, and discuss model selection and associated tradeoffs in detail.

Stay tuned!


About the Author: Alex Castrounis founded InnoArchiTech. Sign up for the InnoArchiTech newsletter and follow InnoArchiTech on Twitter at @innoarchitech for the latest content updates.


References

  1. Wikipedia: Machine Learning
  2. Wikipedia: Supervised Learning
  3. Wikipedia: Unsupervised Learning
  4. Wikipedia: List of machine learning concepts
  5. A Tour of Machine Learning Algorithms – Machine Learning Mastery
  6. Common Machine Learning Algorithms – Analytics Vidhya
  7. A Tour of Machine Learning Algorithms – Data Science Central
  8. How to choose algorithms for Microsoft Azure Machine Learning
  9. Wikipedia: Gradient Descent
  10. Wikipedia: Loss Function
  11. Wikipedia: Recommender System

Scrape Google Scholar

Source: http://lernpython.de/scrape-google-scholar

Google Scholar is a useful application. It links every publication to its authors and makes it easy to access the scientific output of every researcher. Two important key indicators are the number of citations and the h-index. This short Python script shows how to extract/scrape these two parameters in Python.


To scrape Google Scholar, we first load the important libraries for this task and define a function that scrapes the h-index from a Google Scholar profile, given the link to that profile. The function returns the h-index.
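The original script did not survive into this copy, so the following is a minimal reconstruction. It assumes the profile's statistics table uses td cells of class gsc_rsb_std with the all-time h-index in the third cell (Google Scholar's markup at the time of writing); the selector and request headers are assumptions, and scraping may conflict with Google's terms of service:

    # Sketch: scrape the h-index from a Google Scholar profile URL.
    import requests
    from bs4 import BeautifulSoup

    def get_hindex(profile_url):
        """Return the all-time h-index from a Google Scholar profile page."""
        html = requests.get(profile_url,
                            headers={"User-Agent": "Mozilla/5.0"}).text
        soup = BeautifulSoup(html, "html.parser")
        cells = soup.find_all("td", class_="gsc_rsb_std")
        # Assumed stats-table order: citations, citations since,
        # h-index, h-index since, i10-index, i10-index since.
        return int(cells[2].text)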

Use Scholarly to scrape Google Scholar

In the next step we use the Python module scholarly. It has several features; the most important is that it can search the Google Scholar database by name and return the number of citations and the direct link to the Google Scholar profile. Hence, we give it a list of scientists in the field of nanopores and use it to get the number of citations and the link to each Google Scholar profile. That link is then fed to the previously defined function to return the h-index.

We save the h-index, number of citations, and researcher name into one list, and plot the two numeric parameters against each other.
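Again the code itself is missing here, so below is a hedged reconstruction of the described workflow, reusing the get_hindex function sketched above. The scholarly calls (search_author, fill, and the citedby/id attributes) reflect the module's interface at the time and may differ in current versions, and the author names are placeholders:

    # Sketch: look up authors, collect citations and h-index, and plot.
    import matplotlib.pyplot as plt
    import scholarly

    names = ["A. Researcher", "B. Scientist"]  # placeholder nanopore scientists
    results = []
    for name in names:
        author = next(scholarly.search_author(name))
        author.fill()  # fetch the full profile (assumed API)
        url = "https://scholar.google.com/citations?user=" + author.id
        results.append((name, author.citedby, get_hindex(url)))

    plt.scatter([r[1] for r in results], [r[2] for r in results])
    plt.xlabel("Number of citations")
    plt.ylabel("h-index")
    plt.show()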

The result is a plot with the number of citations on the x-axis and the h-index on the y-axis. From it we can deduce that the h-index grows with an increasing number of citations. Publications analysing citation behavior in more detail can be found here.

[Figure: h-index (y-axis) versus number of citations (x-axis) for the scraped researchers]

Master Machine Learning in Four Steps with Python

Translated by J.F. of 伯乐在线 (Jobbole), proofread by renlytime

To understand and apply machine learning techniques, you need to learn Python or R. Both are programming languages similar to C, Java, and PHP. But because Python and R are younger and further "removed" from the CPU, they are simpler. Whereas R is used only for working with data (analyzing it with machine learning, statistical algorithms, and attractive plots), Python's advantage is that it also suits many other kinds of problems. Because Python is more widely deployed (hosting websites with Django, natural language processing (NLP), accessing the APIs of sites such as Twitter and LinkedIn), and because it resembles more traditional languages like C, Python is the more popular of the two.

Four steps to learning machine learning in Python

1. First, learn the basics of Python using books, courses, and videos.

2. Then master the different modules, such as Pandas, NumPy, Matplotlib, and NLP (natural language processing) libraries, in order to process, clean, plot, and understand data.

3. Next, you must be able to collect data from the web, whether through a website API or with a scraping module such as Beautiful Soup. Web scraping lets you collect data to feed to machine learning algorithms.

4. As a final step, learn machine learning tools such as scikit-learn and apply machine learning algorithms (ML algorithms) to the scraped data.

1. Getting started with Python

A simple and fast way to learn Python is to register at codecademy.com and start programming and learning the basics of Python. Another classic route is learnpythonthehardway, a site recommended by many Python programmers. There is also an excellent PDF, A Byte of Python. The Python community has compiled a list of python resources for beginners. There is also the O'Reilly book Think Python, which can be downloaded for free. A final resource is Introduction to Python for Econometrics, Statistics and Data Analysis, which also covers the basics of Python.

2. Important modules for machine learning

The most important modules for machine learning are NumPy, Pandas, Matplotlib, and IPython. One book that covers several of these modules is Data Analysis with Open Source Tools. The free book from step 1, Introduction to Python for Econometrics, Statistics and Data Analysis, also covers NumPy, Pandas, Matplotlib, and IPython. Another resource is Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython, which also covers the most important modules. Other free resources include, for NumPy: Numerical Python, the NumPy User Guide, and Guide to NumPy; for Pandas: Pandas, Powerful Python Data Analysis Toolkit, Practical Business Python, and Intros to Pandas Data Structures; and the Matplotlib books.

Other resources:

3. Mining and scraping data from websites and via APIs

Once you understand the basics of Python and its most important modules, you must learn how to collect data from different sources, a technique also known as web scraping. Traditional sources are website text and the text data obtained via the APIs of sites such as Twitter and LinkedIn. Good books on web scraping include Mining the Social Web (free), Web Scraping with Python, and Web Scraping with Python: Collecting Data from the Modern Web.

Finally, this text data must be converted into numerical data, which is done with natural language processing (NLP) techniques; Natural Language Processing with Python and Natural Language Annotation for Machine Learning provide the relevant material. Other data include images and videos, which can be analyzed with computer vision techniques: Programming Computer Vision with Python, Programming Computer Vision with Python: Tools and algorithms for analyzing images, and Practical Python and OpenCV are the standard resources for image analysis.

The examples below are instructive and fun ones that can be implemented with basic Python commands, together with web scraping techniques.

4. Machine learning in Python

Machine learning can be divided into four groups: classification, clustering, regression, and dimensionality reduction.


"Classification", also known as supervised learning, helps classify images, for instance to recognize features or faces in a picture, or to classify users by their profiles and assign each a different score. "Clustering" occurs in the unsupervised learning setting and lets you identify groups/clusters in the data. "Regression" estimates a value from a set of parameters and can be applied, for example, to predict the best price for a house, an apartment, or a car.

The page modules, packages and techniques lists the important machine learning modules, packages, and techniques in languages such as Python, C, Scala, Java, Julia, MATLAB, Go, R, and Ruby. Among books on machine learning in Python, I particularly recommend Machine Learning in Action. Although short, it may well become a machine learning classic, as it carries on the era of "programming collective intelligence": Programming Collective Intelligence. These two books help you build machine learning on top of scraped data. Most recent machine learning publications are based on the module scikit-learn. Since all of the algorithms are already implemented in the module, machine learning becomes very simple: the only thing you have to do is tell Python which machine learning technique (ML technique) to use to analyze the data.
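As a tiny illustration of that simplicity (the model and data are arbitrary stand-ins), switching techniques really is a one-line change:

    # scikit-learn: pick a technique, fit, predict.
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier  # <- the "which technique" line

    X, y = load_iris(return_X_y=True)
    model = KNeighborsClassifier().fit(X, y)  # train
    print(model.predict(X[:3]))               # predict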

Free scikit-learn tutorials can be found on the official scikit-learn website. Further posts are available via the following links:

Books about machine learning and the Python module scikit-learn:

Books to be published in the coming months include:

Machine-learning courses and blogs

Do you want to earn a degree, join an online course, or attend offline workshops, bootcamps, or university courses? Here is a collection of links to online education sites for analytics, big data, data mining, and data science: Collection of links. Also recommended are some online courses: Coursera's machine learning course and Udacity's Data Analyst Nanodegree. There is also a list of machine learning blogs: List of frequently updated blogs.

Finally, there are excellent YouTube video courses on exploring machine learning from Jake Vanderplas and Olivier Grisel.

Machine learning theory

Want to learn the theory of machine learning? The Elements of Statistical Learning and An Introduction to Statistical Learning are the oft-cited classics. There are also two other books, Introduction to Machine Learning and A Course in Machine Learning. These links include free PDFs, so you don't have to pay! If you would rather not read these books, watch the videos: 15 hours of machine learning theory.

Machine Learning: Interpreting Logistic Regression with Elementary Mathematics

2015-11-03, by 龙心尘 and 寒小阳

Excerpted from: http://my.csdn.net/longxinchen_ml

To lower the barrier to understanding, this article tries to explain logistic regression using only the most basic elementary mathematics: fewer formulas, more figures, and intuitive accounts of what the derived formulas actually mean, in the hope that readers come away with a more intuitive understanding of logistic regression.

A plain geometric description of the logistic regression problem

Logistic regression deals with classification problems, which we can restate in plain geometric language:
there are two groups of points in space, one of circles "〇" and one of crosses "X", and we want to pick a separating boundary in the space that divides the two groups.

Note: the dimension of the separating boundary depends on the dimension of the space. In a two-dimensional plane, the boundary is a line (one-dimensional). In three-dimensional space, it is a surface in the space (two-dimensional). On a one-dimensional line, it is some point on that line. Spaces of different dimensions are discussed in detail further below.

To simplify the treatment and ease exposition, we make the following four conventions:

  1. We first consider the situation in the two-dimensional plane.
  2. We assume the two classes are linearly separable: a best straight line can be found that separates the two groups of points.
  3. A discrete variable y denotes a point's class; y takes only two possible values, with y = 1 for cross points "X" and y = 0 for circle points "〇".
  4. A point's horizontal and vertical coordinates are denoted (x1, x2).

So the problem now becomes: given the coordinates and labels (y) of the existing points, find the equation of the separating line.

How do we find the separating line of the logistic regression problem using analytic geometry?
  1. We reason backwards:
    Assume we have already found this line, then ask what its properties are. From those properties, we work backwards to the equation of the line.
  2. What properties does this line have?
    First, it separates the two classes of points. Fine, that is a tautology. ( ̄▽ ̄)"
    Second, the projections of the two classes of points onto the line's normal vector p have opposite signs: the projections of one class are all positive, and those of the other class are all negative.

    • First, this property is very useful: it can be used to distinguish the classes of the points.
    • Moreover, we normalize the normal vector: we consider only the normal vector p whose extension passes through the origin. That way, once the normal vector p is found, the separating line is uniquely determined, and the classification problem is solved.


  3. Can we handle the properties of the normal vector p even more conveniently?
    Computing the projection of each point onto the normal vector p requires knowing where p starts, and fixing the starting point is a nuisance, so we simply translate the normal vector so that its starting point lies at the origin of the coordinate system, giving a new vector p'. The projections of all points onto p' then differ from the originals by a constant.

    Suppose this constant is θ0, and that the coordinates of p' are (θ1, θ2). The projection of any point (x1, x2) in the space onto p' is θ1x1 + θ2x2; adding the constant from before gives: z = θ0 + θ1x1 + θ2x2.

Doesn't the expression above look familiar? It is exactly the part inside the parentheses of the logistic regression function!

We can then judge the class of a point x from the sign of z.

Understanding the meaning of z from a probability angle

Through the steps above, we derived a new feature z from the coordinates of a point x. So:

What is the real-world meaning of z?

First, we know that z can be positive, negative, or zero, and that z ranges all the way to positive and negative infinity.

If z is greater than 0, the point x belongs to the class y = 1; and the larger z is, the farther the point is from the separating boundary, and the more likely it belongs to the class y = 1.

Can we then interpret z as the probability P(y=1|x) (abbreviated P below) that point x belongs to the class y = 1? Clearly that is not ideal, because probabilities range from 0 to 1.

But we can massage the probability P slightly: let Q = P/(1−P), and hope to use Q as the real-world meaning of z. We find that as P varies over the interval [0, 1), Q increases monotonically over [0, +∞). The function graph is as follows (you can get this graph instantly by searching "x/(1-x)"):

But Q varying only over [0, +∞) is still not enough: we want something that varies over (−∞, +∞), and that is exactly 0 when P = 1/2. Only then does it have enough explanatory power.

Note: P = 1/2 means the point is equally likely to belong to either class, i.e., it lies exactly on the separating boundary, so its projection onto the normal vector is naturally 0.

But at P = 1/2 we have Q = 1, still some distance from Q = 0. How can a further function transform make it equal 0 there? There is a natural function, log, that satisfies exactly this requirement.
So we apply the transform R = log(Q) = log(P/(1−P)), hoping to use R as the real-world meaning of z. Its graph looks like this:

Over P in (0, 1), this function can be positive, negative, or zero; it varies monotonically over (−∞, +∞); and P = 1/2 is exactly its unique zero! It essentially satisfies our requirements perfectly.
Returning to the question at the start of this section:

"We derived a new feature z from the coordinates of point x; what exactly does z mean?"

We can now interpret z as the value obtained by transforming, in this particular way, the probability P that x belongs to the class y = 1. That is, z = log(P/(1−P)); inverting, P = 1/(1+e^(−z)). The graph is as follows:

Do these two functions, log(P/(1−P)) and 1/(1+e^(−z)), look familiar?

They are the legendary logit function and sigmoid function!
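To see the pair in action, here is a small sketch (NumPy) showing that sigmoid maps (−∞, +∞) to (0, 1) and that logit inverts it; the sample values are arbitrary:

    # logit and sigmoid are inverses of each other.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def logit(p):
        return np.log(p / (1.0 - p))

    z = np.array([-2.0, 0.0, 2.0])
    p = sigmoid(z)
    print(p)         # z = 0 maps to p = 0.5
    print(logit(p))  # recovers z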

A small addendum:

  • In probability theory, Q = P/(1−P) is called the odds. Anyone who has bet on World Cup matches knows this. The odds are the ratio of the probability that an event happens to the probability that it does not.
  • And z = log(P/(1−P)) is the log-odds.

So we have not only found the real-world meaning of z, but also the fitted equation mapping z to the probability P: P = 1/(1+e^(−z)).

With the probability P in hand, we can use this fitted equation to judge the class of a point x:

when P ≥ 1/2, we classify x as y = 1; when P < 1/2, we classify x as y = 0.

Constructing a cost function to solve for the parameter values

So far we have two ways to judge a point's class: check whether z is greater than 0, or check whether g(z) is greater than 1/2.
However, none of this is of much practical use yet:

all of the analysis above rests on the premise "assume we have already found this line", and we still have no effective way to compute the three key parameters θ0, θ1, θ2.

Are there other properties we can exploit to solve for the parameter values?

  • We missed a key property: these sample points have already been labeled y = 0 or y = 1!
  • On the one hand, we can judge a point's class from whether z > 0 or g(z) > 1/2; on the other hand, we can use the difference between the points' labeled classes and our predicted classes to evaluate how good our predictions are.
  • A function that measures the gap between the results estimated under a given set of parameters and the actual results is what is known as a cost function.
  • When the cost function is minimized, the corresponding parameters are the optimal solution we want.

Clearly, designing a good cost function is the key to handling the classification problem well, and different cost functions may lead to different results. All the more reason to design the cost function to be interpretable and grounded in the problem at hand.

To measure "the gap between the estimated results and the actual results", we first have to pin down what the "estimated result" and the "actual result" are.

  • The "actual result" is easy: it is simply y = 0 or y = 1.
  • For the "estimated result" there are two candidates; from the analysis above, we could use either z or g(z). Clearly g(z) is better, because g(z) means the probability P, which lies in [0, 1] and compares directly with the actual results {0, 1}, whereas z means the log-odds, whose range is the whole real line (−∞, +∞) and which is hard to compare with y ∈ {0, 1}.

Next we measure the "gap" between the two results.

  • The first thing that comes to mind is y − hθ(x).
    • But this only works well when y = 1. If y = 0, then y − hθ(x) = −hθ(x) is negative and hard to compare, so we use its absolute value hθ(x) instead. Combined: Cost(hθ(x), y) = 1 − hθ(x) when y = 1, and hθ(x) when y = 0.
    • But this function has a problem: it is not convenient to differentiate, which in turn makes gradient descent inconvenient.
    • Since gradient descent goes beyond elementary mathematics, we omit its explanation here.
  • So the cost function above is massaged slightly to make it easy to differentiate. The result is the standard logistic regression (cross-entropy) cost: J(θ) = −(1/m) Σ [ y·log(hθ(x)) + (1−y)·log(1−hθ(x)) ], summed over the m samples.

With the cost function fixed, what remains is mechanical computation, commonly done with gradient descent. With that, our linearly separable problem in the plane can be considered solved.
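To tie the pieces together, here is a minimal sketch (on toy, linearly separable data of our own making) that fits θ0, θ1, θ2 by gradient descent on the cost above:

    # Logistic regression by gradient descent on the cross-entropy cost.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Column 0 is the constant 1 multiplying theta0.
    X = np.array([[1, 1.0, 2.0], [1, 2.0, 1.5], [1, 6.0, 7.0], [1, 7.0, 6.5]])
    y = np.array([0.0, 0.0, 1.0, 1.0])

    theta, lr = np.zeros(3), 0.1
    for _ in range(5000):
        p = sigmoid(X @ theta)                # h_theta(x) for every sample
        theta -= lr * X.T @ (p - y) / len(y)  # gradient of the cost
    print(theta)
    print(sigmoid(X @ theta).round(2))        # probabilities close to the labels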

Recasting our reasoning as a sequence of geometric transformations

Looking back at our reasoning, we have in fact been repeatedly applying coordinate transformations to the points:

  • Step 1 maps the points distributed over the whole two-dimensional plane onto a one-dimensional line by linear projection, giving points x(z).
  • Step 2 maps the points x(z), distributed over the whole one-dimensional line, into the one-dimensional segment [0, 1] via the sigmoid function, giving points x(g(z)).
  • Step 3 combines the coordinates of all these points into one value through the cost function; when that value is minimal, the corresponding parameters are the ideal values we need.

Simple problems that are not linearly separable
  1. From the analysis above, the crucial step is step 1; we could project that way only because we assumed the point set is linearly separable. But what if the separating boundary is a circle? Consider the following situation.
  2. We again reason backwards:
    • By inspection, a circular separating boundary looks reasonable.
    • Assume we have already found this circle, then ask what its properties are; from those properties, work backwards to the circle's equation.
  3. We can use this property:
    • points inside the circle are closer to the center than the radius, and points outside the circle are farther from the center than the radius;
    • suppose the circle has radius r; the distance from any point in the space to the origin is √(x1² + x2²);
    • letting z = x1² + x2² − r², we can judge the class of a point x from the sign of z;
    • then letting g(z) = 1/(1+e^(−z)), we can continue to rely on our earlier logistic regression machinery to handle and interpret the problem.
  4. Recasting this reasoning, again, as geometric transformations:
    • Step 1 maps the points distributed over the whole two-dimensional plane onto a one-dimensional line by some mapping, giving points x(z).
    • Step 2 maps the points x(z), distributed over a one-dimensional ray, into the one-dimensional segment [0, 1] via the sigmoid function, giving points x(g(z)).
    • Step 3 combines the coordinates of all these points into one value v through the cost function; when v is minimal, the corresponding parameters are the ideal values we need.

Recasting our analysis from a feature-processing perspective

In fact, the process of data mining can also be understood as a process of feature processing; a typical data mining algorithm is the result of fixing some mature feature-processing pipeline in place.
For the classification problem handled by logistic regression, the features we are given are the coordinates of the points, and our goal is to decide whether each point belongs to the class y = 0 or y = 1. The ideal approach is to apply some function to the coordinates to obtain one (or several) new feature(s) z, and to classify each sample by whether z is greater than 0.

Abstracting one step further from our reasoning about the nonlinearly separable problem in the previous section, our approach is really:

  • Step 1: apply some function to a point's coordinates to obtain a new, log-odds-like feature z.
  • Step 2: pass the feature z through the sigmoid function to obtain a new feature q = g(z).
  • Step 3: combine the features q of all points into one value through the cost function; when that value is minimal, the corresponding parameters (such as r) are the ideal values we need.

Complex problems that are not linearly separable

From the analysis above, the crucial step is step 1: how to design the transformation function. We now consider the case where the separating boundary is an extremely irregular curve.

We again reason backwards:

  • From observation or other prior knowledge (or a wild guess with no observation at all), we can assume the separating boundary is some 6th-degree curve (the assumed curve equation can be made arbitrarily complex to cover different situations).
  • Step 1: apply some function to the point's coordinates to obtain a new feature z, and assume z is a kind of log-odds, classifying each sample by whether z is greater than 0.
  • Step 2: map the feature z through the sigmoid function to a new feature q = g(z).
  • Step 3: combine the features q of all samples into one value through the logistic regression cost function; when that value is minimal, the corresponding parameters are the ideal values we need. Correspondingly, the separating boundary is simply the equation z = 0, i.e., the set of points whose log-odds are zero.

Logistic regression in higher dimensions

All the problems considered so far involve classification in the two-dimensional plane, but classification in higher dimensions is analogous.

Samples in a high-dimensional space simply have more feature coordinates; a point x in four-dimensional space, for example, has coordinates (x1, x2, x3, x4). Applying the feature-processing view above directly, we just apply a function with more parameters to the coordinates to obtain a new feature z, assume z is a kind of log-odds, and classify each sample by whether z is greater than 0.

Moreover, if the data are linearly separable in high dimensions, there is an even more direct intuition:

  • In three-dimensional space, the separating boundary is a two-dimensional plane in the space. The projections of the two classes of points onto that plane's normal vector p have opposite signs: one class's projections are all positive, the other's all negative.
  • In a high-dimensional space, the separating boundary is a hyperplane in that space. The projections of the two classes of points onto the hyperplane's normal vector p likewise have opposite signs: one class's projections are all positive, the other's all negative.
  • As a special case, on a one-dimensional line, the separating boundary is some point p on the line: one class lies in the positive direction of p, the other in the negative direction. The coordinates of these points on the line can naturally be read as something like log-odds. One-dimensional classification is thus what every higher-dimensional problem reduces to after projecting onto the normal vector; it is the foundation of all logistic regression problems.

Multiclass logistic regression

All the problems considered so far are binary, essentially true/false questions. But how do we use logistic regression for multiclass problems, i.e., multiple-choice questions?

The basic idea is still binary classification: turn them into true/false questions.

Suppose you have a three-way choice with options A, B, and C. First find the separating boundary between A and B∪C ("∪" is the union symbol), then the boundary between B and A∪C, and then between C and A∪B.

This yields the probabilities of belonging to each of the classes A, B, and C; comparing them, we output the class with the largest probability.
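A minimal sketch of this one-vs-rest scheme with scikit-learn, where the three-class iris data stands in for the A/B/C example:

    # One-vs-rest: one "X vs. everything else" boundary per class.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    X, y = load_iris(return_X_y=True)
    ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
    print(ovr.predict_proba(X[:2]))  # per-class probabilities; take the max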

Summary

The analytical approach of this article: reasoning backwards

Draw a picture, look at the data, spot (or guess) a pattern, assume the pattern holds, express it mathematically, and solve for the corresponding mathematical expression.
This approach is typical, and common in the data mining process.

Two perspectives: geometric transformation and feature processing.

  1. In brief:
    • The geometric-transformation perspective: map the high-dimensional space to a one-dimensional space → map the one-dimensional space to the interval [0, 1] → map the interval [0, 1] to a single value, then solve for the optimum.
    • The feature-processing perspective: a feature function yields a single feature value z → the sigmoid function yields a probability → the cost function yields a cost estimate, then solve for the optimum.
  2. First, it should be said that in logistic regression these two perspectives run in parallel rather than one containing the other; they are two facets of the same mathematical process.
    • For example, when we later handled the complex nonlinearly separable problem, we appeared to use only the feature-processing view. In fact, a complex nonlinear separating boundary can also be mapped into a higher-dimensional space where it becomes linearly separable. In SVMs, what certain kernel functions do is very similar; this will be explained in more detail in our forthcoming SVM series.
  3. In a concrete analysis, either perspective works, and each has its advantages.
    • For example, the author personally prefers the geometric-transformation view, which makes the core process of logistic regression easy to remember: a few pictures suffice, with all the relevant information condensed into the figures with unusual clarity.
    • At the same time, the feature-processing view helps you think about which features you have in hand and how to process them. This is really the core perspective of data mining. As theoretical knowledge and work experience accumulate, you will find more and more that once you have an unbiased, untainted dataset that has been cleaned, feature processing is the core of the entire data mining process: how to collect features, how to recognize them, which to keep, which to discard, how to evaluate different features... these steps have a decisive influence on your algorithm's results, and they are exceedingly subtle. This is a vast feature-engineering effort with an enormous amount of content, which we will discuss in dedicated articles later in this series.
    • In general, the geometric-transformation view is more intuitive and concrete, while the feature-processing view is more abstract and big-picture. In practice, mastering the internal relationship between the two views and the rules for switching between them, and analyzing with both, will give you a richer and deeper understanding of the whole data mining process.
    • To contrast the two perspectives more compactly, we have prepared a chart for readers' reference.

Original article: http://blog.csdn.net/longxinchen_ml/article/details/49284391

Cover image source: www.taopic.com

About the authors:

龙心尘 and 寒小阳: both work on applications of machine learning / data mining, and love machine learning / data mining.

"We are a group of friends who love machine learning and enjoy exchanging and sharing. We hope to exchange machine-learning knowledge through the 'ML学分计划' (ML credits program) and to meet more friends. Everyone is welcome to join our discussion group for resources, exchange, and sharing."

Contact:

龙心尘 johnnygong.ml@gmail.com

寒小阳 hanxiaoyang.ml@gmail.com