At the NeurIPS conference held last week in Vancouver, Canada, machine learning took center stage. Roughly 13,000 researchers from around the world gathered to discuss topics such as neuroscience, how to interpret the outputs of neural networks, and how AI can help solve major real-world problems.
During the conference, Google AI lead Jeff Dean sat down with VentureBeat for an interview and shared his views on machine learning trends for 2020. Dean believes that in 2020 the field will see major breakthroughs in multitask learning and multimodal learning, and that newly emerging devices will let machine learning models work more effectively.
Excerpts from the original English interview follow:
1. On AI chips
VentureBeat: What do you think are some of the things that, in a post-Moore's Law world, people are going to have to keep in mind?
Jeff Dean: Well I think one thing that’s been shown to be pretty effective is specialization of chips to do certain kinds of computation that you want to do that are not completely general purpose, like a general-purpose CPU. So we’ve seen a lot of benefit from more restricted computational models, like GPUs or even TPUs, which are more restricted but really designed around what ML computations need to do. And that actually gets you a fair amount of performance advantage, relative to general-purpose CPUs. And so you’re then not getting the great increases we used to get in sort of the general fabrication process improving year-over-year substantially. But we are getting significant architectural advantages by specialization.
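Dean's point about restricted computational models can be made concrete: dense ML workloads are dominated by matrix multiplies, whose arithmetic intensity (FLOPs per byte of memory traffic) grows with problem size, which is exactly the regime accelerators like GPUs and TPUs target. A minimal back-of-the-envelope sketch (the sizes are illustrative assumptions, not figures for any particular chip):

```python
def matmul_stats(n: int):
    """FLOPs and minimum memory traffic for an n x n x n float32 matmul."""
    flops = 2 * n**3               # one multiply + one add per inner-product step
    bytes_moved = 3 * n**2 * 4     # read A, read B, write C (4 bytes each)
    return flops, bytes_moved, flops / bytes_moved  # arithmetic intensity

for n in (64, 512, 4096):
    flops, nbytes, ai = matmul_stats(n)
    print(f"n={n:5d}  FLOPs={flops:.2e}  bytes={nbytes:.2e}  FLOPs/byte={ai:.1f}")
```

The intensity works out to n/6 FLOPs per byte, so larger matrices spend proportionally more time on arithmetic than on data movement, rewarding hardware that packs in multiply-accumulate units rather than general-purpose control logic.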
2. On machine learning
VentureBeat:You also got a little into the use of machine learning for the creation of machine learning hardware. Can you talk more about that?
Jeff Dean: Basically, right now in the design process you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over. It’s a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design rules of whatever fabrication process you’re using.
So it turns out that we have early evidence in some of our work that we can use machine learning to do much more automated placement and routing. And we can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip.
(Dean added that on some chips Google has been experimenting with internally, this approach has already produced good results.)
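The "game of ASIC placement" framing can be illustrated with a toy: placement is a sequence of moves, each assigning one block to a grid slot, and a layout is scored by half-perimeter wirelength (HPWL), a standard objective placers minimize. The sketch below uses a greedy heuristic over an invented four-block netlist; the actual work Dean describes uses a learned model in place of this heuristic, so everything here is illustrative:

```python
from itertools import product

# Hypothetical netlist: each net is a group of blocks that must be wired together.
BLOCKS = ["cpu", "cache", "dma", "phy"]
NETS = [("cpu", "cache"), ("cpu", "dma"), ("dma", "phy")]
GRID = list(product(range(2), range(2)))  # a tiny 2x2 placement grid

def hpwl(placement):
    """Half-perimeter wirelength, summed over fully placed nets."""
    total = 0
    for net in NETS:
        pts = [placement[b] for b in net if b in placement]
        if len(pts) == len(net):
            xs, ys = zip(*pts)
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_place():
    placement = {}
    for block in BLOCKS:  # one "move" of the placement game per block
        free = [s for s in GRID if s not in placement.values()]
        # pick the free slot that keeps wirelength-so-far smallest
        placement[block] = min(free, key=lambda s: hpwl({**placement, block: s}))
    return placement

layout = greedy_place()
print(layout, "HPWL =", hpwl(layout))
```

A learned placer replaces the `min(...)` step with a policy trained to anticipate how early moves constrain later ones, which is what lets it beat myopic heuristics like this one.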
3. On Google's challenges
VentureBeat: What do you feel are some of the technical or ethical challenges for Google in the year ahead?
Jeff Dean:In terms of AI or ML, we’ve done a pretty reasonable job of getting a process in place by which we look at how we’re using machine learning in different product applications and areas consistent with the AI principles. That process has gotten better-tuned and oiled with things like model cards and things like that. I’m really happy to see those kinds of things. So I think those are good and emblematic of what we should be doing as a community.
And then I think in the areas of many of the principles, there [are] real open research directions. Like, we have kind of the best known practices for helping with fairness and bias in machine learning models, or safety or privacy. But those are by no means solved problems, so we need to continue to do longer-term research in these areas to progress the state of the art while we currently apply the best known state-of-the-art techniques to what we do in an applied setting.
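The model cards Dean mentions are short structured summaries of a model's intended use, evaluation, and known limitations. A minimal sketch of such a record (the field names are an illustrative assumption loosely following the published Model Cards proposal, not any specific Google template, and the example values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card record: what the model is for, and where it breaks."""
    name: str
    intended_use: str
    training_data: str
    metrics: dict                       # metric name -> value, ideally per subgroup
    limitations: list = field(default_factory=list)

    def render(self) -> str:
        lines = [f"# Model card: {self.name}",
                 f"Intended use: {self.intended_use}",
                 f"Training data: {self.training_data}"]
        lines += [f"- {m}: {v}" for m, v in self.metrics.items()]
        lines += [f"! Limitation: {lim}" for lim in self.limitations]
        return "\n".join(lines)

card = ModelCard(
    name="toy-sentiment-v1",
    intended_use="Research demos only; not for moderation decisions.",
    training_data="Public movie reviews (English).",
    metrics={"accuracy (overall)": 0.91, "accuracy (non-English)": 0.55},
    limitations=["Degrades sharply on non-English text."],
)
print(card.render())
```

Reporting metrics sliced by subgroup, as in the invented example above, is what makes such a card useful for the fairness reviews Dean describes: an overall accuracy number alone would hide the non-English gap.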
4. On AI trends
VentureBeat: What are some of the trends you expect to emerge, or milestones you think may be surpassed in 2020 in AI?
Jeff Dean:I think we’ll see much more multitask learning and multimodal learning, of sort of larger scales than has been previously tackled. I think that’ll be pretty interesting.
And I think there’s going to be a continued trend to getting more interesting on-device models — or sort of consumer devices, like phones or whatever — to work more effectively.
I think obviously work related to AI principles is going to be important. We’re a big enough research organization that we actually have lots of different thrusts we’re doing, so it’s hard to call out just one. But I think in general [we’ll be] progressing the state of the art, doing basic fundamental research to advance our capabilities in lots of important areas we’re looking at, like NLP or language models or vision or multimodal things. But also then collaborating with our colleagues and product teams to get some of the research that is ready for product application to allow them to build interesting features and products. And [we’ll be] doing kind of new things that Google doesn’t currently have products in but are sort of interesting applications of ML, like the chip design work we’ve been doing.
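Making models "work more effectively" on consumer devices usually comes down to fitting them into tight memory and compute budgets, for example via 8-bit weight quantization. A minimal NumPy sketch of symmetric int8 quantization (a generic illustration of the idea, not any production on-device pipeline):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in layer weights
q, scale = quantize_int8(w)

print("float32:", w.nbytes, "bytes  ->  int8:", q.nbytes, "bytes")
print("max reconstruction error:", float(np.abs(w - dequantize(q, scale)).max()))
```

The storage drops 4x while the per-weight error stays below one quantization step, which is the basic trade on-device deployments make; real toolchains add calibration and quantized kernels on top of this idea.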
Link to the original English interview: