
Ian Goodfellow Summarizes Google's Fruitful ICLR 2017


Leifeng.com reports: Ian Goodfellow of the Google Brain team published a post today on the Google Research blog summarizing Google's academic contributions at ICLR 2017. Leifeng.com's full translation follows; reproduction without permission is prohibited.

This week, the Fifth International Conference on Learning Representations (ICLR 2017) is being held in Toulon, France. The conference focuses on how machine learning can learn meaningful and useful representations from data. ICLR consists of a conference track and a workshop track, inviting researchers with accepted oral and poster presentations to share work spanning deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and non-convex optimization.

At the crest of the wave in neural networks and deep learning, Google focuses on both theory and practice, and is committed to developing learning methods that understand and generalize. As a Platinum Sponsor of ICLR 2017, Google has more than 50 researchers attending the conference (most of them members of the Google Brain team and Google Research, Europe), contributing to and learning from a richer forum for academic exchange by presenting papers and posters on site. Google researchers are also a mainstay of the workshops and the organizing committee.

If you are at ICLR 2017, we hope you will stop by our booth and chat with our researchers about how to solve interesting problems for billions of people.

Below are the papers Google presented at ICLR 2017 (Google researchers shown in bold):

Area Chairs

George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs (previously profiled by Leifeng.com)

Hugo Larochelle, Tara Sainath

Contributed Talks

Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)

Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)

Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

Neural Architecture Search with Reinforcement Learning

Barret Zoph, Quoc Le

Posters

Adversarial Machine Learning at Scale

Alexey Kurakin, Ian J. Goodfellow†, Samy Bengio
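
This line of work scales up adversarial training built on the fast gradient sign method (FGSM). As a rough illustration of that underlying attack, here is a toy sketch using a logistic-regression model of our own construction, not the paper's ImageNet-scale setup:

```python
import numpy as np

# Toy sketch of the fast gradient sign method (FGSM): perturb the input
# by a small step in the sign of the loss gradient with respect to x.
# The logistic-regression model is an illustrative stand-in.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Return x perturbed by eps in the sign of the loss gradient."""
    p = sigmoid(x @ w + b)            # model's predicted probability
    grad_x = (p - y) * w              # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)  # one signed-gradient step

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
print("clean prob:", sigmoid(x @ w + b))
print("adv prob:  ", sigmoid(x_adv @ w + b))
```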

Capacity and Trainability in Recurrent Neural Networks

Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

Improving Policy Gradient by Exploring Under-Appreciated Rewards

Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

Unrolled Generative Adversarial Networks

Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

Categorical Reparameterization with Gumbel-Softmax

Eric Jang, Shixiang (Shane) Gu*, Ben Poole*
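
For readers unfamiliar with the technique in the title, the following minimal sketch shows Gumbel-Softmax sampling, a differentiable relaxation of drawing a one-hot sample from a categorical distribution; the temperature and logits below are illustrative values, not the paper's settings:

```python
import numpy as np

# Minimal sketch of Gumbel-Softmax sampling: add Gumbel noise to the
# logits, divide by a temperature tau, and apply a softmax. Lower tau
# pushes the relaxed sample closer to a discrete one-hot vector.

def gumbel_softmax(logits, tau, rng):
    """Draw one relaxed one-hot sample from a categorical distribution."""
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))          # standard Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.3, 0.6]))
print(gumbel_softmax(logits, tau=0.5, rng=rng))
```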

Decomposing Motion and Content for Natural Video Sequence Prediction

Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

Density Estimation Using Real NVP

Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

Latent Sequence Decompositions

William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

Learning a Natural Language Interface with Neural Programmer

Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

Deep Information Propagation

Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

Identity Matters in Deep Learning

Moritz Hardt, Tengyu Ma

A Learned Representation For Artistic Style

Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

Adversarial Training Methods for Semi-Supervised Text Classification

Takeru Miyato, Andrew M. Dai, Ian Goodfellow†

HyperNetworks

David Ha, Andrew Dai, Quoc V. Le

Learning to Remember Rare Events

Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Workshop Track

Particle Value Functions

Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

Neural Combinatorial Optimization with Reinforcement Learning

Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

Short and Deep: Sketching and Neural Networks

Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

Explaining the Learning Dynamics of Direct Feedback Alignment

Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

Training a Subsampling Mechanism in Expectation

Colin Raffel, Dieterich Lawson

Tuning Recurrent Neural Networks with Reinforcement Learning

Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models

George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

Adversarial Examples in the Physical World

Alexey Kurakin, Ian Goodfellow†, Samy Bengio

Regularizing Neural Networks by Penalizing Confident Output Distributions

Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton
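
The confidence penalty named in the title subtracts a multiple of the output entropy from the cross-entropy loss, so high-entropy (less confident) predictions earn a larger loss reduction. A minimal sketch, with an illustrative penalty weight beta not taken from the paper:

```python
import numpy as np

# Sketch of a confidence penalty: loss = cross-entropy - beta * entropy.
# Confident (low-entropy) outputs receive a smaller entropy bonus, so
# training is nudged away from overconfident predictions.

def penalized_loss(probs, target_idx, beta=0.1):
    ce = -np.log(probs[target_idx])           # standard cross-entropy
    entropy = -np.sum(probs * np.log(probs))  # entropy of the prediction
    return ce - beta * entropy                # high entropy -> bigger bonus

# Compare the entropy bonus for a confident vs. a hedged prediction:
confident = np.array([0.98, 0.01, 0.01])
hedged = np.array([0.70, 0.15, 0.15])
for p in (confident, hedged):
    h = -np.sum(p * np.log(p))
    print("entropy:", round(h, 3), "loss:", round(penalized_loss(p, 0), 3))
```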

Unsupervised Perceptual Rewards for Imitation Learning

Pierre Sermanet, Kelvin Xu, Sergey Levine

Changing Model Behavior at Test-time Using Reinforcement Learning

Augustus Odena, Dieterich Lawson, Christopher Olah

* Work performed while at Google

† Work performed while at OpenAI

For more details, see the original post at research.googleblog.com.

This article is copyright Leifeng.com; reproduction without authorization is prohibited. See the reprint guidelines for details.
