Almost negating existing peers' results, this Google machine learning study won an ICML 2019 Best Paper award

www.tmtpost.com/4001006.html

Image Source @视觉中国


Titanium Media note: This article comes from the WeChat public account Quantum Bit (ID: QbitAI), written by Xiaocha, Chestnut, and Annie; Titanium Media publishes it with authorization.

The ICML 2019 best papers are out!

This year, a total of 3,424 papers were submitted to the annual International Conference on Machine Learning, of which 774 were accepted. Two papers stood out from the thousands of submissions to become ICML 2019 best papers.

""

This wise and courageous study almost completely denied the existing peers' results and proved that Hinton's previous views were problematic:

The other is "Convergence Rate of Sparse Gaussian Process Regression" by the three researchers at Cambridge University.

Let's take a closer look at this year's best research:

Best Paper 1: Disentangled representations can't be learned without supervision

To sum it up in one sentence: a team from Google Brain, ETH Zurich, and the Max Planck Institute tested 12,000 models and raised serious questions about existing research on unsupervised disentangled representation learning.

Understanding high-dimensional data and distilling that knowledge into useful representations in an unsupervised way is an important challenge for deep learning.

One way is to use a disentangled representation:

The model captures a set of mutually independent factors, so that if one factor changes, the others are unaffected.
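As a rough, hypothetical illustration (not from the paper), a disentangled code gives each generative factor its own latent dimension, so changing one dimension leaves every other factor untouched:

```python
# Hypothetical toy example (not from the paper): a scene described by two
# independent generative factors, with a latent code that is disentangled
# because each factor is controlled by exactly one latent dimension.

def render(latent):
    """Decode a 2-d integer latent vector into (shape, color) factors."""
    shapes = ["circle", "square", "triangle"]
    colors = ["red", "green", "blue"]
    return shapes[latent[0] % 3], colors[latent[1] % 3]

print(render([0, 0]))  # ('circle', 'red')
print(render([1, 0]))  # ('square', 'red')   -- only the shape changed
print(render([0, 2]))  # ('circle', 'blue')  -- only the color changed
```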

If successful, this approach could let real-world machine learning systems, whether robots or self-driving cars, cope with scenes never seen during training.

However, recent research on unsupervised disentangled representation learning makes it hard to tell how well these methods actually work, or how large their limitations are.

The Google AI team carried out a large-scale evaluation of various recent results. The findings pose serious challenges to existing research and offer several suggestions for future work on disentangled representation learning.

How large was this large-scale evaluation? The Google team trained 12,000 models, covering the most important current methods as well as the evaluation metrics.

Importantly, the code used in the evaluation, together with 10,000 pretrained models, has been released.

Together they make up a large library called disentanglement_lib, letting later researchers easily stand on the shoulders of their predecessors.

After this massive test, Google identified two major issues:

1. There is no empirical evidence that unsupervised methods can reliably learn disentangled representations, because random seeds and hyperparameters appear to matter more than the choice of model.

That is to say, even if a large number of models are trained and some of them do learn disentangled representations, it is hard to identify those models without looking at ground-truth labels.

In addition, good hyperparameter values do not transfer easily across multiple datasets.
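What does a sweep like that look like? Below is a minimal, hypothetical sketch of the protocol (the function names are placeholders, not the actual disentanglement_lib API): train the same few model families under many random seeds and hyperparameters, and note that scoring disentanglement requires ground-truth factor labels.

```python
import itertools
import random

# Hypothetical placeholders -- stand-ins for a real training pipeline and a
# real disentanglement metric; these are NOT the disentanglement_lib API.
def train_unsupervised_model(family, seed, beta):
    """Pretend to train a VAE-style model and return an opaque handle."""
    return {"family": family, "seed": seed, "beta": beta}

def disentanglement_score(model, ground_truth_factors):
    """Scoring needs ground-truth factor labels -- the crux of finding 1."""
    rng = random.Random(hash((model["family"], model["seed"], model["beta"])))
    return rng.random()  # placeholder for a metric such as MIG

families = ["beta-VAE", "FactorVAE", "DIP-VAE"]   # model choices
seeds = range(5)                                  # random seeds
betas = [1.0, 4.0, 16.0]                          # one swept hyperparameter

results = []
for family, seed, beta in itertools.product(families, seeds, betas):
    model = train_unsupervised_model(family, seed, beta)
    score = disentanglement_score(model, ground_truth_factors="factor labels")
    results.append((family, seed, beta, score))

# The paper's point: the spread of scores across seeds and hyperparameters
# within a single family can rival the spread across families, and picking
# the good runs required the ground-truth labels passed in above.
```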

The Google team said that these results are consistent with their theorem:

Without inductive biases on both the dataset and the model, it is impossible to learn disentangled representations with an unsupervised method.

In other words, you must build assumptions into the dataset and the model.
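Roughly speaking (a hedged paraphrase, not the paper's exact statement), the theorem says that observations alone cannot distinguish a disentangled latent code from an entangled one:

```latex
% Hedged paraphrase of the impossibility argument, not the paper's exact wording.
Let $p(\mathbf{z}) = \prod_{i} p(z_i)$ be a latent prior with independent
factors and $p(\mathbf{x} \mid \mathbf{z})$ the generative model. Then there
exists a bijection $f$ with
\[
  p\bigl(f(\mathbf{z})\bigr) = p(\mathbf{z}),
\]
yet every coordinate of $\hat{\mathbf{z}} = f(\mathbf{z})$ depends on every
coordinate of $\mathbf{z}$. Since the induced data distribution
$p(\mathbf{x})$ is identical under either code, no purely unsupervised method
can tell the disentangled representation apart from the fully entangled one.
```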

2. For the models and datasets in the evaluation, it was not confirmed that disentangled representations help downstream tasks; for example, there is no evidence that models learn downstream tasks with less data when disentangled representations are used.

The advice to future researchers is:

1. Given the theoretical results, unsupervised learning of disentangled representations is impossible without inductive biases. Future research should explicitly describe its inductive biases, as well as any implicit or explicit supervision.

2. Finding inductive biases that usefully guide the selection of unsupervised models across datasets is a key open problem.

3. The concrete advantages of learning disentangled representations should be demonstrated.

4. Experiments should use reproducible settings that cover a variety of datasets.

Incidentally, this study had previously been accepted to an ICLR 2019 workshop, but it ultimately became an ICML best paper.

Best Paper 2: Rates of Convergence for Sparse Variational Gaussian Process Regression

ICML's second best paper this year is from Cambridge University and machine learning platform Prowler.io.

Excellent variational approximations to the Gaussian process posterior have been developed. When the dataset size is N, exact inference has a computational complexity of O(N³); these approximations reduce the cost to O(NM²), where M is a number far smaller than N.

Although the computational cost appears linear in N, the true complexity of the algorithm depends on how M must grow to guarantee a given approximation quality.

This paper addresses the question by characterizing the behavior of an upper bound on the KL divergence (relative entropy) to the posterior. The researchers show that the KL divergence can be made arbitrarily small even when M grows more slowly than N. As a special case, for regression with D-dimensional normally distributed inputs and the commonly used squared-exponential kernel, M = O(log^D N) suffices to ensure convergence.
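In symbols, and under the assumptions stated above, the complexity and convergence claims can be summarized as follows (a condensed restatement, not a new result):

```latex
% Condensed restatement of the complexity and convergence claims above.
\[
  \underbrace{\mathcal{O}(N^{3})}_{\text{exact GP regression}}
  \;\longrightarrow\;
  \underbrace{\mathcal{O}(N M^{2})}_{\text{sparse variational approximation}},
  \qquad M \ll N .
\]
% For the squared-exponential kernel with $D$-dimensional, normally
% distributed inputs, the paper shows that
\[
  M = \mathcal{O}\!\bigl(\log^{D} N\bigr)
\]
% inducing points suffice for the KL divergence to the true posterior to
% become arbitrarily small, giving a near-linear total cost of
% $\mathcal{O}\bigl(N \log^{2D} N\bigr)$.
```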

The results show that as datasets grow, Gaussian process posteriors can in fact be approximated cheaply, and they provide a concrete rule for how to increase M in continual learning settings.

The researchers prove that a bound on the KL divergence between the sparse variational GP regression approximation and the exact GP regression posterior depends only on the decay of the eigenvalues of the prior kernel's covariance operator.

This bound shows that smooth kernels with training data concentrated in a small region allow high-quality, very sparse approximations. Even with M far smaller than N, truly sparse nonparametric inference still provides reliable estimates of the marginal likelihood and of pointwise posteriors.

The authors conclude by noting that extending the analysis to models with non-conjugate likelihoods, in particular the additional error introduced by sparsity in the framework of Hensman et al., is a promising direction for future research.

The lead author, David Burt, is a Ph.D. student in Information Engineering at the University of Cambridge. His main research areas are Bayesian nonparametrics and approximate inference.

One of the authors, Mark van der Wilk, is a researcher at Prowler.io. He is also a Ph.D. student in machine learning at the University of Cambridge. His main areas of research are Bayesian reasoning, reinforcement learning, and Gaussian process models.

7 best paper nominations

In addition to the two best papers, seven papers received best paper nominations:

1. Analogies Explained: Towards Understanding Word Embeddings (University of Edinburgh)

Paper address: https://arxiv.org/abs/1901.09813

2. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver (CMU, University of Southern California, etc.)

Paper address: https://arxiv.org/abs/1905.12149

3. A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks (Université Paris-Saclay, etc.)

Paper address: https://arxiv.org/abs/1901.06053

4. Towards A Unified Analysis of Random Fourier Features (University of Oxford, King's College London)

Paper address: https://arxiv.org/abs/1806.09178

5. Amortized Monte Carlo Integration (University of Oxford, etc.)

Paper address: http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-12.pdf

6. Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning (MIT, DeepMind, Princeton)

Paper address: https://arxiv.org/abs/1810.08647

7. Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement (University of Amsterdam, etc.)

Paper address: https://arxiv.org/abs/1903.06059

Many Chinese universities on the list

Compared with previous years, this year's ICML was particularly lively.

Bosch, the German company, scraped data from the ICML 2019 official website and compiled statistics on the paper acceptance rate, the most prolific institutions, and the most prolific individual authors. Many Chinese universities and scholars made the list.

Original statistical address: https://www.reddit.com/r/MachineLearning/comments/bn82ze/n_icml_2019_accepted_paper_stats/

This year, a total of 3,424 papers were submitted and 774 were accepted, for an acceptance rate of 22.6%. In 2018, ICML received 2,473 submissions, of which 621 were accepted, an acceptance rate of 25%.

Compared with last year, the number of papers submitted this year has increased a lot, but the acceptance rate has decreased.

So, among the many submitting institutions, which contributed the most?

Bosch counted the institutions behind the accepted papers, ranking them by the total number of papers each institution contributed. The final statistics are as follows:

In the chart above, red indicates papers whose first author belongs to the institution, and green indicates papers whose last author does.

The results show that the technology giant Google contributed the most, with MIT second and UC Berkeley third.

Chinese universities and companies including Tsinghua University, Peking University, Nanjing University, the Chinese University of Hong Kong, Shanghai Jiao Tong University, and Alibaba are also on the list.

""

Of the accepted papers, 452 (58.4%) came from purely academic research institutions, 60 (7.8%) from purely industrial research institutions, and 262 (33.9%) from collaborations spanning both academia and industry.

""

Among all the contributing authors, who contributed the most papers? Bosch counted this too.

The results show that UC Berkeley's machine learning heavyweight Michael Jordan participated in the largest number of papers, with EPFL (the Swiss Federal Institute of Technology in Lausanne) professor Volkan Cevher second and UC Berkeley's Sergey Levine third.

A number of Chinese scholars also performed well: Jun Zhu, a professor in Tsinghua University's Department of Computer Science and Technology, Tie-Yan Liu of Microsoft Research Asia, and Mingsheng Long of Tsinghua University's School of Software, among others, each published four papers at ICML 2019.

Portal

Finally, attach the official website of this year's ICML 2019 conference: https://icml.cc/

For more exciting content, follow Titanium Media on WeChat (ID: taimeiti), or download the Titanium Media App.


