
Mila Proposes New Inductive Biases to Boost Deep Learning

Can we build machines able to learn and work seamlessly with humans? How do machines, humans and animals learn from each other, and can we improve on these processes and implement them in new domains? In the new paper Inductive Biases for Deep Learning of Higher-Level Cognition, Yoshua Bengio and Anirudh Goyal from Mila – Quebec AI Institute delve into human and non-human animal intelligence and how it can inform deep learning.


Bengio and Goyal propose that deep learning (DL) be extended qualitatively rather than simply scaled up with more data and computing resources. Based on the hypothesis that human and animal intelligence can be explained by a few principles rather than an encyclopedic list of heuristics, the paper explores how inductive biases could help bridge the huge gap between current DL and human cognitive abilities, bringing DL closer to human-level AI.

The team notes that DL already incorporates several key inductive biases found in humans and non-human animals. They propose that augmenting these inductive biases — with a focus on those involving higher-level and sequential conscious processing — could advance DL from its current successes on in-distribution generalization in highly supervised learning tasks to stronger and more humanlike out-of-distribution generalization and transfer learning abilities.

Bengio and Goyal discuss inductive biases based on higher-level cognition, declarative knowledge of causal dependencies, and biological inspiration and characterization of higher-level cognition. They leverage the System 1 and System 2 dichotomy introduced by Daniel Kahneman in his book Thinking, Fast and Slow. In this classification, System 1 covers what current deep learning is already very good at: processing that is intuitive, fast, automatic, and anchored in sensory perception. System 2 covers what is rational, sequential, slow, logical, conscious, and expressible in language. The researchers argue that DL models able to perform System 2 tasks while leveraging System 1 abilities as their computational workhorse will be better equipped to deal with dynamic, changing conditions; in other words, they will learn to think and behave more like humans.

They wrap up the paper by identifying a number of open questions and paths for future DL research:

  • Jointly learning a large-scale encoder and a large-scale causal model with high-level variables
  • Unifying declarative knowledge representation and inference mechanisms with attention and modularity in a single architecture
  • Innovating in neural architectures with low-level programming and hardware design requirements
  • Inductive biases in novel planning methods
  • Computation over modules and over data points
  • Scaling to a large number of modules
  • Macro and micro modules
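To make the "attention and modularity" direction concrete, here is a toy sketch (not code from the paper) of the general idea behind attention-based module selection: a set of specialist modules compete, via key matching, for each input, and only the most relevant module processes it. All names (`modular_step`, `keys`, `top_k`) are illustrative assumptions, and simple linear maps stand in for learned modules.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy setup: each "module" is a simple linear map, and each module
# advertises a key vector used to bid for inputs via attention.
rng = np.random.default_rng(0)
d = 4
n_modules = 3
modules = [rng.standard_normal((d, d)) for _ in range(n_modules)]  # module weights
keys = rng.standard_normal((n_modules, d))                         # one key per module

def modular_step(x, top_k=1):
    """Route input x to the top_k modules whose keys best match it."""
    scores = keys @ x                  # attention logits: key/input match
    weights = softmax(scores)
    # Keep only the top_k most relevant modules: a sparse, selective
    # form of computation, in the spirit of sequential System 2 processing.
    active = np.argsort(weights)[-top_k:]
    out = np.zeros_like(x)
    for i in active:
        out += weights[i] * (modules[i] @ x)
    return out, active

x = rng.standard_normal(d)
y, active = modular_step(x, top_k=1)
```

The sparsity knob `top_k` is the point of the sketch: with a small `top_k`, only a few modules fire per input, which is the kind of selective, bottlenecked computation the paper associates with higher-level conscious processing.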

Bengio and Goyal note that the ideas presented on the use of inductive biases remain in the early stages of maturation, and much work needs to be done to improve understanding and to find appropriate ways to incorporate such priors in neural architectures and training frameworks.

The paper Inductive Biases for Deep Learning of Higher-Level Cognition is on arXiv.

Analyst: Robert Tian | Editor: Michael Sarazen; Fangyu Cai

Source: syncedreview.com

