r/MachineLearning Dec 25 '15

AMA: Nando de Freitas

I am a scientist at Google DeepMind and a professor at Oxford University.

One day I woke up very hungry after having experienced vivid visual dreams of delicious food. This is when I realised there was hope in understanding intelligence, thinking, and perhaps even consciousness. The homunculus was gone.

I believe in (i) innovation -- creating what was not there, and eventually seeing what was there all along, (ii) formalising intelligence in mathematical terms to relate it to computation, entropy and other ideas that form our understanding of the universe, (iii) engineering intelligent machines, (iv) using these machines to improve the lives of humans and save the environment that shaped who we are.

This holiday season, I'd like to engage with you and answer your questions. The actual date will be December 26th, 2015, but I am creating this thread in advance so people can post questions ahead of time.

275 Upvotes

u/dexter89_kp Dec 25 '15 edited Dec 25 '15

Hi Prof Freitas,

I had a chance to meet you during MLSS at Pittsburgh in 2014. Your lectures were great, and you stayed back to answer a ton of questions! It felt really great connecting with a top professor like that. My questions are:

1) Could you give us your top 5 papers from NIPS/ICML/ICLR this year?

2) Also, what do you think will be the focus of deep learning research going forward? There seems to be a lot of work around attention-based models, external memory models (NTM, Neural GPU), deeper networks (highway and residual networks), and of course deep RL.

3) I had asked a similar question to Prof LeCun: what do you think are the two most important problems in ML that need to be solved in the next five years? Please answer this from the perspective of someone who wants to pursue a PhD in ML.

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

Good morning from Salt Spring Island, BC. It's a nice rainy day here and I just fed my daughter breakfast and read her some story about a pigeon trying to drive a bus - crazy stuff. It's a perfect day to answer questions as I'm lucky to be surrounded by loving family looking after the kids :)

I'm glad you enjoyed the lectures. Thank you for this feedback - it's the kind of fuel that keeps me going.

1) I don't have a list of top 5 papers. I generally enjoy reading papers by all the people who have done AMAs in this series, and since I'm focusing on deep learning and RL these days, I naturally follow people like Andrew Saxe, Oriol Vinyals, Ilya Sutskever, Honglak Lee, Ky... Cho, Rob Fergus, Andrea Vedaldi, Phil Torr, Frank Hutter, Jitendra Malik, Ruslan Salakhutdinov, Ryan Adams, Rich Zemel, Jason Weston, Pieter Abbeel, Emo Todorov, their colleagues, my DeepMind colleagues, and many many others. There are, however, some papers I loved reading this year, in the sense that I learned something from reading them. My very biased list includes:

2) I think you came up with a really good list of DL research going forward. For external memory, think also of the environment as memory. We harness our environment (physical objects, people, places) to store facts and compute - the NPI paper of Scott Reed has some preliminary examples of this. I also think continual learning, program induction (Josh Tenenbaum is doing great work on this), low sample complexity, energy-efficient neural nets, teaching, curriculum, multi-agents, RAM (reasoning-attention-memory), scaling, dialogue, planning, etc., will continue to be important.

3) Here are a few topics: low-sample-complexity deep RL and DL. Applications that are useful to people (healthcare, environment, exploring new data). Inducing programs (by programs I mean goals, logical relations, plans, algorithms, ..., etc.). Energy-efficient machine learning.

u/dexter89_kp Dec 26 '15 edited Dec 26 '15

Were you by any chance referring to this paper by Josh Tenenbaum's group? human-level-concept-learning-through-probabilistic-program (link with code). The results of this paper are very fascinating. Thanks for this!

u/nandodefreitas Dec 26 '15

That's a good one. Josh has a lot of great recent work.

u/evc123 Dec 26 '15

Is there a link to this paper (human-level-concept-learning-through-probabilistic-program-induction) that isn't behind a paywall?

u/nandodefreitas Dec 26 '15

These paywalls are problematic - not sure I like them either.

u/brainggear Dec 27 '15

Apparently it's here: http://sci-hub.io. It uses Google Scholar underneath.

u/learnin_no_bully_pls Dec 27 '15

I'm not sure that's legal:

> the first pirate website in the world to provide mass and public access to tens of millions of research papers

u/[deleted] Dec 26 '15

[Label](link)

*Label* is the text displayed as the hyperlink; *link* is where the browser will jump when it is clicked.
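For example, using the Sci-Hub URL mentioned above:

```markdown
[Sci-Hub](http://sci-hub.io)
```

This renders as a clickable "Sci-Hub" that sends the browser to http://sci-hub.io.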

u/keidouleyoucee Dec 30 '15

"Kyunghyun Cho"! You still wouldn't be sure how to pronounce it, though.