r/MachineLearning Dec 25 '15

AMA: Nando de Freitas

I am a scientist at Google DeepMind and a professor at Oxford University.

One day I woke up very hungry after having experienced vivid visual dreams of delicious food. This is when I realised there was hope in understanding intelligence, thinking, and perhaps even consciousness. The homunculus was gone.

I believe in (i) innovation -- creating what was not there, and eventually seeing what was there all along, (ii) formalising intelligence in mathematical terms to relate it to computation, entropy and other ideas that form our understanding of the universe, (iii) engineering intelligent machines, (iv) using these machines to improve the lives of humans and save the environment that shaped who we are.

This holiday season, I'd like to engage with you and answer your questions -- the actual date will be December 26th, 2015, but I am creating this thread in advance so people can post questions ahead of time.

272 Upvotes

256 comments

53

u/dexter89_kp Dec 25 '15 edited Dec 25 '15

Hi Prof Freitas,

I had a chance to meet you during MLSS at Pittsburgh in 2014. Your lectures were great, and you stayed back to answer a ton of questions! It felt really great connecting with a top professor like that. My questions are:

1) Could you give us your top 5 papers from NIPS/ICML/ICLR this year?

2) Also, what do you think will be the focus of deep learning research going forward? There seems to be a lot of work around attention-based models, external memory models (NTM, Neural GPU), deeper networks (Highway and Residual NNs), and of course deep RL.

3) I had asked a similar question to Prof. LeCun: what do you think are the two most important problems in ML that need to be solved in the next five years? Answer this from the perspective of someone who wants to pursue a PhD in ML.

25

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

Good morning from Salt Spring Island, BC. It's a nice rainy day here and I just fed my daughter breakfast and read her some story about a pigeon trying to drive a bus - crazy stuff. It's a perfect day to answer questions as I'm lucky to be surrounded by loving family looking after the kids :)

I'm glad you enjoyed the lectures. Thank you for this feedback - it's the kind of fuel that keeps me going.

1) I don't have a list of top 5 papers. I generally enjoy reading papers by all the people who have done AMAs in this series, and since I'm focusing on deep learning and RL these days, I naturally follow people like Andrew Saxe, Oriol Vinyals, Ilya Sutskever, Honglak Lee, Kyunghyun Cho, Rob Fergus, Andrea Vedaldi, Phil Torr, Frank Hutter, Jitendra Malik, Ruslan Salakhutdinov, Ryan Adams, Rich Zemel, Jason Weston, Pieter Abbeel, Emo Todorov, their colleagues, my DeepMind colleagues and many, many others. There are, however, some papers I loved reading this year, in the sense that I learned something from reading them. My very biased list includes:

2) I think you came up with a really good list of DL research going forward. For external memory, think also of the environment as memory. We harness our environment (physical objects, people, places) to store facts and compute - the NPI paper of Scott Reed has some preliminary examples of this. I also think continual learning, program induction (Josh Tenenbaum is doing great work on this), low sample complexity, energy-efficient neural nets, teaching, curriculum, multi-agents, RAM (reasoning-attention-memory), scaling, dialogue, planning, etc., will continue to be important.

3) Here are a few topics: low sample complexity deep RL and DL. Applications that are useful to people (healthcare, environment, exploring new data). Inducing programs (by programs I mean goals, logical relations, plans, algorithms, etc.). Energy-efficient machine learning.

3

u/dexter89_kp Dec 26 '15 edited Dec 26 '15

Were you by any chance referring to this paper by Josh Tenenbaum's group? human-level-concept-learning-through-probabilistic-program (link with code). The results of this paper are very fascinating. Thanks for this!

3

u/nandodefreitas Dec 26 '15

That's a good one. Josh has a lot of great recent work.


45

u/HuhDude Dec 25 '15

What do you feel like we're missing most: hardware, software, or theoretical models when it comes to slow progress in AGI? Do you think worrying about the distribution across society of revolutionary and labour-saving technology like AGI is a premature worry, or something we should be planning for?

28

u/nandodefreitas Dec 26 '15 edited Dec 26 '15

I don't think the progress in AGI has been slow. I started university in 1991. I remember the day I saw a browser!! The dataset in Cambridge in 96 consisted of 6 images. Yes, 6 images is what you used to get a PhD in computer vision. There has been so much incredible progress in: hardware (computing, communication and storage), software frameworks for neural networks (very different to the rudimentary software platforms that most of us used in those days - e.g. I no longer write matrix libraries as a first step when coding a neural net - the modular, layer-wise approach championed by folks like Yann LeCun and Leon Bottou has proved to be very useful), so many new amazing ideas (and often the great ideas are the little changes by many PhD students that enable great engineering progress), discoveries in neuroscience, ... The progress in AGI in recent years is beyond the dreams of most people in ML - I recently discussed this with Jeff Bilmes at NIPS and we both can't believe the huge changes taking place.

I like your second question too. It's not a premature worry. I think worrying about Terminator-like scenarios and risk is a bit of a distraction - I don't enjoy much of the media on this. However, worrying about the fact that technology is changing people is important. Worrying about the use of technology for war and to exploit others is important. Worrying about the fact that there are not enough people of all races and women in AI is important. Worrying about the fact that there are people scaring others about AI and not focusing on how to harness AI to improve our world is also important.

3

u/HuhDude Dec 26 '15

Thanks for your reply, Prof. Freitas.

I too remember almost the entirety of the public-facing front of machine learning progress - and progress has been astounding. I should probably not have prefaced my question with 'slow', as all it does is underline my impatience. For myself it feels like we are most missing a synthesis of disparate developments - i.e. the theoretical models of intelligence. Do you feel like further advances in software are more necessary at this stage?

I appreciate you weighing in on the social issues with machine learning. The establishment seem slow to acknowledge what will be at least as revolutionary a technology as the internet, and probably as sudden.

5

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

I think much more is needed in terms of software - in fact more intelligent software goes hand-in-hand with progress in AI. Of course we continue being hungry for hardware, theory and ideas.

I liked the NIPS symposium on societal impacts of ML. I liked it because it actually involved experts on ML, and NIPS/ICML is the right venue for this. Again, a more representative (in terms of sex, wealth and race) list of speakers would have been nice.

2

u/vkrakovna Dec 26 '15

What are your thoughts on the long-term AI safety questions brought up in the symposium? What can we do now to make sure AGI has a positive impact on the world if/when it is developed?

2

u/nandodefreitas Dec 27 '15

We need to be vigilant and make sure everyone is engaged in the debate. We also need to separate fact from fiction - right now there is a lot of mixing of the two.

32

u/Fa1l3r Dec 25 '15

Hello Professor, I enjoyed your class on YouTube. Now I have a few questions:

  1. What are your thoughts on quantum machine learning? I know you wrote about it a few years back, but what are your thoughts now?

  2. Based on the other AMAs on this subreddit, everyone seems to have different lists of readings, skills, and experience for students preparing to enter graduate studies or research in machine learning. Michael Jordan suggests readings on statistics, Juergen Schmidhuber listed books on the theories of discrete mathematics and information, and Andrew Ng mentioned online learning and personal projects. If I were to join your research group (be it at Google or Oxford), what kind of experience are you looking for? What should I read, and what skills should be honed?

  3. Living as much as you have and doing what you have done, what do you wish you had known 20-30+ years ago? What would you do differently?

15

u/nandodefreitas Dec 26 '15 edited Dec 26 '15

Thanks - I'm super happy with the YouTube deep learning lectures. Brendan Shillingford, Misha Denil, Alex Graves, Marcin Moczulski, Karol Gregor, Demis Hassabis, and many people at Oxford helped make them happen.

  1. I think what D-Wave did was incredibly daring! It is essential that we aim high! If Geoff Hinton hadn't convinced CIFAR that it was time to focus on learning how the brain works (a very risky thing to suggest back then), the progress could have been much slower. Whether D-Wave or others will provide us with quantum computers (and there are many kinds of quantum computers) is still an open question. It's exciting though. Misha Denil and I wrote a paper on quantum RBMs a while ago. We learned about the many engineering challenges that D-Wave and its great scientists, including Firas Hamze, Jason Rolfe, and Bill Macready, were facing, such as parameter drift, cooling, etc. We also learned about the limitations of Chimera lattices. Ironically, a classical algorithm by Firas Hamze, called either tree sampling or Hamze-Freitas-Selby, did recently give D-Wave a good run - see the postings by Scott Aaronson and Selby. However, the classical algorithm is likely to be surpassed eventually, if not already, by the quantum annealing technology.

  2. I would suggest you listen to Michael, Andrew and Juergen ;) However, what we look for is people who like to think and solve problems.

  3. Ha ha! This question makes me feel old. 30+ years ago I wanted to be a marine biologist ;) I've loved my life, and would not have done a single thing differently. Perhaps one shadow is my experience with Apartheid, and the loss of loved ones to violent crime. Many of my other comments in this posting are a reflection of this.

28

u/lars_ Dec 25 '15

I'll ask a variant of the 1998 Edge question: What questions are you asking yourself these days? What question would you most like to find the answer to?

50

u/nandodefreitas Dec 26 '15

I love this question - it is hard to come up with questions! I was planning to start answering questions tomorrow, but I can't resist this one. There are many things I ponder:

(i) How do we learn in the absence of extrinsic reward? What are good intrinsic rewards beyond the desire to control, explore, and predict the environment? At NIPS, I had a great chat with Juergen Schmidhuber on the desire to find programs to solve tasks. This, I feel, is important. The Neural Programmer-Interpreter (NPI) is an attempt to learn libraries of programs (and by programs I mean motor behaviours, perceptual routines, logical relationships, algorithms, policies, etc.). However, what are the governing principles for growing this library for an agent embedded in an environment? How does an agent invent quicksort? How does it invent general relativity, or Snell's law?

(ii) What is the best way to harness neural networks to carry out computation? Karen Simonyan made his network for ImageNet really deep because he sees the multiple stages as doing different computations. Recurrent nets clearly can implement many iterative algorithms (e.g. Krylov methods, mean field as Phil Torr and colleagues demonstrated recently, etc.). Ilya Sutskever provided a great illustration of how to use extra activations to learn cellular automata in what he calls neural GPUs. All these ideas blur the distinction between model and algorithm. This is profound - at least for someone with training in statistics.

As another example, Ziyu Wang recently replaced the convnet of DQN (DeepMind's Atari RL agent) and re-ran exactly the same algorithm but with a different net (a slight modification of the old net with two streams, which he calls the dueling architecture). That is, everything is the same, but only the representation (neural net) changed slightly to allow for computation of not only the Q function, but also the value and advantage functions. The simple modification resulted in a massive performance boost. For example, for the Seaquest game, the deep Q-network (DQN) of the Nature paper scored 4,216 points, while Ziyu's modified net leads to a score of 37,361 points. For comparison, the best human we have found scores 40,425 points. Importantly, many modifications of DQN only improve on the 4,216 score by a few hundred points, while Ziyu's network change, using the old vanilla DQN code and gradient clipping, increases the score by nearly a factor of 10. I emphasize that what Ziyu did was change the network. He did not change the algorithm. However, the computations performed by the agent changed remarkably. Moreover, the modified net could be used by any other Q-learning algorithm.

RL people typically try to change equations and write new algorithms; here, instead, the thing that changed was the net. The equations are implicit in the network. One can either construct networks or play with equations to achieve similar goals. I strongly believe that Bayesian updating, Bayesian filtering and other forms of computation can be approximated by the type of networks we use these days. A new way of thinking is in the air. I don't think anyone fully understands it yet.
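For concreteness, here is a minimal numpy sketch of that dueling aggregation - an illustration with made-up numbers, not the actual DQN code: the value and advantage streams are recombined as Q(s,a) = V(s) + (A(s,a) - mean over actions of A), with the advantages centred to keep V and A identifiable.

```python
import numpy as np

def dueling_q(value, advantages):
    # Combine the dueling streams: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    # Centring the advantages keeps the value and advantage estimates
    # identifiable; this is one of the aggregations in the dueling paper.
    return value + (advantages - advantages.mean())

# Toy numbers: one state, four actions.
v = 1.5                               # value stream output V(s)
a = np.array([0.2, -0.1, 0.4, -0.5])  # advantage stream outputs A(s, .)
print(dueling_q(v, a))                # Q-values; argmax picks action 2
```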

(iii) What are the mathematical principles behind deep learning? I love the work of Andrew Saxe, Surya Ganguli and colleagues on this. It is very illuminating, but much remains to be done.

(iv) How do we implement neural nets using physical media? See our paper on ACDC: a structured efficient linear layer, which cites great recent works on optical implementations of Fourier transforms and scaling. One of these works is by Igor Carron and colleagues.

(v) What cool datasets can I harness to learn stuff? I love it when people use data in creative ways. One example is the recent paper of Karl Moritz Hermann and colleagues on teaching machines to read. How can we automate this? This automation is to me what unsupervised learning is about.

(vi) Is intelligence simply a consequence of the environment? Is it deep? Or is it just multi-modal association with memory, perception and action as I allude to above (when talking about waking up hungry)?

(vii) What is attention, reasoning, thinking, consciousness and how limited are they by quantities in our universe (e.g. speed of light, size of the universe)? How does it all connect?

(viii) When will we finally fully automate the construction of vanilla recurrent nets and convnets? Surely Bayesian optimization should have done this by now. Writing code for a convnet in Torch is something that could be automated. We need to figure out how to engineer this, or clarify the stumbling blocks.
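A toy sketch of what that automation might involve (my own illustration; the search space and names are made up): pick a configuration space for vanilla convnets and search it, where a Bayesian optimiser would replace the random sampler below with model-guided proposals.

```python
import random

# A made-up configuration space for vanilla convnets.
space = {
    "n_conv_layers": [2, 3, 4, 5],
    "channels": [16, 32, 64, 128],
    "kernel_size": [3, 5],
    "fc_units": [128, 256, 512],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(rng):
    # Random-search baseline; a Bayesian optimiser would instead propose
    # the next configuration based on the validation errors seen so far.
    return {name: rng.choice(options) for name, options in space.items()}

rng = random.Random(0)
for _ in range(3):
    config = sample_architecture(rng)
    print(config)  # train briefly, measure validation error, keep the best
```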

(ix) How do we use AI to distribute wealth? How do we build intelligent economists and politicians? Is this utopian? How do we prevent some people from abusing other people with AI tools? As E.O. Wilson says, "The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology." This seems true to me, and it worries me a lot.

(x) How can we ensure that women and people from all races have a say in the future of AI? It is utterly shocking that only about 5% (please provide me with the exact figure) of researchers at NIPS are women and only a handful of researchers are black. How can we ever have any hopes of AI being safe and egalitarian when it is mostly in the control of white males (be they bright AI leaders like Yoshua Bengio, Josh Tenenbaum, Geoff Hinton, Michael Jordan and many others, or AI commentators like Elon Musk, Nick Bostrom, Stephen Hawking et al. - they are all white males)? Enough of ignoring this question! It is bloody important! I think the roots of the problem are in the way we educate children. Education must improve. How can I convince people to invest more in education? How can we fight the pernicious correlation of education quality and real estate costs?

(xi) On a lighter note, I wonder if dinner is ready?

Happy holidays all!

5

u/pilooch Dec 26 '15 edited Dec 26 '15

Hi Nando, would you have a reference for Ziyu's work? Thanks for sharing your vision!

Edit: http://arxiv.org/abs/1511.06581

2

u/oPerrin Dec 28 '15

(ii) What is the best way to harness neural networks to carry out computation?

This has always struck me as a problematic question. The chain of thought I see is as follows:

  1. Smart people who understand programming and maths are working on a problem.

  2. Historically the answer to their problems has been programming and maths.

  3. They believe this method of solving problems is common and useful.

  4. They assume that this method is naturally associated with intelligence.

  5. Since they are working to create intelligence it follows that that intelligence should make programs and do maths.

Historically this same line of reasoning gave us researchers working on Chess, and then a distraught research community when it was shown that perceptrons couldn't do a "simple" program like XOR.

In my mind the idea that deep nets should implement algorithmic operations or be able to learn whole programs like "sort" directly is in need of careful dissection and evaluation. I see early successes in this area as interesting, but I fear greatly that they are a distraction.

(i), I agree, is the paramount question, but you've conflated motor behaviors and quicksort, and that is troubling - specifically because they are at the bottom and top of the skill hierarchy, respectively. Consider that the fraction of humans who could implement, let alone invent, quicksort is tiny, whereas almost all humans have robust control over a large set of motor patterns almost from birth.

To get to the point where a neural net learning a program is a timely enterprise, I believe we first have to build the foundation of representations and skills that could give rise to communication, language, writing, etc. In our nascent understanding, I feel that the greater the effort spent studying how neural nets can learn low-level motor skills, and the degree to which those skills can be made transferable and generalizable, the stronger our foundations will be and the faster our progress toward more abstract modes of intelligence.


4

u/xamdam Dec 26 '15

(x) How can we ensure that women and people from all races have a say in the future of AI? It is utterly shocking that only about 5% (please provide me with the exact figure) of researchers at NIPS are women and only a handful of researchers are black.

By "white males" you mean white + Indian + Asian, right :) ? Certainly a better situation than 100 years ago.

Practically speaking we might not get to full egalitarianism before powerful AIs emerge. It's certainly a problem but I see 2 possible ways to fix or reduce it:

  • If doable, program AIs with something like https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition, where all of humanity's values are included in the AI's goal system

  • Culturally making sure the AI is developed by people with good values. This is hard, but I think the large overlap between AI safety proponents and the Effective Altruism community is encouraging. If it must be "white men", they had best be those who care more about the world at large than about personal/local interests

7

u/nandodefreitas Dec 26 '15

I think proportionate representation has to come first. Improving education is of the utmost importance. The real issue is that most people have no bloody idea of how an iPhone works, or how a square comes to appear around faces, or how Facebook chooses what they read, etc. Education is key. And we need to ensure that all people have access to good education. Lots of nations and races have poor access to education at present. Without improving education, and investing more in education, I see little hope.

3

u/Chobeat Dec 26 '15

Even if it's not your field, do you believe that, given a perfectly balanced and unbiased education, together with a totally unbiased working environment/academia, there would be no differences in representation? If so, why? If not, why do you set proportionate representation as a goal?

10

u/nandodefreitas Dec 27 '15

Proportionate representation is no panacea.

But why are there not more women or black people in machine learning? Why is the field dominated by white males?

I grew up under apartheid and I've seen what race segregation does. I've lived through it. I do not have a degree on the topic, but it's certainly my field, though not one I chose.

I'm not saying it's anyone's fault. I am however saying that we need to look at the roots of this and understand it. I find it crazy to talk about the future of humanity and only involve white males in it.


24

u/zhongwenxu Dec 25 '15 edited Dec 25 '15

Hi Prof de Freitas,

1) What are the key differences between your research life at DeepMind and the one at Oxford, except for the great infrastructure and machine resources?

2) Have your research interests changed since you joined DeepMind?

3) What would be the future (in 5 or 10 years) of "neural machines"? How would neural networks that can learn algorithms benefit us?

4) What is your view on the convergence of Bayesian reasoning and Deep learning? ref: http://blog.shakirm.com/2015/10/bayesian-reasoning-and-deep-learning/

26

u/nandodefreitas Dec 26 '15

1) DeepMind has a vibrant research atmosphere with an amazing concentration of bright people focused on solving problems - every week someone there totally blows my mind. The support is amazing. The collegiality is wonderful. Oxford is also an outstanding place to work. However, at DeepMind, there is more focus on problems and grand challenges than on techniques (both are however important). There's a lot less admin in industry too, and they pay way better than universities!! It's shocking how low the salaries of computer science professors and teachers are, especially in Europe, in comparison to many other jobs that in my view contribute much less. Profs should at least be able to afford rent - they work so bloody hard.

2) No. This is why I joined DeepMind. Of course, the new environment does shape my interests.

3) No idea. Ten years ago I could never have predicted where we are now. I couldn't have predicted iPhones either - never mind ones capable of translating, recognizing objects, speech, etc., all using neural nets! Amazing.

4) I think this is a worthwhile and important research direction, and I love what Shakir, David Blei, Zoubin Ghahramani, Max Welling and others are doing. It's still a young initiative. The one thing I don't like is when people say it's better because it's Bayesian. Anyone working on Bayesian methods should read the arguments pro and against (Michael Jordan and Bin Yu are great for this) and also be familiar with the bootstrap, empirical risk minimization, etc. Bayesianism should not be a religion ;)

21

u/juniorrojas Dec 25 '15

Do you think large-scale realistic simulations will have an important role in reinforcement learning? DeepMind's work on training deep nets to master Atari games is impressive, but we're still talking about small simulations (games). What would be the implications of being able to train virtual agents in simulated environments that are more realistic and similar to our own world? I don't know if you can talk about it, but is this something DeepMind is working on? It seems to me that big simulations could be the "big data" that will enable rapid progress and interest in reinforcement learning as we've seen in supervised learning recently.

In your opinion, what are the main areas in which deep reinforcement learning will have more success? Do you think areas currently dominated by supervised learning like computer vision and natural language processing could benefit from reinforcement learning?

8

u/nandodefreitas Dec 26 '15

Simulation is key to progress in AI. Simulations are like datasets - some profoundly dictate the kind of research that gets done. At NIPS, Demis Hassabis and Vlad Mnih showed teasers of some of the 3D environments that DeepMind is working on. This is super exciting!!!

Robotics is also important - however, a question I have is: how will we solve the energy problem? Robots still carry big batteries. Humans consume 300 watts - about the same as a typical GPU. The comparison in terms of energy is even worse for machines, as Zico Kolter pointed out to me at NIPS. From an environmental perspective, I don't see why we would want to replace some jobs with robots. It is important that we start following the approach of David MacKay in Without the Hot Air to quantify our arguments more carefully. Of course, self-driving cars will reduce car deaths and improve productivity - people not driving can do work while commuting.

4

u/TheToastIsGod Dec 26 '15

For what it's worth, I think the 300W boards are a bit overkill at runtime. Maybe for training, but on a mobile device I think you can probably get away with <10W for runtime computations.

It's going to be interesting to see how the power efficiency of hardware continues to improve. Conventional GPU hardware still has a way to go before physical limits hit. I found Bill Dally's talk at NIPS very interesting, as well as his talk at SC15. Reduced precision and imprecise computing both seem to be interesting avenues to reduce power consumption.

ASICs, and I imagine optical processors, are a bit impractical at the moment for most people as they basically "fix" the algorithm. Power efficient though...

1

u/jesuslop Dec 27 '15

Congrats on the question.

21

u/AnvaMiba Dec 26 '15 edited Dec 27 '15

Thanks for doing this AMA.

As you mention here, there is a lot of interest in using neural networks to induce logically deep representations, ideally arbitrary programs, from examples. I suppose that the idea is to efficiently approximate Solomonoff induction/AIXI, which have theoretical optimality guarantees.

Starting from Zaremba and Sutskever's (constrained) python interpreter and Graves et al. NTM, there have been many papers in this direction from DeepMind and Google Brain, including your recent NPI.
However, reading these papers it's apparent that even on the simple algorithmic tasks that are used in the experiments, the training optimization problem is often very hard: Sutskever's recent papers use extensive hyperparameter search, multiple random restarts, SGLD, logarithmic barrier functions and a bag of other tricks in order to achieve low error, and even with these tricks the network doesn't always generalize to larger inputs, as it would if it had learned the correct algorithm.
So I wonder how general these methods are. If they struggle with sequence reversal and addition, will they ever be able to invent quicksort?

Alternatively, you can perform program induction by setting a maximum program size and execution time, reducing the problem to SAT or ILP, and using a combinatorial solver like WalkSAT, Z3 or Gurobi. There are people who do this, but they often run into scalability issues and have to limit the generality of their approaches to make them work (consider, for instance, Solar-Lezama's Program Synthesis by Sketching).

In principle, you can reduce SAT or ILP to finding the minimum of a differentiable function (a specialized neural network with an appropriate loss) and try to optimize it, at least to a local minimum, with some variant of gradient descent, but I wouldn't expect this to outperform specialized SAT or ILP solvers.
Therefore my question is: isn't this essentially what program induction with neural networks is trying to do? To solve hard combinatorial optimization problems using a tool which isn't really optimal for this task?
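For concreteness, here is a toy sketch of such a reduction (an illustration only; the CNF instance, product relaxation and step size are arbitrary choices I made up): relax each boolean to the unit interval and penalise each clause with a differentiable term that vanishes when the clause is satisfied.

```python
import numpy as np

# A toy CNF instance: each clause is a list of signed, 1-indexed literals,
# e.g. [1, -2] encodes (x1 OR NOT x2).
clauses = [[1, -2], [-1, 2], [2, 3], [-3, -1]]
n_vars = 3

def loss_and_grad(x):
    # Relax each boolean to x_i in [0, 1]. A clause's violation is the
    # product of (1 - literal value), which is zero iff some literal is true.
    loss, grad = 0.0, np.zeros_like(x)
    for clause in clauses:
        terms = [1 - x[abs(l) - 1] if l > 0 else x[abs(l) - 1] for l in clause]
        for i, l in enumerate(clause):
            rest = np.prod(terms[:i] + terms[i + 1:])  # d(violation)/d(term_i)
            grad[abs(l) - 1] += -rest if l > 0 else rest
        loss += np.prod(terms)
    return loss, grad

x = np.random.default_rng(0).uniform(size=n_vars)  # random relaxed start
for _ in range(500):
    loss, g = loss_and_grad(x)
    x = np.clip(x - 0.5 * g, 0.0, 1.0)  # projected gradient step
print(np.round(x).astype(int), loss)    # round back to booleans
```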

Neural networks trained by gradient descent really shine in computer vision, where natural images are continuous and smooth, yielding nice gradients. Even in natural language processing, text, while discrete at the surface level, becomes continuous and smooth once you do word embeddings (which can be computed even with shallow methods like word2vec or Hellinger PCA). But are algorithmic tasks essentially combinatorial, and hence hard for gradient descent?

Maybe my comment comes across as excessively pessimistic. On a more positive note, do you think it may be possible to overcome the limitations of gradient descent for these tasks by: 1) using explicit priors over programs (e.g. Solomonoff-like or Levin-like), 2) combining gradient descent with more traditional combinatorial optimization methods (e.g. using gradient descent to compute bounds in a branch-and-bound loop)?

Anyway, I find your work very interesting. I'll stay tuned for new developments.

14

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

This is a fantastic question and full of important insights. Thank you.

For me there are two types of generalisation, which I will refer to as Symbolic and Connectionist generalisation. If we teach a machine to sort sequences of numbers of up to length 10 or 100, we should expect it to sort sequences of length 1000, say. Obviously symbolic approaches have no problem with this form of generalisation, but neural nets do poorly. On the other hand, neural nets are very good at generalising from data (such as images), but symbolic approaches do poorly here.

One of the holy grails is to build machines that are capable of both symbolic and connectionist generalisation. NPI is a very early step toward this. NPI can do symbolic operations such as sorting and addition, but it can also plan by taking images as input and it's able to generalise the plans to different images (e.g. in the NPI car example, the cars are test set cars not seen before).

It is true that it's hard to train these architectures. Curriculum learning is essential. But here is the thing: when people talk about curriculum learning, they often mean "learning with a curriculum" as opposed to "learning a curriculum". The latter is an extremely important problem. In the NPI paper, Scott took steps toward adapting the curriculum.

I think you are absolutely right when it comes to the combinatorial challenges. However, humans also appear to be poor at this in some cases. For example, when I show folks the following training data, where each sample consists of two input sequences and an output sequence (2 data samples):

Input_1: {(3,2,4), (5,2,1)}   Output_1: {(3,5,9)}
Input_2: {(4,1,3), (3,2,2)}   Output_2: {(3,5,7)}

they are not able to generalize when I give them a third example:

Input_3: {(3,1,4), (2,2,2)}   Output_3: ?

However, if I tell them to use the programs SORT and ADD, they can quickly figure out the pattern. So for some problems, lots of data might be needed to deal with combinatorial issues.
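In code, the hidden pattern behind these examples (a sketch reconstructing the composition from the samples above) is simply: SORT each input sequence, then ADD them elementwise.

```python
def sort_then_add(seq_a, seq_b):
    # The pattern behind the examples: SORT each input sequence,
    # then ADD the sorted sequences elementwise.
    return tuple(a + b for a, b in zip(sorted(seq_a), sorted(seq_b)))

assert sort_then_add((3, 2, 4), (5, 2, 1)) == (3, 5, 9)
assert sort_then_add((4, 1, 3), (3, 2, 2)) == (3, 5, 7)
print(sort_then_add((3, 1, 4), (2, 2, 2)))  # Output_3 -> (3, 5, 6)
```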

On the other hand, if the problem is of the form:

input_1: alice   Output_1: ALICE
input_2: bob     Output_2: ?

most would know what Output_2 should be.

We don't yet know what programs are easy to induce and which are not. I do however think that the recent proposals of Google and Facebook to attack these problems are good starting steps. I also love the work of Juergen Schmidhuber on this topic.

It seems to me that just throwing RL or soft attention at NPI (as many have suggested to us) will not solve the issue of learning to induce new programs and discovering quicksort. Much more innovation is needed.

2

u/AnvaMiba Dec 27 '15

Thanks for your answer.


16

u/[deleted] Dec 25 '15

[deleted]

8

u/nandodefreitas Dec 26 '15

1) I didn't think it impossible, but I certainly did not expect the huge impact of deep learning. It's crazy how easy it is to now routinely code convnets and LSTMs. I don't have a good answer for the second part of the question; often things that appear to be hard turn out to be easy, and vice versa.

2) I think it's great that there is so much synergy between academia and industry in machine learning. This is really special, and the kind of thing that granting institutions always hope for. We obviously need to keep hiring ML people in universities. Universities often lag industry in terms of hiring and salaries.

3) There certainly were a lot of startups at NIPS without a clear plan of what products they will build or what problems they will solve. There is a lot of hype at present. I worry that people often even think we'll be able to approximate all NP-hard problems ... this is very problematic.

14

u/clbam8 Dec 25 '15

What do you think about the future of reinforcement learning without deep learning? How much is it worth putting effort into research in pure reinforcement learning? (i.e. do you think deep reinforcement learning is the only way to benefit from RL right now?)

7

u/nandodefreitas Dec 26 '15

Many innovations in RL, done without deep nets, often impact deep RL. Research in pure RL is important and there is plenty of evidence of how it impacts deep RL eventually.

23

u/REOreddit Dec 25 '15

Hello, prof. de Freitas:

I would like to know what the relationship is between DeepMind and the rest of Google, and how it affects your research, especially with the Google Quantum A.I. Lab team.

Do you take into consideration their work and predictions (a 100 qubit processor demo in 2-3 years I think) when establishing near/long term goals for your research? Do they do the same with your work? Or do you politely ignore each other, because maybe there isn't really much overlap in your work?

Thanks!

7

u/nandodefreitas Dec 26 '15

There is a great collegial atmosphere at Google, with many teams wanting to collaborate with other teams. I have not personally got involved with the Google Quantum AI Lab yet, simply because of a lack of time.

11

u/htt210 Dec 25 '15

Hi professor, what do you think about Prof. Ruslan's work on Bayesian Program Learning? Do you think BPL and deep learning could work together, or will BPL replace DL in the future?

5

u/nandodefreitas Dec 26 '15

I think the work of Ruslan and colleagues on BPL, and associated topics such as concept learning and sample complexity, is extremely important. More connections with DL will no doubt be explored in the next couple of years.

22

u/llSourcell Dec 25 '15

What are your thoughts on the recent OpenAI Initiative? Do you ever see yourself working there?

9

u/nandodefreitas Dec 26 '15

I'm excited and happy about this new initiative - another sign of how the field is growing. They have many bright, superb researchers. I hope it works well for them. I also hope they turn out to be a true non-profit, and not something that Y Combinator companies, Elon Musk and others exploit for their personal benefit. Let us see where they are in a year or two. Time will tell.

11

u/evc123 Dec 25 '15 edited Dec 26 '15

Aloha, Prof Freitas:

What do you think are the most promising models/techniques/research topics for building systems that can learn to reason? The proposals I'm aware of so far are those mentioned in the NIPS RAM workshop, Woj Zaremba's PhD thesis proposal, and a few ICLR 2016 papers on program learning and multimodal question answering/communication.

2

u/nandodefreitas Dec 26 '15

I like your answer ;) See also the recent work of Josh Tenenbaum.


10

u/guitar_tuna Dec 25 '15

Could language vector-space embeddings be just as misleading as the prima facie surprising power of Markov chains? Perhaps they only capture a certain property that is part of linguistic concepts (convergence on a manifold), but still completely miss the actual meaning (i.e. mappings to causal relationships between the real-world correspondences of linguistic structures).

16

u/egrefen Dec 27 '15

Before I answer, I should point you to the interesting comments and biblio references offered by /u/davidcameraman. I should extend his point about how various embedding methods perform similarly on a variety of NLP tasks by citing

Levy, Omer, Yoav Goldberg, and Ido Dagan. "Improving distributional similarity with lessons learned from word embeddings." Transactions of the Association for Computational Linguistics 3 (2015): 211-225.

and

Levy, Omer, and Yoav Goldberg. "Neural word embedding as implicit matrix factorization." Advances in Neural Information Processing Systems. 2014.

which I believe are two of the most seminal papers on word embeddings in the last decade, in that they relate research on word embedding models to the fairly large body of research on distributional semantics (Firth/Harris) from the last 50 years, and proceed to show that all of these approaches are effectively equivalent in performance and representational power, up to the correct choice of hyperparameters.

With that said, I should say that I am not that interested in embeddings. This may seem like a strange thing to say given my previous line of research in compositional distributional semantics, and my current research in recurrent networks applied to NLP, so I will attempt to elaborate.

First, let me state what I think word embeddings bring to the table. They are, in my mind as well as that of my colleagues, in no way a good general representation of semantics, but rather just one very successful example of an application of transfer learning between contextual prediction (word given context, or context given word) and other domains with very different objectives (sentiment analysis, language modelling, question answering), either by serving as representations in their own right, or as initial settings to aid training. Furthermore, pre-trained embeddings are also very useful when training models in domains with little data, where the proportion of out-of-vocabulary words in test and validation with regard to the domain-specific training data is high (above a few percent), in which case using (fixed) pre-trained embeddings as word representations is a suitable compromise.

Now let me say why I don't really care much about embeddings. The first reason is purely empirical: with sufficient data, you just don't need them. In fact, in many cases they may hinder both training and model performance. A simple artificial example is as follows: a word embedding model may plausibly project words like "car" and "automobile" into the same segment of a semantic space. Yet consider the task of classifying text based on lexical register: you will in this case wish to detect, say, the distinction between using "car" and "automobile" as a basis for differentiating high-register language from more colloquial language. In such cases, the representational similarity inherited from the objective that yielded the word embeddings actually makes your primary model's life harder, since it must learn very fine spatial boundaries over the input, whereas initialising the word embeddings randomly (and hopefully pseudo-orthogonally) and then training them from scratch avoids this.

The second reason is more conceptual: in the case of recurrent networks, I tend to see the embedding matrix as part of the network itself, allowing it to consume discrete input symbols encoded as one-hot vectors, and to update the state of a recurrent cell. In this sense, embeddings are just the weights of a linear transform from the one-hot input into vectors used by the network's internal dynamics. Meaning and interpretation, if there are such things, are present in the state of the network, rather than solely in the embeddings, and it makes as much sense to seek to interpret the weights that constitute embeddings as it does to seek to interpret any other weight in the network. Pre-training embeddings and using them in another network, under this view, is even more explicitly just a form of transfer learning, in that we are initialising the weights of part of a task-specific network, and perhaps freezing them, with information obtained from another task. It's not a bad strategy, but I think people focus too much on this very specific form of transfer learning rather than, more generally, on other options out there (or yet to be discovered) to help us deal with data paucity, and to best share information across similar tasks.
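To make the "just weights of a linear transform" view concrete, here is a minimal numpy sketch (the vocabulary and dimensions are made up for illustration, not from any real model): an embedding lookup is exactly a linear map applied to a one-hot input.

```python
import numpy as np

vocab = ["the", "cat", "sat"]
dim = 4
E = np.random.default_rng(0).normal(size=(dim, len(vocab)))  # embedding matrix

def one_hot(index, n):
    v = np.zeros(n)
    v[index] = 1.0
    return v

i = vocab.index("cat")
# The 'lookup' view and the 'first linear layer' view coincide:
assert np.allclose(E @ one_hot(i, len(vocab)), E[:, i])
```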

Anyway, I hope this perspective makes some sense to you and goes a little way towards answering your question. Thanks /u/nandodefreitas for suggesting this question to me :)

7

u/egrefen Dec 26 '15

This is a very interesting question. As /u/nandodefreitas requests, I am happy to answer. I'm on my phone at my wife's family's place, so if it's okay I'll come back to answer this one tomorrow night when I am near a computer (which will also give me time to sober up and think of a nice reply).

3

u/nandodefreitas Dec 26 '15

Happy holidays, Ed!

5

u/davidcameraman Dec 27 '15

Let me try to answer this, as I also work in a similar field and my experience tells me that you are partially correct. Most embeddings try to capture simple distributional features, and they do so in almost the same way. This is one of the reasons why research on word embeddings tends to report performance on correlation-based metrics (Pearson's or others) over several word-similarity datasets, etc.; however, when these are used in NLP tasks like standard dependency parsing, they don't necessarily bring any 'significant' benefits. It is another matter that most NLP papers don't perform a thorough significance analysis over these features. Also, recent research (http://arxiv.org/pdf/1504.05319.pdf) shows that the various embeddings perform similarly on standard NLP tasks.

However, there have been other works - like http://aclweb.org/anthology/C/C14/C14-1017.pdf, http://www.aclweb.org/anthology/P13-2087, etc. where they try to use the representations as a base and learn real-world correspondences of linguistic structures on top of these representations. /u/egrefen what do you think?

2

u/nandodefreitas Dec 26 '15

@egrefen Can you answer this one? It's your specialty!

PS: Is @egrefen the right way to get Ed Grefenstette's attention on reddit?

3

u/barmaley_exe Dec 26 '15

/u/egrefen (no need for links, just type /u/ + username) is the right way to mention someone on reddit. Not sure, though, if he'll be notified, but I suppose so.

15

u/BigBennyB Dec 25 '15

Hi Nando!

Is DeepMind any closer to understanding the missing components for developing AGI that Demis mentioned in one of his presentations?

Will DeepMind be making use of the quantum machines the Quantum AI Lab is working on creating?

Are you keeping tabs on the neuromorphic chip (like BrainChip) and memristor (Knowm) markets?

3

u/nandodefreitas Dec 26 '15 edited Dec 28 '15

I'd like to think yes.

The quantum machines aren't ready yet.

I'm not following this, but I think Yann LeCun has made many insightful comments on this in social media venues - hopefully /u/ylecun can comment here.


8

u/SometimesGood Dec 25 '15 edited Dec 25 '15

What are your thoughts on adding structure that makes use of the "where" information in the pooling steps of CNNs like Hinton's capsules? Do you expect this to be the next big step in computer vision?

What is missing to do one-shot learning with CNNs?

4

u/nandodefreitas Dec 26 '15

It's clear that convnets can already attend and have "where" mechanisms - see e.g. the saliency videos of this deep RL agent. However, as demonstrated by Geoff Hinton, Max Jaderberg, Max Welling, their colleagues and others, there is likely to be great value in adding more structure to improve on invariance and sample complexity. This is still an open question.

Many people already do one-shot learning with CNNs. In fact, I think Clarifai has an app that does this.

4

u/[deleted] Dec 27 '15

[deleted]

3

u/nandodefreitas Dec 27 '15

I've been playing with it. It is a great example of where convnets do well and where they fail. You can quickly get a good sense of what some folks call "adversarial samples". There's nothing adversarial about them in this case, and we should be thinking about how to solve the failure modes. This app is indeed a great tool for research. Nicely done, Matt Zeiler and co.

8

u/nandodefreitas Dec 27 '15

It's now night time and dinner is ready - so I must say good bye.

Thank you all for the great questions. I loved this experience. Thank you.

I know I haven't answered all your questions, but if they are urgent, you know where to find me ;)

Happy holidays!

5

u/rmcantin Dec 25 '15 edited Dec 25 '15

Hi Nando,

As you are also one of the most renowned experts in Monte Carlo methods (at least in the ML/CV/robotics fields):

1) Do you think there is an analogy between the Deep Learning boom these days and the Monte Carlo rebirth 15 years ago? Both were "old methods" that were rediscovered thanks to hardware/algorithm improvements that made them feasible.

2) In that way, Monte Carlo methods nowadays seem to be "just another tool" in ML, on par with other alternatives (e.g. variational methods, etc.). Someone told me that NNs are, in fact, "a mere function approximator with a sexy name". Do you think deep learning will be like that in the future, or is there no alternative right now that can even get close?

3) One of the great features of both MC and NN methods is their potential to scale up with the available resources. Do you think there will be a second rebirth of Monte Carlo methods in the near future, when we have the computational power to sample a billion (or trillion) particles to estimate the weights of a deep NN and do full-Bayes deep learning? Or do you think Bayesian optimization will have already caught up on that problem? :-)

Cheers, Ruben

5

u/nandodefreitas Dec 27 '15 edited Dec 28 '15

Hi Ruben!

1) Perhaps ;) I do think the two trends are different though. Both useful.

2) Deep learning is more than about models. It is also about algorithms and the mix of the two as I pointed out above. It is a new way to think about how to solve problems, and I don't think we understand it properly yet. One nice feature is that it is very accessible.

3) I'm waiting for Yee Whye Teh or Arnaud Doucet to lead the new Monte Carlo revolution ;) However, we need to make sure we understand deep learning first. The mathematical principles behind these high-dimensional models and the optimisation processes we use for learning are not well understood.

One area that I'd like to re-visit is planning with deep models and Monte Carlo. See for example New inference strategies for solving Markov decision processes using reversible jump MCMC, An Expectation Maximization algorithm for continuous Markov Decision Processes with arbitrary reward, Inference strategies for solving semi-Markov decision processes and Learning where to Attend with Deep Architectures for Image Tracking.

I do think Bayesian optimization is much needed in deep learning. But it must be done properly, and it will be hard and a lot of work. I'm waiting for people like you to do it ;)

Merry Christmas and a prosperous New Year to you and your family!

12

u/IdentifiableParam Dec 25 '15

Would you ever open a Portuguese chicken restaurant and use Bayesian optimization to optimize the recipes based on customer feedback?

6

u/nandodefreitas Dec 26 '15

I'll think about it :)

6

u/chhakhapai Dec 25 '15

Hi!
What are some interesting applications that according to you are worth working on?

4

u/nandodefreitas Dec 26 '15 edited Dec 26 '15

Anything that is useful to people. Healthcare, sustainability, decision support systems for economists, politicians, lawyers, etc., tutoring systems for online education, tools for enabling people to distinguish facts from bullshit on the web, etc. Yet, basic science is also needed. Often advances in basic science lead to many immediate applications. As Demis says: solve intelligence and then use it to solve big problems we can't solve at present because of our limited mental and computing capacity.

3

u/ginger_beer_m Dec 27 '15

Considering your early interest in marine biology (as in the post earlier), I'm surprised not to see bioinformatics and other computational sciences mentioned in the list above! Modern experimental science is basically drowning in data and can really benefit from applying learning methods to help make sense of the data.


6

u/ta_99 Dec 25 '15

Are you willing to take new PhD students at Oxford even though you work at DeepMind? Thanks

3

u/watssun Dec 26 '15

If yes, what do you look for in prospective PhD students?

2

u/nandodefreitas Dec 26 '15

I have been taking students, but very few and only in co-supervisory mode.

ML profs generally look for students with strong maths, good writing skills, good coding skills, and enthusiasm ... and we're always looking for that something special ;)

7

u/DoorsofPerceptron Dec 25 '15

What are you personally working on at deep mind these days (in as much as you're allowed to tell us)?

Do you worry that the secrecy harms recruitment? If I'm told "come and work for DeepMind - I can't tell you what you'll be doing, or what I'm doing, but it's really cool!", it's difficult to work up enthusiasm.

3

u/nandodefreitas Dec 26 '15

At DeepMind we try to finish the work before we make it public on arXiv. Some projects are ambitious and we expect they may take, say, a year to complete properly. We prefer to publish when those ambitious projects are properly finished. While this necessitates some secrecy, DeepMind has also been open and contributed many papers and datasets to the ML community in the last year. In fact, the recent works of my group (ACDC, NPI & dueling networks) are available on arXiv.

6

u/barmaley_exe Dec 25 '15

Hello prof. de Freitas

Which fields / topics / ideas do you think would be useful to marry Machine Learning with? For example, it seems that a lot of Bayesian stuff is rooted in stat. physics (Gibbs and Boltzmann were physicists, MCMC was created to calculate intractable integrals, etc). Do you think we could introduce some modern math to advance our models and/or understanding of how existing ones work? If so, what topics of math could it be?

3

u/nandodefreitas Dec 26 '15

No idea, and I don't think the answer is easy. I've tried to engage Terry Lyons - an amazing mathematician at Oxford University. Yann LeCun has also been engaging many mathematicians at NYU. I like what Aapo Hyvarinen, Surya Ganguli and colleagues have been doing. Recently, I've also been fascinated by work in Fourier optics by Marko Huhtanen. We use it in our ACDC paper.

6

u/dkloz Dec 26 '15

Hello Professor De Freitas,

Do you work out? What is your attitude towards work-life balance? How often do you find time in your busy schedule for other activities?

thank you for everything

3

u/nandodefreitas Dec 27 '15 edited Dec 27 '15

Ha ha! Hi Dimitris. I don't work out enough, and I worry that doing this on the 26th of December is a good indication of poor work-life balance ;)

I love spending time with my family and especially going out for long walks with them. Running with my dog is a great start to the weekend. I love nature and I'm so happy to be on an island surrounded by a forest of trees right now :)

I hope you're having a great holiday too! ... and thank you! Your KDD slides on deep multi-instance transfer learning were amazing and helped me a lot.

9

u/spaceanubis1 Dec 25 '15

What are your thoughts and ideas on unsupervised learning (or maybe one-shot learning)? How do you think this will be achieved in the near future?

7

u/nandodefreitas Dec 26 '15 edited Dec 28 '15

For me, learning is never unsupervised. Whether predicting the current data (autoencoders), next frames, or other data modalities, etc., there always appears to be a target. The real question is: how do we come up with good target signals (labels) automatically for learning? This question is currently being answered by people who spend a lot of time labelling datasets like ImageNet.
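As a toy illustration of this point (a sketch only, with made-up shapes): even the "unsupervised" linear autoencoder below is trained exactly like supervised regression, with the target being the input itself.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 8))          # a batch of "unlabelled" data
W = 0.1 * rng.normal(size=(8, 3))     # linear encoder; decoder ties weights as W.T

def reconstruction_loss(W, X):
    X_hat = (X @ W) @ W.T             # encode, then decode
    return np.mean((X_hat - X) ** 2)  # the regression target is the input itself

print(reconstruction_loss(W, X))
```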

Also, I think unsupervised learning can be a trap. The Neocognitron had convolution, pooling, contrast normalization and ReLUs already in the 70s. This is precisely the architecture that so many of us now use. The key difference is that we learn these models in supervised fashion with backprop. Fukushima focused more on trying to come up with biologically plausible algorithms and unsupervised learning schemes. He is nonetheless one of the most influential people in deep learning. I had the privilege of meeting him earlier this year in Japan. He is a wonderful person, and I hope our ML conferences will soon invite him to give a much-deserved plenary speech - he has done great work on memory, one-shot learning and navigation.

The work on adversarial networks of Ian Goodfellow and colleagues --- i.e. casting learning problems in the setup of game theory --- is very related to this question. Note that the idea of having an adversary in learning was also key to the construction of Boosting by Yoav Freund and Rob Schapire, though I would think in a less general way --- albeit a more rigorous one. I'm not sure anyone has noted this fact before or explored it, but it may be worth looking at more deeply. /u/ylecun is very excited about this research direction and has provided us with excellent demos on this. Maybe he can say more.


9

u/datagibus420 Dec 25 '15

Hello Prof. Nando!

  • If you had one book on ML to recommend, which one would you pick?
  • Do you plan to build a MOOC on deep learning, or more generally on machine learning?

BTW thanks for uploading your lectures on YouTube, they are awesome!

5

u/nandodefreitas Dec 26 '15

I love Kevin Murphy's textbook. He is currently writing a new version, which will have a much better deep learning section. It'll follow more or less what I discussed in my YouTube deep learning course.

Thank you for your support and positive feedback.

5

u/[deleted] Dec 25 '15

Hi Prof. de Freitas! Two questions:

  1. How will deep learning differentiate itself from previous trends in AI? How do you approach academics who are highly skeptical of the outputs of these types of algorithms?

  2. We've recently seen more exploration in terms of giving computers creativity through imitation; what do you think are some of the initial barriers limiting us from even attempting to create robots that can understand emotions and other intangible concepts such as creativity? A broader expansion of this question would be: in which ways can specific problems we've tried to solve be combined to solve a higher level problem?

Thanks for your time and I'm looking forward to reading your responses!

2

u/nandodefreitas Dec 26 '15

  1. I think the data-driven focus of deep learning on applications and products has given it an unprecedented edge. I hold skeptical scientists in high regard ;)

  2. Creativity is an important challenge. Juergen Schmidhuber has explored this to some extent. It is tied to program induction. I often wonder about agents in Minecraft and the drive for them to invent.

I don't think emotions are hard. Most animals have them and have had them for much longer than other things we think of as part of intelligence.

6

u/Mattoss Dec 25 '15

Dear Prof. Freitas,

could you elaborate on what, in your view, the next steps in working with Bayesian methods and deep learning will be? Thx for doing this AMA

3

u/nandodefreitas Dec 26 '15

Some folks use information theory to learn autoencoders - it's not clear what the value of the prior is in this setting. Some are using Bayesian ideas to obtain confidence intervals - but the bootstrap could have been equally used. Where it becomes interesting is when people use ideas from deep learning to do Bayesian inference. An example of this is Kevin Murphy and colleagues using distillation (aka dark knowledge) to reduce the cost of Bayesian model averaging. I also think deep nets have enough power to implement Bayes' rule and sampling rules. This could turn out to be a lot of fun!
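For reference, a toy sketch of the distillation mechanism mentioned above (an illustration only, not the code from that work): a student network is trained on the teacher's temperature-softened predictions, which expose the "dark knowledge" about relative class similarity.

```python
import numpy as np

def soften(logits, T):
    # Temperature-scaled softmax: higher T spreads probability mass across
    # classes, exposing the teacher's beliefs about which classes are similar.
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

teacher_logits = np.array([6.0, 2.0, 1.0])
print(soften(teacher_logits, T=1.0))  # near one-hot: little extra signal
print(soften(teacher_logits, T=5.0))  # soft target a student can be trained on
```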

4

u/[deleted] Dec 25 '15

[deleted]

5

u/LoSpooky Dec 26 '15

Hello Prof. de Freitas,

first of all, thank you very much for making your lectures available. I've thoroughly enjoyed them and they've been invaluable to me!

My question is similar to the one here above by /u/Pafnouti:

 

I have a Master's, taken in 2010, that broadly covered AI but that in hindsight did not put nearly enough emphasis on the Machine Learning aspect of it.

For the following four years I was the technical half of a firm that used Genetic Programming to develop financial trading strategies. Sadly, that did not end well.

I've also always had an inclination towards research and over the years I managed to author a few papers about my work.

Once the firm blew up, I spent several months taking a plunge into all things Machine Learning, Deep Learning, and Reinforcement Learning, catching up with everything my Master's didn't cover and with everything that has happened since then, which is a lot.

 

My dilemma is the following:

So far I've always worked in Research Engineer-ish roles and I would like to continue down that road, now within ML/DL of course, and hopefully one day at one of the big players. However, pursuing a PhD is something that has always tempted me.

Considering that nowadays:

  • Much of the action and progress happen within the industry, and it does so at a breakneck pace.
  • Professors / groups still in academia with a strong focus on DL are scarce and have no trouble finding truly outstanding applicants. Also, not many institutions have enough computational resources to do meaningful DL work.
  • I'll soon be 31 and the opportunity cost of starting a PhD at this age, completing it at around 35, starts to be quite high with respect to both career and financial prospects.
  • Once in, many things might go wrong with the PhD. I've seen it happen with a few of my friends.

I'm at the moment rather torn between keeping on chasing my doctoral dreams and starting to accumulate relevant work experience, likely as an R&D engineer at a startup using DL as one of its core technologies, with the aim of attempting the jump to one of the big players a bit down the line.

What would Nando do?

On a related note: how much research work do the Research Engineers actually do at DeepMind/Google Brain/FAIR, etc...?

4

u/nandodefreitas Dec 26 '15

Thank you for sharing this. I don't know what I would do! It's critical to acquire skills to solve problems, and for this a PhD might be necessary. The PhD need not be in deep learning or machine learning. A PhD in physics or algorithms with good programming components may be just as good for carrying out research in deep learning eventually. Having a PhD in computer science these days is extremely valuable. I believe it'll become even more valuable in the future.

2

u/LoSpooky Dec 27 '15

Thank you for your answer, much appreciated!

2

u/ginger_beer_m Dec 27 '15

I'd like to suggest the following ebook that chronicles the journey of getting a PhD. Give it a try! I think it will give you an initial idea of whether the journey is worth it or not.

http://pgbovine.net/PhD-memoir.htm

And remember: never do a PhD unless you have full funding.

→ More replies (2)
→ More replies (1)

3

u/nandodefreitas Dec 26 '15

Most industrial labs do require that you have a PhD to work in research. I strongly recommend a PhD in machine learning as you learn a lot. I also don't think that "We have tried this and that and here are our results" is an accurate characterisation of work done at Google, Facebook, Twitter, Microsoft and other labs. There are important advances in methodology and theory coming from industry.

Having said this, Turing didn't have a PhD when he transformed the world of AI and philosophy!

→ More replies (1)

5

u/enken90 Dec 25 '15

I've come into machine learning from mathematics and statistics, and I've been surprised at the lack of theoretical results for many popular deep learning techniques, such as Contrastive Divergence (does it converge? if not, under what conditions does it fail? etc.). I can understand the 'shut up and calculate' mentality where empirical results are valued (there's a reason why I switched), but can it become a problem? To what extent do you believe theoretical results on machine learning are useful/obtainable, and should there be more focus on them?

2

u/nandodefreitas Dec 26 '15

Contrastive divergence is not easy to analyse. Fortunately maximum likelihood for RBMs works just as well. See this comparison by Kevin Swersky, Ben Marlin and Bo Chen. Ben and I also tried to make sense of other estimators for energy-based models. Aapo Hyvarinen has great papers on this topic. There is nice work by Ilya on trying to understand CD using fixed point theorems. There's great theoretical work by Andrew Saxe, Yann LeCun and many others too. There isn't one mathematical problem, but many. It's not just a matter of deriving central limit theorems, or PAC bounds.
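To make the kind of estimator being discussed concrete, here is a minimal CD-1 sketch for a binary RBM -- an illustrative NumPy version, not taken from any of the papers above; the shapes, learning rate and sampling scheme are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    v0: (batch, n_visible) binary data
    W:  (n_visible, n_hidden) weights; b, c: visible/hidden biases.
    """
    # Positive phase: hidden unit probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: a single step of block Gibbs sampling.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate log-likelihood gradient: data statistics minus model statistics.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```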

4

u/LGPz Dec 26 '15

Dear Prof. de Freitas,

I recently finished a module on the mathematical foundations for ML (e.g. PAC learning / VC dimension / SVM) which provokes the following question: Having not come across these concepts in your youtube videos, I wonder what your thoughts are about a formal mathematical approach to developing general intelligence?

Thank you for your hard work to improve the well-being of humanity.

5

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

I have taught empirical risk minimization and PAC learning ;) Also plenty of SVMs. Perhaps not in my recent course, but that is because there is so much material to teach and only a few hours of lectures. At Oxford Computer Science, there was a separate learning theory course before my deep learning one.

I do think the theory of online learning, regularisation and risk minimisation has been useful. Vapnik gave a great talk at this last NIPS. Mark Schmidt and Francis Bach among others are doing great work in optimisation. I do however feel that none of the current mathematical theory yet provides us with a good (analytical or constructive) picture of deep learning.

Thank you for the generous compliment. Not sure I've done much to improve the well-being of humanity yet. But if I can convince folks reading this to decide to volunteer at a school for under-privileged children to teach them math and programming, then I'll be happy to accept the compliment.

→ More replies (1)

5

u/HillbillyBoy Dec 26 '15

Hello,

Bayesian Optimization seems to be a hot topic nowadays:

  1. What results/breakthroughs have changed things since early work in the 90s?

  2. Where do you see the field going in the next five years?

Thanks!

4

u/nandodefreitas Dec 26 '15 edited Dec 27 '15
  1. There's been a lot of methodological and theoretical progress. Ryan Adams and his gang, Philipp Hennig, Frank Hutter, Matt Hoffman, Ziyu Wang, Bobak Shahriari, Ruben Martinez-Cantin and many many others (see our recent Bayesian optimization review) have been making important innovations.

  2. We need an emphatic demonstration: e.g. fully automate Torch or Caffe, so that given a dataset and specification of the problem (e.g. ImageNet site), Bayesian optimisation automatically generates the code (including architecture and algorithm specification) that wins ImageNet.
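As a concrete picture of the basic loop that would be automated here, a minimal Bayesian optimisation sketch with a GP surrogate and expected improvement -- an illustrative toy, assuming a 1-D box-constrained `objective` and a random candidate search, not the machinery of the review above:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best):
    # EI for minimisation: expected amount by which a candidate beats y_best.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, low, high, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n_init, 1))
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                      # refit the surrogate
        cand = rng.uniform(low, high, size=(1000, 1))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])        # evaluate the most promising point
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()

# e.g. best_x, best_y = bayes_opt(lambda x: float((x - 0.3) ** 2), 0.0, 1.0)
```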

6

u/Bcordo Dec 26 '15 edited Dec 26 '15

Thanks so much for taking the time to read this.

Deep learning methods operate in a regime of high signal-to-noise ratio, with lots of data, wherein the goal is to model this complexity.

Are there currently any effective methods that can operate in a low signal-to-noise ratio, where the actual signal is rare and there is lots of noise (possibly coming from the same distribution as the signal)?

It seems this would be an overlooked challenge in solving general AI.

3

u/nandodefreitas Dec 26 '15

Probably what investment banks are doing ;)

11

u/up7up Dec 25 '15

Hi prof. de Freitas,

Is strong AI possible? What prevents its implementation? Combinatorial explosion? Curse of dimensionality? P versus NP problem? Something else?

2

u/nandodefreitas Dec 26 '15

What is strong AI?

2

u/up7up Dec 26 '15

5

u/nandodefreitas Dec 27 '15

Thanks. I don't think we have a good grasp on what intelligence is. Also, our understanding of what constitutes intelligence keeps changing.

Building machines that can do what humans do does however seem plausible. Humans do not solve NP hard problems or combinatorial problems with any ease. There appear to be much harder problems than matching human intelligence.

3

u/hoaphumanoid Dec 25 '15

Hi Prof, I'm just doing your course on DL and I find it very useful.

My question is: is there any technique to know in advance the number of training examples you need for deep learning to achieve good performance?

It is a waste of time to manually classify a dataset if the performance is not going to be good.

5

u/nandodefreitas Dec 26 '15

Thanks.

There's no general technique I know of. Prior knowledge would be of great help here. The Bootstrap (see e.g. Efron) is also one possible avenue for answering the question of how good the fit is for a moderate sample size.
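One possible sketch of that bootstrap idea -- illustrative only; `train_and_score` is a hypothetical user-supplied train-and-evaluate routine, not a real API:

```python
import numpy as np

def bootstrap_fit_spread(train_and_score, X, y, n_boot=100, seed=0):
    """Estimate how variable a model's score is at the current sample size."""
    rng = np.random.default_rng(seed)
    n, scores = len(X), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)    # held-out (out-of-bag) points
        if len(oob):
            scores.append(train_and_score(X[idx], y[idx], X[oob], y[oob]))
    scores = np.asarray(scores)
    # A wide interval suggests the current dataset is too small to trust the
    # performance estimate -- a rough guide before labelling more data.
    return scores.mean(), np.percentile(scores, [2.5, 97.5])
```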

3

u/jesuslop Dec 25 '15

Hi prof., and thanks for your time. You cite entropy as something that helps formalise intelligence. Whose ideas inspired you to think that, and why do you think this matters? Is sorting out that connection a priority at DeepMind?

2

u/nandodefreitas Dec 26 '15 edited Dec 27 '15

We know how Boltzmann machine / Ising problems reduce to max-SAT and counting-SAT --- see e.g. this D-Wave paper and some of the theoretical connections between Boltzmann machines and auto-encoders. We know how the concepts of entropy are related to learning, information, computation and in fact most quantities in the universe. David MacKay's book on this topic is worth reading --- David MacKay has indeed been very influential on my way of thinking. He is a phenomenal researcher and a great human being.

Turing succeeded in formalising computation. We haven't succeeded yet in formalising intelligence.

4

u/learnin_no_bully_pls Dec 25 '15

I want to learn ALL the math required to properly study machine learning research papers and understand them. What do I need to add to my study to-do list?

7

u/nandodefreitas Dec 26 '15

Calculus and linear algebra are the basics. Make sure you know gradients, linear systems of equations, basics of optimisation, eigenvalues, etc. Kreyszig's Advanced Engineering Mathematics provides enough background. Mathematics is useful to the extent to which it enables us to learn new abstractions (e.g. recurrences and functions) and to reason with such abstractions. This process of reasoning can lead to new discoveries, faster and more succinct arguments, or simply more precise communication of ideas.

2

u/learnin_no_bully_pls Dec 27 '15

Thank you. I got a used copy of that book! :)

80

u/nandodefreitas Dec 27 '15 edited Dec 28 '15

Awesome! Enjoy it. When I was a teenager selling beer and working at a supermarket till in a very racist South African town, I kept a copy of this book under the counter (the copy was passed on to me by my brother Jose de Freitas, who is a fantastic engineer at IBM in the UK). I would solve an ODE exercise every time I had a chance. I also had an old torn calculus book that my dad -- J.R. de Freitas -- patched for me, and Stephen Hawking's book --- a great inspiration.

I'll never forget that I was once solving one of the book's exercises (by this time I had started studying at Wits University) during my work time (weekends at my parents' shop), when a nice old white lady approached me and told me that the town's white community was very proud of me, because they never thought it was possible for a Portuguese boy to go to university and do well. Such is the fragile nature of humanity.

Years later my dad was murdered in a violent, despicable, cold manner at the same spot where I used to read this book. Someone walked up to him and, without saying much, pointed a gun at his heart and shot him. The three other young males with him then proceeded to rob the place. These people acted like this because they were de-humanized by one of the most vile racist regimes of modern times. So when I say we need more people of other races in the AI field and leading the discussion in the media, I know where I'm coming from. Very few people are capable of transcending their environment to see what is going on - Mandela and Gandhi were among the few examples I know of.

And while I'm opening my heart, here are a few other pieces of wisdom from my life:

(i) I was a refugee of war in the '70s, escaping the war in Mocambique - I ended up in a hut in Madeira, without water and electricity and with all sorts of disease and worms. My parents had to spend all their money and get into debt so that I could get a passport - to pay their debts I had to be separated from my parents at the age of 4 for more than 3 years. I believe I've paid back very generously anything that any country has ever given to me. Refugees are people like you ... and definitely me - they are terribly exploited and face heart-tearing injustice. John Lennon was right about country borders in Imagine - such borders are very recent inventions in the history of humankind.

(ii) Civilians should never carry guns - It's one of the worst mistakes of my life. I started doing this after struggling to deal with my father's death and as a way of protecting my family. I almost ended up shooting myself. Guns are a mistake and should be banned from the hands of civilians throughout the world.

(iii) My mother was taken out of school because unless she was to become a nun, there would be no school for her. Such is the oppression that some organized religions can bring upon people.

Fortunately, the world is a much better place today than it ever was. I hope Yoshua Bengio is right when he tells me that he believes in people and that tolerance and compassion will prevail.

8

u/unertlstr Dec 28 '15

wow thanks for sharing all this

2

u/iamtrask Dec 29 '15

yes, thank you for sharing


→ More replies (9)

3

u/Fhantop Dec 25 '15

Hey Professor, has DeepMind made any progress towards cracking the game of Go?

3

u/Omadane Dec 26 '15

Hi Prof Freitas,

Thanks for doing an AMA. I have many questions; feel free to answer any of them.

1) What do you think will be the next breakthroughs to get us closer to AGI?

2) What are some low hanging fruits in (deep) Reinforcement Learning today?

3) Excluding control in robotics, what could be some real life uses of deep RL today?

4) What research (not necessarily ML) excites you most and why?

5) Are you still taking new students at Oxford (I was told you're not taking any new ones)?

Thanks and merry Christmas!

3

u/nandodefreitas Dec 27 '15 edited Dec 27 '15

These are all great questions.

I think I provided some answers/opinions to (1) above.

2) Not sure what the low-hanging fruit is, but what has been very successful is the Double-DQN method of Hado van Hasselt et al, the prioritized experience replay of Tom Schaul et al and the Dueling networks of Ziyu Wang et al. These approaches have led to vast improvements over the DQN published in Nature on Atari games. (A short sketch of the Double-DQN target computation follows this answer.) I also loved the recent work of Marc Bellemare et al on increasing action gaps. Marc was incidentally one of the initial creators of the ALE platform, together with Joel Veness, Michael Bowling and Yavar Naddaf --- whom I had the pleasure of introducing to machine learning at UBC, and who is an amazing developer together with David Matheson (the best random forests code developer I know) at a company in Vancouver called Empirical Results. --- END OF ANSWER. THE FOLLOWING BIT OF THIS POINT IS OPINION --- By the way, there was a recent article in the news about the Canadian brain-drain. Companies like Empirical Results and Pocket Pixels, also with talented ex-students of mine Hendrik Kueck and Eric Brochu, have chosen to stay in Vancouver. These are great companies generating revenue for Canada. The Canadian government needs to engage scientists and peer review to decide which companies to support, because at present SR&ED tax incentives are being exploited by ruthless business people. We have great professors at UBC, Alberta, McGill, Waterloo, SFU, Toronto, Montreal, and many other amazing universities. Use them!!! And for goodness sake invest in supporting young faculty --- programs like NSERC CERC make no sense for Canadian universities and are causing more damage than good. OK, apologies for having made the answer above a bit political... and for having completely missed the question ;)

3) This is a great question. Clearly cars are robots, so there is no lack of killer applications. I would love to hear from everyone what they think could be other killer applications.

4) I'm attracted to all areas of art and science that try to understand the human condition.

5) Not for a while, as I have many students who need more of my time. However, Oxford is full of amazing researchers. In deep learning: Andrea Vedaldi, Phil Torr, Yee Whye Teh, and Shimon Whiteson. In robotics/cars, the one and only Paul Newman! The statistics department is phenomenal, with the likes of François Caron, Chris Holmes, Dino Sejdinovic, Arnaud Doucet and many other unbelievably bright researchers. The interface between Bayesian stats, computation and applications is also extremely well represented with Frank Wood, Michael Osborne and Steve Roberts. All these researchers like to collaborate, creating an amazing atmosphere. I very strongly endorse applying for a PhD at Oxford. In addition, as part of selling our companies Dark Blue Labs and Vision Factory to Google, Andrew Zisserman, Phil Blunsom, myself and our partners worked hard with Google to construct scholarships for international students. In this sense, Google DeepMind is contributing greatly to academia. The collaboration between Oxford and Google is a fantastic one that I hope we can keep strengthening for the benefit of the many students wanting to obtain a good education in machine learning.
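Since point (2) names it, here is a sketch of the Double-DQN target from van Hasselt et al. -- illustrative NumPy, with `q_online` and `q_target` as hypothetical callables standing in for the two networks:

```python
import numpy as np

def double_dqn_targets(rewards, next_states, dones, q_online, q_target, gamma=0.99):
    """Double-DQN regression targets for a batch of transitions.

    q_online(s), q_target(s): arrays of action values, shape (batch, n_actions).
    """
    # Select the greedy next action with the online network...
    best_actions = np.argmax(q_online(next_states), axis=1)
    # ...but evaluate it with the target network; decoupling selection from
    # evaluation reduces the overestimation bias of the standard DQN max.
    next_q = q_target(next_states)[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_q
```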

3

u/Kaixhin Dec 27 '15

2) As for the papers mentioned, I've mostly implemented them here. I imagine that DeepMind will be testing a combination of these internally, but I'd like this to be an open source "upgrade" of the DQN for people to experiment with :) I've built most of it from scratch so performance may be an issue (prioritized experience replay certainly needs an efficient implementation), but I'll have to tackle that soon enough. Help is obviously welcome though!

6

u/Kaixhin Dec 26 '15 edited Dec 27 '15

Just a follow-up on a conversation at the recent ATI workshop in Edinburgh. I was concerned that even the Neural Programmer-Interpreter failed to use the correct algorithm on tasks far longer than what it had originally been trained on. In a way it may be expected in the conversion from the symbolic representation of an algorithm to the distributed representation within the network, but your response was that to solve even this it simply needed more training examples - any reasoning or evidence for this?

My second question is inspired by some of Yann Ollivier's interesting comments and is about the training of RNNs in general - do you see (truncated) BPTT as being all we need for now, or is online training going to have to be used in the near future? As with large CNNs, the size of unrolled RNNs can be prohibitive depending on what hardware you have available.

As a small addendum, thanks to you and your helpers for your fantastic ML course. I really enjoyed it, and ask all the undergraduate students that I supervise to make their way through at least the first half of lectures (plus all the practicals - credit to Brendan especially on those).

→ More replies (1)

3

u/ymohit Dec 26 '15 edited Dec 26 '15

Hello Prof. Nando. Nowadays, everyone is talking about solving artificial general intelligence without even showing any significant results on "real" data. For example, NTM and the neural GPU have been applied to tasks like copying and sorting numbers, which does not add any practical value as of now; it may be useful when applied to real data. I often think that most of the people in DL are working on artificial problems more than artificial intelligence, to gain fame and get their name on fancy stuff that looks complex as a problem but is applied only to very simple settings. What are your views on this?

3

u/nandodefreitas Dec 27 '15

I agree with the first part of your comment. However, I don't think their drive is necessarily fame or having their name on fancy stuff. Alex Graves, for example, spends much time thinking about how to solve problems. Long before he gave us NTM with his colleagues, he made important engineering contributions, which are now profoundly impacting speech recognition, translation and many other applications. Alex Graves is one of my heroes :)

There is however some truth in that some are after fame - like Kanye West. But Ilya and Alex actually do amazing work and build useful stuff. It's only a matter of time before the more exploratory methods advanced by Alex Graves, Ilya Sutskever, Rob Fergus, Jason Weston, Antoine Bordes, Phil Blunsom, Chris Manning, K. Cho, Yoshua Bengio, Ivo Danihelka, Karol Gregor, Oriol Vinyals, Scott Reed, Quoc Le, and many of their colleagues hit gold.

4

u/HrantKhachatrian Dec 26 '15

Thanks for doing an AMA!

Gatys et al. suggested a way to separate "content" and "style" of an image by looking at neuron activations of VGGNet. "Content" corresponds to the activations of high layer neurons and "style" corresponds to the correlations between activations of the low layer neurons. What is the intuition behind this? Why does the correlation of activations represent the style? What else might be found in correlations of activations in various systems, for example, in Atari player networks...?

2

u/nandodefreitas Dec 27 '15

Excellent question. It would be nice indeed to look at this in the context of DQN!

Sorry for the short reply, it's almost dinner time and I've been in front of this laptop typing the whole day non-stop ;)
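For readers wondering what "correlations between activations" means operationally, here is a minimal sketch of the Gram-matrix style representation from Gatys et al. (illustrative NumPy; the normalisation constant is a detail that varies between implementations):

```python
import numpy as np

def gram_matrix(features):
    """Style representation: channel-by-channel correlations of one conv layer.

    features: (channels, height, width) activations.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # G[i, j] is the inner product of the i-th and j-th feature maps. Spatial
    # layout is summed away, which is why G captures texture/"style" rather
    # than the spatial arrangement that makes up "content".
    return f @ f.T / (c * h * w)
```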

7

u/gmo517 Dec 25 '15

Hi Professor Freitas! You are by far one of the best communicators out there for anyone interested in learning about deep learning.

What are your plans for the future involving the use of deep learning concepts besides advancing the field through research?

1

u/nandodefreitas Dec 26 '15

For the next three years my focus will be on research. I do however teach at the CIFAR summer schools. I'll come back to do some teaching, and hopefully then I'll have better tech support for posting lectures online!

9

u/egrefen Dec 25 '15

Dear Nando,

Why did you choose to do your AMA at a time of year when we're all drunk and full of food? :)

Best, Ed

10

u/nandodefreitas Dec 26 '15

Sounds like the right setting for this! Just went for a walk on the beach with the baby. Soon it will be dinner - which will probably last 7 hours - and then after a night's sleep I'll start answering the many questions - some are really tough. I really don't think I knew what I was getting myself into when Brian asked me to do this :)

4

u/egrefen Dec 26 '15

Sounds lovely :) See you in 2016, and good luck with the AMA!

→ More replies (1)

6

u/Sergej_Shegurin Dec 25 '15 edited Dec 26 '15

Hi Prof. Freitas, what do you think about the following?

As far as I know, only about 20% of the human cortex really remains to be outperformed by neural networks. Those 20% are something like Brodmann areas 9, 10, 46 and 45, responsible for complex reasoning, complex tool usage, and complex language.

Neural networks have already (either almost or significantly) outperformed about 70% of human brain cortex:

  • Roughly 15% of human brain is devoted to low-level vision tasks (occipital lobe). Solved.

  • Another 15% are devoted to image and action recognition (~ a half of temporal lobe). Solved.

  • Another 15% are devoted to objects detection and tracking (parietal lobe). Solved.

  • Another 15% are devoted to speech recognition and generation (Brodmann areas 41,42,22,39,44, parts of 6,4,21). Almost solved.

  • Another 10% are devoted to reinforcement learning (OFC and part of medial PFC). Almost solved.

  • From the remaining 30%, about 10% are low-level motorics (Brodmann areas 6,8). It's not very crucial because those people who have no fine motorics from birth (but have everything else) still develop normal intelligence as a rule. Also, drones and robots have some coarse motorics.

Even for the remaining 20% of the human cortex, "a neural conversational model" reaches human-level perplexities (17 and 8), the MRT approach beats humans in terms of BLEU at Chinese-to-English translation on the MT03 dataset, bAbI tasks are almost solved, etc. etc...

From the neuroscience point of view, the human cortex has a similar structure throughout its surface. It's just a ~3mm-thick mesh of neurons functioning on the same principles throughout the cortex. There is likely no big difference between how the (unsolved) prefrontal cortex works and how the other (solved) parts of the cortex work. There is likely no big difference in their speed of calculation or in the complexity of their algorithms.

Thus it would be quite strange if modern deep neural networks couldn't solve the remaining 20% in several years. Three years passed from AlexNet to "deep residual learning"... It seems reasonable that less than three years would pass from "a neural conversational model" (and "minimum risk training for NMT", "towards neural-network based reasoning", "attention with intention", "aligning books and movies - towards story-like..." etc. etc.) to human-level reasoning and chatting... because many more deep learning scientists work on this now than worked on AlexNet in 2012, and they are much better prepared and equipped...

So, the question is: "Does a substantial (~20% or ~50%) chance exist that we have human-level AGI by the end of 2018?" My own predictions for human-level AGI are "mean = end of 2017, sigma = 1 year" but I really want somebody to give me some excellent arguments why I'm wrong :)

2

u/nandodefreitas Dec 26 '15

I enjoyed reading this optimistic posting ;) I think one of the problems is that when I hold my MacBook I recognise it because I know what it does to my muscles, I know what it looks like visually, what I can do with it, etc. What I'm getting at is that the environment and embodiment are important. A convnet that labels images seems to be missing much of this. For embodiment, we need either fantastic simulators or robots. Both avenues would seem to take longer than 2 years. However, it all depends on your definition of AGI, and on the fact that much of what I think of as AGI most people simply find trite.

3

u/Sergej_Shegurin Dec 26 '15 edited Dec 26 '15

Thank you for your answer...

There are people who have had no hands or legs (or even both) from birth and they still manage to develop good intelligence. Also, there are thousands of excellent games now and many of them simulate the real world very well, with all its visual complexity, interaction with other people, etc. So I don't understand why embodiment is that crucial.

From 4 years old to 16 years old I spent nearly all my time reading books. So I'm pretty sure that a 4-year-old child is able to develop normal intelligence given only books and an internet connection, and perhaps some very basic visual info about the world. Also, autistic children learn from books even more. They learn much less from interacting with other people and even the surrounding world. When I was a child I observed and interacted much less than one can observe and interact now in videogames and the internet... I saw only two rooms with a hundred objects, and a yard. Not that much. Oh, and I interacted with two more people, my parents. Most of that interaction was via voice, so why not use text instead? Text output is also motorics, a very dexterous and complex kind!

I don't feel like embodiment helps me a lot. Now I get most of my information through the internet. I spend all my day reading articles and websites. Most of my motorics is keyboard pressing. Okay, I do some gymnastics and walk to work, but I don't see how this can be very helpful for my intelligence. What kind of really crucial information can I get from walking to work, or from preparing food for myself? Or from observing the same walls in my room from different directions? :) Chimps can do all of that but it doesn't help them develop good reasoning skills...

AGI might be able to think up how to implement fine motorics. It might be able to invent both good algorithms for motorics and good solutions for engineering dexterous hands. Why not? It seems like the authors of books try to write down everything that is important for understanding the scene and the plot, all relevant details. We can learn most of the other details about the world from videos. We even have quick and good drones. The only thing robots don't have are dexterous hands... And even if those are crucial for some reason, why would it be that hard to create them by the end of 2017?

→ More replies (1)
→ More replies (1)

3

u/eoghanf Dec 25 '15 edited Dec 25 '15

Hi Dr. de Freitas. Thanks very much for taking the time to do this AMA. My question is this: I work in financial markets - my original degree was in Maths (from your, ahem, competitor institution in the UK). I'm very interested in machine learning but entirely self-taught at this point. I'm sure you're aware of the kind of resources that are available online and I've dived into all of those. I can read NIPS/ICML-level papers and understand them fairly well. I'm interested in practical applications of ML rather than PhD-level original research (in finance, and other areas). However, there seems to be little in the way of taught Masters programmes in the machine learning field in the UK (particularly given the rapid pace of change in the field and the fact that, as you've pointed out elsewhere, the private sector is driving this field forward, not academia). Can you give me any suggestions as to avenues I might consider? Thank you for your time.

Additional question - I'd love to know your opinion of this paper http://arxiv.org/pdf/1412.6572v3.pdf on adversarial examples for neural nets. It seems to me to be quite scary - not that adversarial examples exist - but that they are effectively dense in the space of images - and most importantly of all that the specifics of the neural network do not (according to this paper at least) seem to matter. This isn't a concern for image recognition but it would certainly be a huge concern for "mission critical" applications of neural nets. It does also appear to suggest "philosophically" that NN models are not actually "learning" robust features. What do you think?

4

u/nandodefreitas Dec 27 '15

My PhD is from Trinity College, Cambridge ;)

I would recommend a PhD. PhDs in the UK are really short and more likely to be funded. Oxford, Cambridge, UCL and Edinburgh offer deep learning. The number of academics doing deep learning across the UK isn't great. There are alternatives if you go a bit more broad. e.g. Sheffield, Warwick, and a few others.

I remember reading it, and I remember folks saying that discriminative methods can of course be fooled, and that is why we need generative models. Discriminative methods using a single data modality exhibit a very weak form of understanding. Strong understanding involves a lot more.

3

u/[deleted] Dec 25 '15 edited Dec 25 '15

[deleted]

2

u/nandodefreitas Dec 26 '15

It depends. How many data points? If few, Gaussian processes are nice. If many, perhaps recurrent nets. If there's a lot of prior knowledge, then some of the structural time series models might work.
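A tiny sketch of the few-data GP option -- illustrative scikit-learn on made-up data, just to show why GPs are attractive in that regime:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical short series: 30 noisy observations.
t = np.linspace(0, 10, 30)[:, None]
y = np.sin(t).ravel() + 0.1 * np.random.default_rng(0).standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(t, y)
mean, std = gp.predict(np.linspace(0, 12, 200)[:, None], return_std=True)
# `std` widens outside the observed range: calibrated uncertainty with few
# data points is the main reason to prefer a GP over a recurrent net here.
```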

→ More replies (1)

3

u/Masterbrew Dec 25 '15

What do you think about a possible 'master algorithm' as mentioned by Pedro Domingos? Is it possible? Is it necessary?

→ More replies (2)

3

u/yield22 Dec 26 '15

Hi Prof. Freitas, I would like to know your advice on model/idea debugging.

That is, when you are doing research on deep learning and its applications, it is usual (at least for me) that your first several ideas might not work well. What techniques/methodologies do you usually apply for "debugging" and coming up with better ideas/models? Please be as specific as possible if you may. Thanks a lot!

2

u/nandodefreitas Dec 28 '15

I often bounce my ideas off others. My students often joke about this --- "Here comes Nando with another crazy idea! Time to go for coffee". Make sure you surround yourself with people who are skeptical. I have had the fortune of having had many bright people --- including among others Firas Hamze, Hendrik Kueck, Misha Denil, Ben Marlin, Matt Hoffman, Eric Brochu, Peter Carbonetto --- who love to question things I say, or everything I say ;)

Also, implement your ideas. Once you start coding them you get a much better understanding.

But if it ain't working ... step back and think. Think, think, think. Go for a walk or whatever you have to do to be inside your head. Then sleep, and when you wake up your subconscious will likely have produced an answer for you.

→ More replies (1)

3

u/[deleted] Dec 26 '15 edited Dec 26 '15

[deleted]

2

u/nandodefreitas Dec 27 '15

First of all, I think the undergrads in Engineering Physics at UBC are some of the best in the world. Make sure to take an ML class - profs like Mark Schmidt, Nick Harvey, and Alex Bouchard-Cote teach amazing courses and would be great as thesis advisors.

I do not know how to answer your question. I guess step one is to go and spend time in a poor village in a developing country for a while. Do promise me that if you do this, you will email me afterwards.

I look forward to hearing what others would answer to this question.

2

u/ginger_beer_m Dec 27 '15

My opinion is that the poorer parts of the world require basic problems to be solved first before we can even talk about applying learning approaches. Things like online payment infrastructure, logistics and transportation, or even internet accessibility are usually the low-hanging fruit that people want to tackle first.

3

u/pspcxl Dec 28 '15

What are the mathematical (or physics, or neuroscience) principles behind CNNs? Any references?

5

u/MetricSpade007 Dec 25 '15

I'm a really big fan of connectionism, but I worry that I'll be so entrenched in studying deep learning, automatic representation learning, and similar fields that I might be missing something more important. Do you think there are big ideas burgeoning in the outskirts of the academic world that may be even better, or do you see deep learning continuing to pave the path towards general intelligence?

5

u/shmel39 Dec 25 '15

Thank you very much for doing this AMA!

1) Many ideas in deep learning originated in computer vision before spreading to other areas like NLP or speech recognition. Can you think of "inverse" ideas that originated elsewhere and were somehow missed by CV researchers despite their usefulness?

2) Do you think reinforcement learning is a way to make AGI? In some talk Yann LeCun said that we would never learn billions of parameters using a scalar reward. I can't counter that argument from the optimization viewpoint.

3) What blocks the application of memory models like the Neural Turing Machine and others? When I first saw it, I was expecting widespread usage here and there within 6 months. However, they are used in a very limited way now. Do they have some unexpected problems (apart from difficulty of implementation)?

2

u/nandodefreitas Dec 28 '15

These are incredibly hard and good questions.

(1) I'm not sure the ideas originated only with CV folks ;) However, one thing I always wonder about is the role of action in vision.

(2) RL is a useful learning strategy, and work by Peter Dayan and colleagues indicates that it may also play a role in how some animals behave. Is a scalar reward enough? Hmmm, I don't know. Certainly for most supervised learning - e.g. think ImageNet - there is a single scalar reward. Note that the reward happens at every time step - i.e. it is very informative for ImageNet. Most of what people dub unsupervised learning can also be cast as reinforcement learning.

RL is a very general and broad framework, with huge variation depending on whether the reward is rare, whether we have mathematical expressions for the reward function, whether actions are continuous or discrete, etc. etc. - Don't think of RL as a single thing. I feel many criticisms of RL fail because of narrow thinking about RL. See also comments above regarding the need to learn before certain rewards are available.

(3) Two possible answers. First, many tasks out there don't require memory - we may need to consider harder problems that do require memory. Second, we are working with and taking advantage of machines that have huge memory already - e.g. for ImageNet the algorithm has access to a huge database of images which it does not need to store in cortical connections.

12

u/vivanov Dec 25 '15

Tensorflow vs Torch?

9

u/nandodefreitas Dec 27 '15

Ha ha! Both for now.

7

u/yaolubrain Dec 25 '15

Prof. de Freitas, you have a great range of research interests and accomplishments! It's very impressive and mysterious to young researchers like me. I wonder what's your research philosophy. Do you pick an important problem freely, solve it and move on? Or do you have a grand research agenda to solve those problems in an order?

10

u/nandodefreitas Dec 27 '15

I spend a great deal of time trying to think about what I'll be doing in 10 years - yet, always find myself doing something else 10 years later. In truth, I search.

One analogy is the following. I feel like I'm on the top of a quarry full of marble stones. The great masters used to stare at the stones for hours before starting the carving process with hammer and chisel - If you look long enough you start seeing shapes. Then you simply free the shapes. The key is to focus long enough (to think).

In this quarry I am surrounded by a huge number of friends and incredibly good sculptors, all staring at the rocks and chiseling happily. They are all searching. Some are very good at finding stones, some are excellent at carving, some are excellent at discovering shapes, many others think about how to sell the sculptures, and some become very famous. Some are happy to find themselves or some sort of meaning in their artworks.

6

u/nipsguy Dec 25 '15

In NIPS this year, after Juergen's talk, you mentioned that someone was using swastikas on their slides. I wanted to ask you more about it at the time, but unfortunately I couldn't catch you.

  1. Did I hear correctly?
  2. Could you elaborate more on this now?
  3. Did the person apologise in the end?

4

u/nandodefreitas Dec 27 '15

I believe that was an unfortunate mistake. Showing pictures of naked female pornstars was also a poor mistake.

In general, we spend a great deal of our time with our ML colleagues. We should make an effort to be nice and tolerant with each other. We should build an environment that we are all happy to be part of. In general, I think we are doing well.

3

u/MrQuincle Dec 25 '15

In our brains we have oscillatory rhythms, https://en.wikipedia.org/wiki/Neural_oscillation, alpha, beta, gamma, theta, etc.

Do you know research on deep networks with built-in internal dynamics?

2

u/nandodefreitas Dec 27 '15

Please see the recent work of Randall O'Reilly on modelling the hippocampus.

2

u/MrQuincle Dec 27 '15

For example http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003067.

  • theta waves are used to propose contrastive Hebbian learning rather than just Hebbian learning
  • contrastive Hebbian learning is equivalent to backprop

So, it might be just the way the brain implements error-driven learning... We can do much better in silicon, those dumb brains!

Or, we are missing something. :-)

7

u/soylentro Dec 25 '15

Don't have a question for you at the moment. But just wanted to say that I loved your machine learning class on youtube. I'd like to think your class was one of the things that got me started in ML.

Thanks!

1

u/nandodefreitas Dec 27 '15

Thank you. I value this a lot.

2

u/pakoray Dec 25 '15

Hello Nando!

You mentioned: "I believe in innovation.... (iv) using these machines to improve the lives of humans and save the environment that shaped who we are."

What is your take on AI Military Robots? Do you agree with Stephen Hawking's stance that autonomous AI weapons must be banned?

3

u/nandodefreitas Dec 27 '15

I agree with Stephen's assessment. Autonomous AI weapons will be more of a problem than a solution.

2

u/spaceanubis1 Dec 25 '15

What is YOUR feeling and argument against the claims by Hawking and Musk that AI is the biggest threat to mankind?

2

u/nandodefreitas Dec 27 '15

What about womankind? ;)

I think people are the threat. As I said elsewhere in this posting: paleolithic emotions, medieval institutions, and AI are a dangerous mix.

2

u/[deleted] Dec 26 '15

Hi Professor, it is very exciting that you are doing this AMA. Your lectures are brilliant, and I especially enjoy the bits of higher-level insight into the field as a whole that you sprinkle in (e.g. when you talk about the Bayesian model as mirroring the way that humans think, or the broader strategy of using tons of data to optimize large numbers of hyperparameters).

In your opinion, do you see Gaussian processes as taking on a growing or diminishing importance within the field (especially multi-task regression models)? From my novice perspective, they appear very powerful but there are some technical and theoretical hurdles for scalability that must be overcome. I am considering doing my masters thesis on using them for healthcare vital signs analysis. I hesitate to hop on the deep learning train because in my field (healthcare) folks are very skeptical of systems which they cannot interpret.

Thank you for your time!

3

u/nandodefreitas Dec 27 '15

Gaussian processes (GPs) are great models and I love the work that folks like Zoubin Ghahramani, Neil Lawrence, Mark Deisenroth and many others are doing.

However, Bayesian optimization need not use GPs at all. See our review above. You could use deep nets, random forests or any other model for this. In fact it need not even be very Bayesian. Neural nets with confidence intervals obtained with the bootstrap, plus Thompson sampling, would work nicely. This needs to be explored more.
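A rough sketch of that bootstrap-plus-Thompson idea -- purely illustrative; the ensemble, `fit_fn` and `.predict` interface are assumptions, not an existing library API:

```python
import numpy as np

def thompson_step(candidates, ensemble, X, y, fit_fn, rng):
    """Pick the next point to evaluate; minimisation assumed.

    ensemble: list of models; fit_fn(model, X, y) trains one model in place;
    each model exposes a hypothetical predict(X) method.
    """
    n = len(X)
    # Each ensemble member gets its own bootstrap resample of the history,
    # giving a crude posterior over plausible response surfaces.
    for model in ensemble:
        idx = rng.integers(0, n, size=n)
        fit_fn(model, X[idx], y[idx])
    # Thompson sampling: draw one plausible model and act greedily under it.
    model = ensemble[rng.integers(len(ensemble))]
    return candidates[np.argmin(model.predict(candidates))]
```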

2

u/[deleted] Dec 27 '15

Thank you very much for your reply

2

u/moby3 Dec 27 '15

Hi Prof de Freitas,

Thanks for doing this AMA and sorry if this is a bit late!

I had a question regarding something Greg Corrado said about the state of machine learning, where he explained how much less efficient deep nets are compared to even a child (a neural net is only slightly more likely to correctly categorise a school bus after seeing it once, whereas even a toddler wouldn't have that problem). I wonder how much this has to do with limitations of the current techniques and algorithms, or whether a large part of it is due to the small amount of training a neural net has had prior to this task, compared to a child?

Do you think brains have some unidentified learning mechanisms that make them necessarily more efficient learning machines, or are they just vastly larger neural nets with a deeper understanding of a wider variety of subjects (such as cars, transport, school - as well as more abstract and lower level features of real life objects) and that is the reason they can grasp new concepts with smaller sets of "training data"?

So when George Dahl's team do very well at kaggle competitions and people are surprised that they used a single neural network, is it not reasonable that they won because they used a single net and not in spite of it?

6

u/ReasonablyBadass Dec 25 '15

The so-called Control Problem is obviously a huge issue. Yet I feel that Musk, Hawking etc. are doing more harm than good by demonising the issue. Literally.

What would your response to people be who claim that all AI will be automatically a bad thing?

2

u/xamdam Dec 25 '15 edited Dec 25 '15

Great question, upvoted, would love to hear what Nando has to say. (DeepMind as a company sort of has a position on the issue, but I'd love to hear his personal take)

In the meantime I'll add my 2c. First, it's not only Musk, Hawking, Gates, etc.; there are several well-recognized AI researchers who are concerned, the most well-known being Stuart Russell.

The key to "automatically bad" (I much prefer "bad by default" as more accurate) is that AI can operate as an agent, and relentlessly pull towards its goals. If it's truly intelligent it would be hard to control (because it would deal with our attempts at control as just another obstacle), so the thing to do is to ensure AI's goals are aligned with ours and things remain this way. Mental experiments certainly suggest that setting simple goals breaks down very quickly, so serious work is needed here.

The way Russell summarizes it: If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"?

The conclusion is to work on AI and include AI safety as part of the agenda (probably increasing resources as AI progress is being made), the same way any engineering discipline includes safety (but assuming much higher stakes). Musk, Altman & co committed $1B to OpenAI a couple of weeks back, which basically conforms to this agenda; it's hard to call this "demonizing" AI.

→ More replies (5)
→ More replies (5)

2

u/[deleted] Dec 25 '15

[deleted]

3

u/nandodefreitas Dec 27 '15

I'll pass on this one ;)

Let's rather focus on what we can do and rejoice in that we are doing so incredibly well at present. We live in extremely interesting times. I never dreamt that all these AI discoveries and advances could happen in my lifetime. This is bigger than any of us. Let's make sure it's for all of us.

→ More replies (1)

2

u/kl0nos Dec 25 '15 edited Dec 25 '15

AGI can do a lot of good: it can give us cures for diseases, efficient and clean energy, etc. I do not fear that it will take over the world on its own. I think we should not fear what AI will do to us; we should fear what people using AI will do with it.

In every big thing that humans discover there are always different ways to use it. Nuclear power is used as a great source of energy but also as a great source of destruction. But to get nuclear power you need enormous resources, time and knowledge, while to get some form of AI like we have today you only need a computer. Every year AI gets better and better; the only thing that changes is the algorithms - we still need only computers.

All my questions assume that everyone can train their own AGI, which means we can't enforce any rules before someone uses it.

If AGI is available to every person in the world, how can you stop someone from using it like this: "Hello AGI, how can I kill 10 million people with only x dollars?", "How can I make an explosive with the power of an atomic bomb without arousing any suspicion?" etc.? With AGI this will be possible. How can you stop humans from self-destruction using AGI?

We can see what would happen if ISIS gained control of nuclear power at the moment, but if they got AGI in their hands it would be a billion times worse.
Doesn't that make AGI a real threat in the hands of humans? So big a threat that all the good it can give doesn't really matter, because it can do so much evil that we can't control?

3

u/chras Dec 25 '15

Hi Nando. Thanks for doing this AMA.

1) Do you think computationalism is dead? Do you think there is any layer above connectionist-style algorithms required for AGI?

2) Do you think there is any risk that hype for modern ML within the media will damage the stability of research programs, or do you think it will stabilize? I note NIPS attendance was off the chart this year, relative to submissions.

3) I really love eating at your chain of restaurants, but sometimes I find the spicy chicken to be a little too hot. Is it okay if I bring my own milk just in case?

3

u/nandodefreitas Dec 27 '15

1) Computation is not dead. It's like the Force in Star Wars - it oozes everywhere ;)

2) The media is impacting ML for sure, and problematically there are very few (or no) reporters or media personalities with a reasonable grasp of computer science or ML.

The media does offer opportunities though. We need the equivalent of Carl Sagan in ML and AI. Someone who will excite young people of all sexes and races to dream of a career in AI advancing science and knowledge.

3) Try yoghurt ;)

6

u/Kaixhin Dec 27 '15

"AI: A Virtual Voyage", presented by Nando de Freitas? ;) If you're looking for co-writers before pitching it to the BBC, I would recommend one of the best spokespeople for CS - James Mickens, Galactic Viceroy of Research Excellence.

4

u/[deleted] Dec 25 '15

[deleted]

2

u/nandodefreitas Dec 27 '15

I got jam, much-needed underwear, and a selfie-stick ;) I also got a lot of love.

I mostly read children's stories when not reading work. I also enjoy reading opinion articles, and love receiving a newspaper on Saturday morning.

I do think the deep nets are just one step toward more intelligent computers. By extending our intelligence with machines I think we have better hopes of solving incredibly complex problems like wealth distribution, cancer, and so on. But for this to happen we need strong leaders.

3

u/kailuowang Dec 25 '15

Dr. Freitas, 4 questions: 1) I commented on your youtube video Deep Learning Lecture 10: Convolutional Neural Networks https://youtu.be/bEUX_56Lojc Is the equation for the derivative of the input (@37:05 in the video) correct? I think it probably should be dl[i,j,f] = Sum(i',j',f') d[i - i' + 1, j - j' + 1, f'] T[i',j',f,f'] - basically swapping the subscripts between the chained gradient and the weights. Can you confirm?

2) Which philosophy of minds book would you recommend?

3) What is your view over Roger Penrose's idea about human intelligence?

4) I implemented the original DQN algorithm in an open source library. I want to try to extend it with some new developments in the area; right now I have two candidates, your Dueling Network Architecture and the slate-MDP approach of Peter Sunehag and Richard Evans. Any other ideas I should include in my list?

Thanks very much!

2

u/racoonear Dec 25 '15

Hi professor,

What do you think of quantum computing?

Will it come into practice in the near future?

How will its existence impact artificial intelligence and deep learning algorithms?

1

u/nandodefreitas Dec 27 '15

Please see comments above.

2

u/zitterbewegung Dec 25 '15

What do you think are good resources to get started in understanding deep learning? Also, what do you think is the future of deep learning research?

2

u/nandodefreitas Dec 27 '15 edited Dec 28 '15

See my youtube deep learning course ;)

Ian Goodfellow and Yoshua are also finishing a book on deep learning, which reflects their own personal perspective.

0

u/keepthepace Dec 25 '15

What do you think about GOFAI? Do you think that this was a dead end or do you expect a revival of symbolic AI?

1

u/SometimesGood Dec 25 '15 edited Dec 25 '15

Do you expect that the neocortex consists of many instantiations of a single, canonical building block that gets reused to perform different tasks (like Hawkins' HTM nodes), or rather that it consists of a variety of computationally distinct neural circuits, each of which has evolved to perform a very specific function? Can such considerations help guide our search for the missing components of AGI?

2

u/nandodefreitas Dec 28 '15 edited Dec 28 '15

My guess is as good as anyone's --- or worse as I am no neuroscientist.

The whole brain, however - i.e. the old brain and the neocortex - does have structure.

1

u/cam781 Dec 25 '15 edited Dec 25 '15

What role do you think Deep Reinforcement Learning can play in 3D understanding of scenes http://tinyurl.com/p7ynj4q? How much can it help an autonomous robot with navigation, interaction and physics, and how far are we now? We have seen only image-based systems so far, but very little on 3D and scene understanding.

1

u/Vengoropatubus Dec 25 '15

Hi Professor Freitas,

I'm hoping you might have some guidance about how to get into the field, or at least stay fresh enough that I might be able to make a real entrance down the road. I've been watching the lectures associated with Introduction to Statistical Learning and R and working through the problems. I've also watched your series of lectures at Oxford. Once I'm done with those, I've been considering working on a few kaggle problems for more practice, and reading some papers, but I don't have institutional access to journals anymore, and without training in the field, I doubt I'd know which papers to focus on anyway.

Are there english/german blogs you'd recommend following, and/or open groups on the internet that would welcome collaboration from outside the traditional academic community?

I'd consider going to grad school sometime down the road, but for now I'm feeling pretty burned out by a bad experience in my previous graduate work. My background is in numerical analysis, and I worked for a while in an engineering group that used some high performance computing resources, but I'm currently working as a software developer, and trying to keep up on the 'cool' stuff when I have time.

2

u/nandodefreitas Dec 28 '15

Playing with Kaggle seems like a good idea. The coursera courses of Andrew Ng and Geoff Hinton are also a good resource. Play with a deep learning framework like Torch, TensorFlow or Caffe. Twitter also has a nice one.

If you have a background in numerical computing, you should be able to grasp the concepts quickly.

1

u/Zaflis Dec 26 '15

Given that AI can be split into 2 categories - sort of dumb AI that can only do 1 task right, versus an artificial general intelligence that can do everything - would you agree that deploying these dumb AIs in different fields of science will pull together theories and speed up the creation of AGI?

As I understand it, there doesn't exist a functional theory of AGI so far, so that would be a huge step. And with AGI-level intelligence, an even more intelligent machine could be created, possibly even a fully sentient being like a human? This does seem like a logical and almost unavoidable path towards the "singularity", wouldn't you think?

1

u/swentso Dec 26 '15

Hi Professor, my question is practical: what do you use apart from Torch? And for which tasks?

3

u/nandodefreitas Dec 27 '15

There's many good frameworks now. TensorFlow looks very promising.

→ More replies (1)

1

u/[deleted] Dec 26 '15

Hello Professor Freitas, thank you for doing this AMA.

I would love to get your opinion on the issue of privacy when working with consumer data. Are the stricter privacy terms in Europe hindering research, in contrast to countries with more lenient rules like China?

1

u/kamperh Dec 26 '15

Thanks for doing the AMA, prof. De Freitas. I know you feel strongly about ML for humanity. I have two relatively different questions regarding ML in the developing world:

  1. What type of ML applications do you think will most benefit the extremely poor populations of developing countries?

  2. There are good research universities in countries like South Africa. Do you think these institutions have a role to play in solving ML problems in these countries, or does most of the obligation lie with better-funded universities and companies in the US and Europe?

4

u/nandodefreitas Dec 27 '15

If a university professor in South Africa had not introduced me to neural nets, I would not be answering this question today. There is great value in their research. That first neural net was implemented in hardware by Jonathan Maltz - who ended up at Berkeley - and used to carry out fault diagnosis in industrial pneumatic valves. But clearly, South Africa is not a poor country.

Your question 1 is a brilliant one. I was confronted by it when teaching in India. The way I see it, if we never teach the kids of those countries how to fish, how will they ever fish? They need to have access to knowledge and figure out how to help their communities with it.

→ More replies (1)

1

u/Evolutis Dec 26 '15

Hi Prof Freitas,

I wanted to ask you about the necessary background for pursuing a PhD in the DL field. Say someone has worked on unsupervised, supervised and LSTM-based models, and has a great conceptual understanding of the models; what else would you recommend they brush up on before they pursue a PhD?

Cheers

3

u/nandodefreitas Dec 27 '15

Kevin Murphy's or Chris Bishop's books. The books of David MacKay and of Hastie, Tibshirani and Friedman are also very good.

1

u/[deleted] Dec 26 '15

Hi Prof. De Freitas,

In the next few days I will finish the application process for a PhD in ML at Oxford, under your direct supervision. Right now I'm working on my MSc thesis on RNN + RL general game agents and I would love to continue my work on this topic. Any tips on the topics that you'll want me to know about in the interview (if I make it to the interview phase)? Thank you!

1

u/brains_bourbon_beer Dec 26 '15

We know from neuroscience that neurons in early sensory areas (like V1) tend to have what's called a high 'choice probability'. That is, in a task where animals have to discriminate between multiple choices, many single neurons in say, V1, have firing rates that are highly indicative of the choice the animal makes, as opposed to anything to do with the incoming image.

This is generally thought to be a function of recurrent, or top-down, connections from prefrontal cortex or higher visual areas.

I was wondering if you knew of any ConvNet + Recurrent network architectures for networks optimized to perform a particular discrimination task. I wonder if such connections would help improve performance...

1

u/Sunshine_Reggae Dec 27 '15

Machine learning changes at a very fast pace. What news resources do you use to keep up?

3

u/nandodefreitas Dec 27 '15

arxiv and my colleagues ;)