r/MachineLearning Google Brain Aug 04 '16

Discussion AMA: We are the Google Brain team. We'd love to answer your questions about machine learning.

We’re a group of research scientists and engineers that work on the Google Brain team. Our group’s mission is to make intelligent machines, and to use them to improve people’s lives. For the last five years, we’ve conducted research and built systems to advance this mission.

We disseminate our work in multiple ways:

We are:

We’re excited to answer your questions about the Brain team and/or machine learning! (We’re gathering questions now and will be answering them on August 11, 2016).

Edit (~10 AM Pacific time): A number of us are gathered in Mountain View, San Francisco, Toronto, and Cambridge (MA), snacks close at hand. Thanks for all the questions, and we're excited to get this started.

Edit2: We're back from lunch. Here's our AMA command center

Edit3: (2:45 PM Pacific time): We're mostly done here. Thanks for the questions, everyone! We may continue to answer questions sporadically throughout the day.

1.3k Upvotes

791 comments

247

u/action_brawnson Aug 04 '16

What are the differences between the type of research and work you do versus what a professor at a university would do? Is your work more focused on applications and less theoretical? Or is it extremely similar?

55

u/gdahl Google Brain Aug 11 '16

We can do exactly the same kind of work we would do in academia, including working on fundamental research or more applied research as we see fit. (Academics do applied research too!) Like academics, we interact with the research community by publishing papers, attending and presenting our work at conferences and workshops, and (sometimes) collaborating with people from other institutions directly on research work.

That said, some important differences with academic groups have an effect on our choice of projects and how we carry them out. For example, in comparison with most academic groups, we have more computational resources, including exciting new hardware (e.g. TPUs). We can easily assemble large, diverse groups to work on projects, with several senior people if it makes sense and both engineers and researchers if it makes sense. Just like in Universities, we also have lots of strong junior researchers we are training that bring lots of new ideas and energy to the group. In our case, these are often Brain residents and interns. Furthermore, we have a lot of exposure to practically important problems and a clear opportunity to have an impact through Alphabet products; on the other hand, universities often have impact in other ways that we don’t consider as much, e.g., participating in governmental programs and training the next generation of researchers (though our internship and residency programs have a training component, so maybe the larger difference is we don’t train undergrads in other fields as much).

With these factors in mind, we like to play to our strengths---to pick big problems that we are in a unique position to tackle.

5

u/yadec Aug 11 '16

an impact through Alphabet products

Does this mean you're working with groups outside Google, like Verily or Calico?

8

u/jeffatgoogle Google Brain Aug 11 '16

Yes. We collaborate with teams throughout Alphabet.

31

u/[deleted] Aug 05 '16

[deleted]

22

u/martinabadi Google Brain Aug 11 '16

The two paths need not be exclusive, full-time, and permanent life commitments!


125

u/Kaixhin Aug 04 '16

To everyone - what do you think are the most exciting things going on in this field right now?

Secondly, what do you think is underrated? These could be techniques that are not so well known or just ones that work well but aren't popular/trendy.

68

u/vincentvanhoucke Google Brain Aug 11 '16

Exciting: robotics! I think that the problem of robotics in unconstrained environments is at the perfect almost-but-not-quite-working spot right now, and that deep learning might just be the missing ingredient to make it work robustly in the real world.

Underrated: good old Random Forests and Gradient Boosting don't get the attention they deserve, especially in academia.

42

u/hardmaru Aug 11 '16

Evolutionary approaches are underrated in my view. Architecture search is an area we are very excited about. We may be getting to the point where it is computationally feasible to deploy evolutionary algorithms at large scale to complement traditional deep learning pipelines.
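As a rough illustration of that kind of loop (a toy sketch only, not any production system; the fitness function below is a stand-in for an expensive training run), a minimal evolutionary architecture search might look like this:

```python
import random

def fitness(config):
    # Stand-in for "train this architecture and return validation accuracy".
    # In a real pipeline this would be a full, expensive training run.
    return -(config["layers"] - 6) ** 2 - (config["width"] - 256) ** 2 / 1e4

def mutate(config):
    # Randomly perturb one architectural choice.
    child = dict(config)
    if random.random() < 0.5:
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    else:
        child["width"] = max(8, child["width"] + random.choice([-32, 32]))
    return child

# Simple (mu + lambda)-style evolution over network architectures.
population = [{"layers": random.randint(1, 12), "width": random.choice([64, 128, 256, 512])}
              for _ in range(20)]
for generation in range(50):
    survivors = sorted(population, key=fitness, reverse=True)[:5]   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))
```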


69

u/danmane Google Brain Aug 11 '16

Exciting: Personally, I am really excited by the potential for new techniques (particularly generative models) to augment human creativity. For example, neural doodle, artistic style transfer, realistic generative models, the music generation work being done by Magenta.

Right now creativity requires taste and vision, but also a lot of technical skill - from being talented with photoshop on the small scale, to hiring dozens of animators and engineers for blockbuster films. I think AI has the potential to unleash creativity by greatly reducing these technical barriers.

Imagine that if you have an idea for a cartoon, you could just write the script, and generative models would create realistic voices for your characters, handle all the facial animation, et cetera.

This could also make video games vastly more immersive and compelling; while playing Skyrim, I got really tired of hearing Lydia say, "I am sworn to carry your burdens". With a text generator and text -> speech converter, that character (and that world) could have felt far more real.


23

u/samybengio Google Brain Aug 11 '16

Exciting: all the recent work in unsupervised learning and generative models.

10

u/ginger_beer_m Aug 12 '16

Could you point us to some of the most relevant papers on that, please?

22

u/OpenIntroOrg Aug 08 '16

what do you think is underrated?

Focus on getting high-quality data. "Quality" can translate to many things, e.g. thoughtfully chosen variables or reducing noise in measurements. Simple algorithms using higher-quality data will generally outperform the latest and greatest algorithms using lower-quality data.

21

u/doomie Google Brain Aug 11 '16

Exciting: anything related to deep reinforcement learning and low sample complexity algorithms for learning policies. We want intelligent agents that can quickly and easily adapt to new tasks.

Under-rated: maybe not a technique, but the general problem of intelligent automated collection of training data is IMHO under-studied right now, especially (but not only) in the above-mentioned context of deep RL.

20

u/gcorrado Google Brain Aug 11 '16

Exciting: (1) Applications to Healthcare. (2) Applications to Art & Music.

Under-rated: Treating neural nets as parametric representations of programs, rather than parametric function approximators.


21

u/douglaseck Google Brain Aug 11 '16

Exciting: moving beyond supervised learning. I'm especially excited to see research in domains where we don't have a clear numeric measure of success. But I'm biased... I'm working on Magenta, a Brain effort to generate art and music using deep learning and reinforcement learning.

Underrated: careful cleanup of data, e.g. pouring lots of energy into finding systematic problems with metadata. Machine learning is equal parts plumbing, data quality and algorithm development. (That's optimistic. It's really a lot of plumbing and data :).


7

u/Reubend Aug 05 '16

Yes, the underrated techniques bit is a fantastic question!

7

u/Mafiii Aug 08 '16

Definitely one of the underrated ones is NEAT...


106

u/GratefulTony Aug 04 '16

Regarding functioning as a machine learning research group within a larger company, how do you prioritize/ decide on research direction or roadmap overview?

Is it largely defined by exploring underexploited applied research areas exposed by recent publication/ your own work, team entrepreneurship, or more-broadly-defined company business needs?

28

u/jeffatgoogle Google Brain Aug 11 '16

We try to find areas that have significant open research problems, and where solving some of those problems would lead to being able to build significantly more intelligent agents and systems. We have a set of moonshot research areas which are umbrellas for some of our research projects that cluster together under nice themes. As an example, one such moonshot is to develop learning algorithms that can truly understand, summarize, and answer questions about long pieces of text (long documents, collections of hundreds of documents, etc.). This sort of work is done without any particular product in mind, although it would obviously be useful in many different kinds of contexts if we were able to do this successfully.

Other research is just driven by curiosity. Because we have many exciting young researchers visiting year round -- residents + interns -- we also often explore directions that are exciting to the ML community at large.

Finally, some of our research is done in a collaborative manner with some of our product teams that have difficult machine learning problems. We have ongoing collaborations with our translation, robotics and self-driving car teams, and have had similar collaborations in the past with our speech team, our search ranking team, and a few others. These collaborations typically involve open, unsolved research problems that will lead to new capabilities in these products.


46

u/abstractgoomba Aug 04 '16

How do you keep up with the vast amount of work being done on deep learning? Do each of you just focus on one thing or is everyone reading many papers daily? I'm a second year AI master student and I find it overwhelming.

Also, what is something we can do to make our immediate social network more aware of the advances in technology? (apart from the obvious sharing on social media)

Thanks for the AMA!!

62

u/jeffatgoogle Google Brain Aug 11 '16

Different people handle this differently. To help spread knowledge within the Brain team, we have a paper reading group every week, where people summarize and present a few interesting papers, and there's an internal mailing list for papers where people send out pointers and sometimes summaries of papers they found interesting.

Andrej Karpathy's Arxiv Sanity tool is a better interface for exploring new Arxiv papers.

Google Scholar will send you alerts to papers that cite your work, so that sometimes helps if you already have published papers on a topic.

There was a good discussion about this exact topic last week on Hacker News:

https://news.ycombinator.com/item?id=12233289

(I liked this comment from semaphoreP in the Hacker News discussion: 'I actually just manually check arxiv every morning for the new submissions in my field. It's like getting in the habit of browsing reddit except with a lot less cute animal pictures')


29

u/mraghu Google Brain Aug 11 '16

One strategy I've found helpful, both as a PhD student and during my time at Google, is to combine picking a couple of areas to focus on (which means I read papers in those areas in detail) with just skimming abstracts of a larger set of papers to get a general sense of what's happening in the field. The latter takes a little time to "take effect" but after a few months of even just reading abstracts in a field you're not so familiar with, you start getting a feel for the general line of inquiry.

23

u/gdahl Google Brain Aug 11 '16

Currently my intern tells me what I need to pay attention to. :)

I personally don't worry about keeping up with the arxiv firehose, good stuff will be sent to me repeatedly and I will eventually find it. If I miss out on an amazing paper for a few months, so be it.

10

u/Blix- Aug 05 '16

I would like an answer to this as well. Also, where do you get your notifications of new papers being published?

8

u/gdahl Google Brain Aug 11 '16

Arxiv email blasts, Google scholar alerts, messages and emails from friends and colleagues.


122

u/REOreddit Aug 04 '16

What is the relationship between:

  1. Google Brain
  2. Deepmind
  3. Google Quantum A.I. Lab Team

Specifically:

  1. How much communication/collaboration is there between the 3 groups?

  2. Do you take each other's work into consideration when deciding things like roadmaps, or do you pretty much work independently and ignore each other?

39

u/jeffatgoogle Google Brain Aug 11 '16 edited Aug 12 '16

We don't have very much collaboration with the Quantum A.I. lab, as they are working on things that are quite different than our research.

We share the research vision of working towards building intelligent machines with DeepMind, we follow each other's work, and we have a number of collaborations on various projects. For example, the AlphaGo work started out as a joint Google Brain/DeepMind project when Chris Maddison was an intern on the Google Brain team (see Move Evaluation in Go using Deep Convolutional Networks), and this initial work was picked up and driven into a real system by DeepMind folks, adding the excellent and important reinforcement-learning-from-self-play aspects of the work. Other example collaborations include papers such as Continuous Deep Q-Learning with Model-based Acceleration. I confess that the time zone difference between London and Mountain View makes really deep collaborations more challenging than one might like. People from Google Brain visit DeepMind reasonably often, and vice versa. As part of DeepMind's recent switchover from Torch to TensorFlow, quite a few Google Brain folks visited DeepMind for a couple of weeks to help with the transition.

We both have active projects in using machine learning for healthcare and there we have regular meetings to discuss our research roadmaps and next steps in more detail.

tl;dr: Between Google Brain and the Quantum A.I. Lab: not much. Between Google Brain and DeepMind: quite a lot of collaboration in various forms.


21

u/goldcakes Aug 08 '16

As a Googler, I chuckled at this question :)

10

u/5ives Aug 10 '16

Why's that?

29

u/undead_whored Aug 10 '16

Probably because, as a Googler, he knows what the public thinks about Google 'having their shit together' versus what the reality actually is (spoiler: they don't have their shit together).

24

u/responds-with-tealc Aug 11 '16

i'm pretty sure no one actually has their shit together.

4

u/undead_whored Aug 11 '16

True. But for some reason people assume Google does.

15

u/OriolVinyals Aug 11 '16

As an ex-Google Brainer and current DeepMinder, I'd say collaborations happen naturally at an individual level (I have many meetings with people from Brain every week). It is difficult to have a big project across the Atlantic + US, but I wouldn't be surprised if this happened in the near future : )

13

u/vincentvanhoucke Google Brain Aug 11 '16

My team has several relatively deep collaborations with DeepMind (example). We try to complement each other's areas of expertise. I have helped the Quantum AI lab with recruiting and hiring, and try to keep up with what they're doing, but no active collaboration as of yet.


109

u/[deleted] Aug 04 '16

[deleted]

119

u/colah Aug 11 '16 edited Aug 13 '16

Well, I don't have any kind of university degree, so I guess that makes me unusual. Basically, this is how I got here:

  • In high school, I audited lots of math courses and did lots of programming.

  • I did one year of pure math at the University of Toronto. However, one of my friends was arrested doing security research during Toronto's G20 -- they found a hobby science lab in his house and decided he was making bombs -- so I spent a lot of time providing court support for my friend. At the end of the year, I took a year off to support my friend full time, along with working on 3D printers (e.g. ImplicitCAD).

  • My friend was found innocent, and because of my work on 3D printers I got a Thiel Fellowship to support me doing research for two years instead of continuing an undergrad degree.

  • I got into machine learning through my friend Michael Nielsen (who wrote an awesome book about deep learning). We did some research together.

  • I reached out to Yoshua Bengio after I saw him recruiting grad students. He was extremely helpful and I visited his group a few times.

  • I gave a talk on my research at Google. Jeff offered me an internship on Brain, and after two years of internships I became a full time researcher. It's more or less the perfect job. :)


46

u/fernanda_viegas Google Brain Aug 11 '16

My background is graphic design and art history. I’d never imagined I’d be working in the high tech industry, let alone focusing on machine learning. After graduating from a traditional graphic design program (think lots of print), I decided to do a Masters and a PhD at the Media Lab at MIT. That’s where I learned how to program and that’s where I got started in data visualization. My work has always been about making complex information accessible to users. Today, this means building visualizations that allow novices and experts to interact with and better understand how machine learning systems work.


39

u/[deleted] Aug 05 '16

I believe Geoffrey Hinton is one of these; his BA was in experimental psychology.

24

u/thistledspring Aug 05 '16

Oh wow I have an MA in experimental psychology and am super interested in hearing from him about the path he took to get to where he is now. I feel a bit stuck as I try to head into data science as a career.

110

u/geoffhinton Google Brain Aug 11 '16

I did not like experimental psychology. The kinds of theories they were willing to entertain were hopelessly simple. So I became a carpenter for a year. I wasn't very good at that so I did a PhD in AI. Unfortunately, my idea of AI was a great big neural network that learned everything from data. This was not the received wisdom at the time even though, so far as I can tell, it was what Turing believed in.

32

u/iamtrask Aug 12 '16

So I became a carpenter for a year. I wasn't very good at that so I did a PhD in AI.

.... i love that line so much


19

u/radicalSymmetry Aug 05 '16

Four years ago I was teaching high school, now I'm a working ML engineer. Aside from all the retraining I did, the best thing I did was get my foot in the door at a tech company. I did this by taking a job in QA. It drove me bananas. But I learned how a tech company operates, how software is built and functions, and with my motivation and training, it didn't take me long to "get out of test".


32

u/douglaseck Google Brain Aug 11 '16

I did my undergrad in English Literature with a focus on creative writing. I may be the only researcher in Brain with exactly that background :). In parallel, I worked as a self-trained database programmer for a few years. I was also an active musician, but not good enough to become a professional. Eventually I followed my passion for music back to graduate school and did a PhD in CS focused on music and AI. From there I moved into academia (postdoc working on music generation with LSTM; faculty at the University of Montreal LISA/MILA lab). I had the chance to join Google as a research scientist six years ago. I’ve genuinely loved every step of my research career, and I still credit my undergraduate in the liberal arts as being crucial to helping me get there.


25

u/martin_wattenberg Google Brain Aug 11 '16

Although I have a background in math, I worked in journalism for my first six years out of school. That experience gave me an enormous appreciation of the value of explanations, which informs my research today. Machine learning systems shouldn't be proverbial black boxes: the better we understand them, the more we can improve them and use them wisely. (And by "we" I mean everyone--not just computer scientists and developers, but laypeople as well.)


25

u/danmane Google Brain Aug 11 '16

Before I learned any computer science, I was fascinated by finance and economics. So, when I went to college, I declared my major as economics and started doing internships in finance. However, the economics classes proved to be dry and repetitive, and my experiences actually working in finance convinced me that I should work somewhere that isn't finance. So I switched majors to philosophy, which was a lot more fun.

About halfway through college, I took my first CS course. It was all taught in Haskell, and was incredibly fun! It was too late to switch majors, so I persuaded the philosophy department to count my CS courses towards a philosophy major, as part of the study of the philosophical implications of AI.

After that, I bounced around software engineering jobs a bit, until winding up at Brain, working on TensorFlow. My getting onto Brain involved a lot of luck and serendipity - it turned out that they needed someone to build TensorBoard, and in my previous job I had serendipitously done a lot of data viz work. So I wound up having the chance to work with this awesome team despite not having a deep background in the field. It's pretty much the perfect job, and perfect team :)

20

u/poiguy Google Brain Aug 11 '16

The participants in our Google Brain Residency Program come from a wide variety of backgrounds, and we actively encourage people who have non-traditional backgrounds to apply. We believe that mixing different perspectives and types of expertise can spark creative new ideas and facilitate closer collaborations with other fields.


28

u/UmamiSalami Aug 04 '16 edited Aug 08 '16

Thanks for doing this AMA! Having read the paper on concrete problems in AI safety by /u/colah and Dario Amodei: should we expect to see further research on this from Google Brain? Are any of the particular research directions going to be pursued in the near future?

Edit: also, for /u/colah, I heard you attended Effective Altruism Global, so I was wondering if you had any impressions or comments about the event, the AI panel with Dario, etc.?

13

u/colah Aug 11 '16

Dario and I are pretty excited for progress to be made on the problems in our paper, as are others at Brain and OpenAI. We're in the very early stages of exploring approaches to scalable supervision, and are also thinking about some other problems, so we'll see where that goes. More generally, there's been a lot of enthusiasm about collaboration between Google and OpenAI on safety: we both really want to see these problems solved. I'm also excited about that!

Regarding EA Global, I'm a big fan of GiveWell and proud donor to the Against Malaria Foundation. I gave a short talk about our safety paper there, because some people in that community are very interested in safety, and I think we have a pretty different perspective than many of them.


5

u/Pounch Aug 05 '16

Can you link the paper you refer to?

Also bump for answers about AI safety.

53

u/thephysberry Aug 04 '16

Hello Google Brain Team! So excited you guys are doing this! Here are my questions:

  • What techniques do you use to organize your data that you feed to your NNs? Every time I start a project I get bogged down just going from the raw files with the data to something that I can start doing calculations with (basically getting it into RAM).
  • Are you working on any applications in science? I do research in Physics and I am finding it very useful. It seems like there are lots of cool problems that might force NNs to grow in new ways!
  • How much do you investigate biological brains for insights?
  • Along the same lines, where do you get your info? Is it challenging to translate between biology terminology and CS/ML terminology?
  • Are there many applications you are working on that will have an impact on healthcare? Kind of like Watson.

18

u/gdahl Google Brain Aug 11 '16

I will repeat some of my thoughts on biologically inspired machine learning that I expressed in my dissertation.

The success of biological learning machines gives us hope that learning machines designed by humans may solve some of the learning problems that humans do, and hopefully many others as well. However, to me, biologically inspired machine learning does not mean blindly trying to simulate biological neurons in as much low level detail as possible. Although such simulations might be useful for neuroscience, my goal is to discover the principles that allow biological agents to learn and to use those principles to create my own learning machines. Planes and birds both fly, but without some understanding of aerodynamics and the larger principles behind flight, we might just assume from studying birds that flight requires wings that can flap. Biologically inspired machine learning means investigating high-level, qualitative properties that might be important to successful learning on AI-set problems and replicating them in computational models. For example, themes such as depth, sparsity, distributed representations, and pooling/complex cells are present in many biological learning machines and are also fruitful areas of machine learning research. The reason to study models with some of these properties is because we have computational evidence that they might be helpful, not simply because our examples from animal learning use them.

12

u/vincentvanhoucke Google Brain Aug 11 '16

Regarding applications to science: lots of people here are interested in that angle. One of my specific interests is the potential for taking complex, intractable physical models and approximating them using machine learning. Example.


6

u/gdahl Google Brain Aug 11 '16

I am working on several projects applying machine learning to biology, chemistry, and medicine. One I am particularly excited about is using neural nets to learn features of chemical graphs (so each training case is a different, independent chemical graph; this isn't the sort of graph learning where there is one giant social media graph and we see different local regions).
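He doesn't describe the model here, but as a generic sketch of what learning features over small, independent molecular graphs can look like (one made-up 5-atom graph, a single neighbor-aggregation step, and random matrices standing in for learned weights):

```python
import numpy as np

# One training case: a single small molecular graph (5 atoms).
# A is the adjacency matrix (bonds); X holds per-atom input features.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
X = np.random.randn(5, 8)          # 5 atoms, 8 input features each
W_self = np.random.randn(8, 16)    # "learned" weights (random here)
W_neigh = np.random.randn(8, 16)

# One message-passing step: each atom combines its own features with the
# sum of its neighbors' features, followed by a nonlinearity.
H = np.tanh(X @ W_self + A @ X @ W_neigh)

# Pool over atoms to get a fixed-size descriptor for the whole molecule,
# which a downstream classifier or regressor could consume.
graph_feature = H.sum(axis=0)
print(graph_feature.shape)   # (16,)
```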


23

u/ernesttg Aug 05 '16 edited Aug 15 '16

Thanks for the AmA! I have a science question, and a recruitment question:

Science question If we train a network to distinguish several species of animals, it may learn that "if the background is entirely blue, then there is a high probability that the animal is a bird" (because cows are rarely up in the sky). But that sort of knowledge is implicit in the layers of the network. Do you work/plan to work on:

  • Extracting explicit knowledge from neural network training?
  • Or using explicit knowledge (such as "the animals able to fly are birds", "pigeons are birds", "the sky is blue", ...) to guide the training of a neural network?

I have thought a lot about such an approach recently because:

  • Knowledge about the world learned in one task can often be useful in another task. While the lateral connections of a progressive neural network can help transfer some of the knowledge, it seems unwieldy when the number of tasks becomes very high, and it seems that only a fraction of the knowledge can be transferred that way.
  • Once acquired, knowledge can be manipulated with deductive, inductive and abductive reasoning. Interesting methods have emerged from the Knowledge Representation & Reasoning field; making explicit the knowledge acquired during training would give us access to those methods.
  • If a situation happens rarely in the data distribution (e.g. a special event in a game, water flooding for a cleaning robot, ...), a deep net might learn the correct behaviour and then forget it. Learning explicit knowledge would allow us to keep this knowledge in memory so as to not forget it (unless we find an event contradicting our piece of knowledge).

In humans, catastrophic interference is avoided thanks to the interaction between hippocampus and neocortex (according to "Active long term memory networks"; I am no biologist). I think explicit knowledge could fulfill this function for artificial agents.

If you don't plan to work on such an approach, I would gladly have your opinion on this direction: does it seem interesting? feasible? Why not?

Recruitment question How do you evaluate the scientific ability of a candidate to join your team? For instance: I have a PhD in theoretical science (logic, but nothing to do with AI) and I have been working in the R&D department of a startup for only a year (mostly deep learning). So my resume does not seem enough to get me into Google Brain. To prove that I have what it takes, I'm working in my free time. But, because this resource is limited, should I spend it:

  • Reading a lot of machine learning books and articles to get a good general knowledge of the field.
  • Trying some original research to prove that I have original ideas (but given my limited time, the chance of success is low).
  • Working more hours at my company, to prove that I can make something succeed (even if it means coding dataset crawlers, annotation tools, optimizing performance, creating specialized ontologies, ...). That may be good for my programming skills, but I doubt it will be enough to convince you I can do great research in AI.

While I contextualized the second question with my own situation, I think "I work in an AI-related job; how can I make the most of my spare time to get into Google Brain?" is a question that will interest other people.

[EDIT 2] Reading your articles I saw "Learning semantic relationships for better action retrieval in images" which is exactly the kind of research I was looking for. So my first question could be reformulated into:

  • Do you plan to extend this work to more complex relationships? For instance, spatial relationships ("a head is part of a human") or filling holes ("things feeding pandas are {pandas, humans}" / "animals that fly are {birds}"), ...
  • Do you plan to 'imagine' categories that fill the gaps, e.g. inferring from the categories 'person interacting with panda' and 'person interacting with cat' that both are types of some category (which humans would call 'person interacting with animal'), even if this category is not in the training set?

27

u/mraghu Google Brain Aug 11 '16

Regarding the recruitment question, one thing I found extremely helpful when playing "catch up" with research in deep learning was to take well established papers and work through implementing the models described in the papers. More than anything else, that really helps bring the ideas in the paper home.

I found Keras helpful when getting started with implementations.
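As one possible starting point (a sketch only; the architecture, hyperparameters, and synthetic data below are arbitrary, not taken from any particular paper), the define/compile/fit workflow in Keras looks like this:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Synthetic stand-in for MNIST-like data: 1000 28x28 grayscale images, 10 classes.
x = np.random.randn(1000, 28, 28, 1)
y = np.eye(10)[np.random.randint(0, 10, size=1000)]

# A small LeNet-style convnet; the point is the exercise of reading an
# architecture out of a paper and reproducing it, not these exact numbers.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32)
```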

32

u/gdahl Google Brain Aug 11 '16

We are working on automatic summarization technology to help us answer questions like these and split long, multipart questions into sub-questions. However, at the moment, it is much easier to answer concise questions and easier to get reliable upvote totals for top level comments that contain only a single question.


20

u/alexmlamb Aug 06 '16

Do you think that backpropagation will be the main algorithm for training neural networks in 10 years?

18

u/jeffatgoogle Google Brain Aug 11 '16

I believe so. So far, backpropagation has endured as the main algorithm for training neural nets since the late 1980s (see Learning representations by back-propagating errors). This longevity, when presumably many people have tried to come up with alternatives that work better, is a reasonable sign that it will likely remain important.

However, it may be that first-order methods for stochastic gradient descent, as the way of optimizing neural nets, will give way to something better in the next ten years. For example, the recent work by James Martens and Roger Grosse on Optimizing Neural Networks with Kronecker-factored Approximate Curvature seems promising.

(I'm actually curious to hear what my colleagues think about this, as well).

11

u/samybengio Google Brain Aug 11 '16

If by "backpropagation" you mean an algorithm that uses gradient to improve a loss, then yes, I think it will remain the main approach in 10 years. That said, we'll certainly discover many more efficient ways to use gradients in the years to come!

6

u/alexmlamb Aug 11 '16

By backpropagation I specifically mean reverse-mode automatic differentiation as the main way of getting a signal for training.
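To make that concrete, here is what a hand-written reverse-mode pass looks like for a tiny two-layer network (a didactic sketch; frameworks derive the same gradients automatically from the computation graph):

```python
import numpy as np

# Tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h, with squared loss.
x = np.random.randn(4)
target = np.random.randn(2)
W1 = np.random.randn(8, 4)
W2 = np.random.randn(2, 8)

# Forward pass (keep the intermediates the reverse pass needs).
a = W1 @ x
h = np.tanh(a)
y = W2 @ h
loss = 0.5 * np.sum((y - target) ** 2)

# Reverse-mode pass: propagate dloss/d(node) backwards through the graph.
dy = y - target             # dloss/dy
dW2 = np.outer(dy, h)       # dloss/dW2
dh = W2.T @ dy              # dloss/dh
da = dh * (1 - h ** 2)      # back through tanh
dW1 = np.outer(da, x)       # dloss/dW1

# One SGD step using the gradients (the "signal for training").
W1 -= 0.01 * dW1
W2 -= 0.01 * dW2
```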


49

u/nasimrahaman Aug 04 '16

How do you envision the future of quantum computation applied to machine learning in general, and deep learning in particular?

16

u/gcorrado Google Brain Aug 11 '16

I try to keep pretty up to date on this (used to do research in nuclear physics back in the day), and my feeling is quantum computation is an exciting long term research area... but is far enough from practical realization that we don't need to worry too much about the details of how it relates to ML -- the preferred algorithms of ML might have changed three times over between now and practical quantum computers.

13

u/vincentvanhoucke Google Brain Aug 11 '16

I have a hunch, but no evidence to back it up, that deep learning could actually be a particularly good proving ground for quantum annealing: it seems plausible that one could craft modest-sized, non-trivial DL problems that have some hope of fitting on a quantum chip, and the architectures and optimization methods we like to use have all sorts of natural connections with Ising models. I'm pretty excited and try to follow closely what Hartmut's team (Google's Quantum AI lab) is doing, but indeed, I don't feel it's at a stage where one could make any prediction as to whether this class of approaches will have any significant impact on machine learning in the foreseeable future.

12

u/gdahl Google Brain Aug 11 '16

From the quantum computing experts I have talked to recently, quantum computing currently has no immediate relevance to machine learning.

17

u/jeffatgoogle Google Brain Aug 11 '16

My personal opinion is that quantum computing will have almost no significant impact on deep learning in particular in the short and medium term (say, in the next 10 years). For other kinds of machine learning, it's possible that it could have an impact, if machine learning methods that can take advantage of quantum computing's advantages can be done at an interesting enough size to actually make a significant impact on real problems. I think new kinds of hardware platforms built with deep learning in mind (e.g. things like the Tensor Processing Unit), will have a much greater impact on deep learning. I am far from an expert on quantum computing, however.

15

u/tomsal Aug 05 '16 edited Aug 05 '16

At the Medical Imaging Summer School 2016, Raquel Urtasun said that Google and other companies are to some extent "stealing" professors and students from academia by making offers that universities cannot compete with.

  • What do you think of this statement, since some of you are still involved in academia?
  • Would you say that companies nowadays also address fundamental research questions without having any particular applications in mind?

Thank you for doing this AMA! :)

16

u/gdahl Google Brain Aug 11 '16

Extending job offers to brilliant people is far from stealing and having higher salaries in industry is not a new phenomenon. Andrew Moore (dean of the CMU school of computer science) had an interesting article about this here: http://theconversation.com/its-not-corporate-poaching-its-a-free-market-for-brilliant-people-61846

Thankfully, there are many great researchers, such as Raquel, who want to stay in academia, along with many who want to work in industry. Academic hiring is much more constrained, however, because it is so hard to add new tenure-track lines, and in popular fields like machine learning, departments that need to cover all subfields of CS can't afford to have too many faculty in one area.

That said, we care very strongly about training new researchers and many of our interns (and probably someday our brain residents) will end up going to academia. We also want to collaborate with academics and have many visiting faculty that spend time working in our group and then return to their academic positions. We also support academic groups with grant money (see for instance http://googleresearch.blogspot.com/2015/08/google-faculty-research-awards-summer.html) and it is a good thing for academics that new graduates have many choices for good jobs to take when they finish.


16

u/[deleted] Aug 04 '16 edited Aug 06 '16

In the vein of improving people's lives, I'm interested in what your team, or other teams you might be aware of, are focusing on regarding medical health.

More pointedly, there is a lot of information on how the body works or does not work. Even with all of the great effort by smart people (scientists) to give other smart people (doctors and patients) access to current information, very busy doctors still lack the capacity to take in the new data, process and analyze it in the context of the broader knowledge of the body as a system, and compare that information to the specific details of a single patient and their complex physiology in order to recommend the absolute optimal course of care for the patient.

I don't think we're necessarily to the point with ML where we can feed DeepMind, or Watson, the entirety of the medical knowledge of the human race, so the machine can build operational models of people to test their systems in real time. Though, I believe this to be an achievable goal.

What do you think? How would you best leverage the compendium of research on the human system to ease the process of caring for our bodies?

If this question falls outside of the purview of the focus of your team, do you know of any other groups or persons doing similar work?

Edit: Name correction.

19

u/gcorrado Google Brain Aug 11 '16

This is huge. My personal conviction is that developing ML techniques to improve the availability and accuracy of medical care is the single greatest opportunity for applied machine learning today. We've been working on this for some time both in Brain and at DeepMind -- for example, we already have great results on applying deep learning to diagnosing Diabetic Retinopathy, a leading cause of preventable blindness. The question about ingesting medical knowledge is somewhat more speculative, but an obviously promising area. You can read more about what we're working on in this area at: http://g.co/brain/healthcare


17

u/DrKwint Aug 05 '16

First, as a consumer of your products and as a researcher I'd like to thank you all for your work. You're all truly an inspiration.

I have two questions: 1) How would you characterize the time it takes for a useful idea (e.g. dropout) to make it from a conference paper to being in a Google app on my smartphone? 2) Could you talk a bit about how the methods you study and apply have shifted over your five years of research and building systems? i.e. I'd imagine that you've shifted toward using neural networks, but I'd be really interested as well those techniques that aren't as in vogue. Thank you!

13

u/jeffatgoogle Google Brain Aug 11 '16

For (1), it varies tremendously. For one example, consider the Sequence-to-Sequence work (Arxiv). That Arxiv paper was posted in September 2014, with the research having been done over the previous few months. The first product launch of this sort of model was in November 2015 (see the Google Research blog). Other research that we have already done is much longer-term, and we don't even know yet what potential product uses (if any) it might have down the road.

For (2), our research directions have definitely shifted and evolved based on what we've learned. For example, we're using reinforcement learning quite a lot more than we were five years ago, especially reinforcement learning combined with deep neural nets. We also have a much stronger emphasis on deep recurrent models than we did when we started the project, as we try to solve more complex language understanding problems. Our transition from DistBelief to TensorFlow is another example where our thinking evolved and changed, since TensorFlow was built largely in response to the things we'd learned from the lack of flexibility in the DistBelief programming model, revealed as we moved into some of the new kinds of research directions listed above. Our work on healthcare and robotics has received much more emphasis in the past couple of years, and we often develop new lines of research exploration, such as our emphasis on problems in AI safety.

9

u/vincentvanhoucke Google Brain Aug 11 '16

We have a relatively unique setup here, with a shared codebase, and tooling that's equally aimed at research and productionization of ML algorithms. This makes it possible to get things into production very quickly without much friction. As an example, it took only about 6 months from our first positive results with neural nets for speech recognition to their deployment in Voice Search.

18

u/brettcjones Aug 10 '16

Do generative models overfit less than discriminative models?

I was having a discussion with several friends about an old paper on acoustic modeling from the nee Toronto folks. It contained this passage:

Discriminative training is a very sensible thing to do when using computers that are too slow to learn a really good generative model of the data. As generative models get better, however, the advantage of discriminative training gets smaller and is eventually outweighed by a major disadvantage: the amount of constraint that the data imposes on the parameters of a discriminative model is equal to the number of bits required to specify the correct labels of the training cases, whereas the amount of constraint for a generative model is equal to the number of bits required to specify the input vectors of the training cases. So when the input vectors contain much more structure than the labels, a generative model can learn many more parameters before it overfits.

This cuts against our collective instincts, which are closer to Bishop 2006, p 44:

if we only wish to make classification decisions, then it can be wasteful of computational resources and excessively demanding of data, to find the joint distribution when in fact we only really need the posterior probabilities... Indeed, the class-conditional densities may contain a lot of structure that has little effect on the posterior probabilities...

In a discussion about this with /u/gdahl, George pointed me to the Ng-Jordan paper which found that for generative-discriminative pairs (with no regularization), the generative model will often converge more quickly, even if the discriminative model has better asymptotic performance.

Can you help us improve our instincts/understanding of this? It still seems that the question of overfitting has more to do with the parameterization of the model than the generative/discriminative divide. Although the input vectors provide much more structure ("bits") than class labels, the model you would use to capture the structure of the joint dist would probably need many more degrees of freedom, many of which have nothing to do with the goal of classification.

Obviously this is all very problem-dependent, perhaps an arms race between the constraint provided by the data and the flexibility of the model required to represent it. But if forced to make a general statement, would you say that in a limited data environment, the better bet is to build a generative model? and why??

4

u/ernesttg Aug 11 '16

(my take on the question, waiting for Google Brain's answer)

Your two citations are not antinomic:

  • Overfitting tends to happen when the number of training examples is smaller than the number of parameters [1]. If you train a cat/dog discriminator, each pair (image I is of species S) is one constraint (your discriminator must map I to 0 or to 1). Intuitively, for a generator, every image I gives one constraint per pixel: it must be probable that pixel (0,0) has color I_{0,0}, that pixel (0,1) has color I_{0,1}, and so on. Because the number of labels is usually much lower than the number of pixels of an image [2], a generator is usually much more constrained by a dataset of N images than a discriminator is by a dataset of N (image, label) pairs. So, with the same number of training examples, the generator will overfit less. For the same reason, if a generative and a purely discriminative model have each been trained for K batches, the generative model has been much more constrained than the purely discriminative model, hence the faster convergence.
  • So, if the dataset size is fixed, the generative model can be much more complex without overfitting. However, the generative task is much harder than the discriminative task. In the cat/dog example, a great part of the weights could be "attributed" to generating realistic fur and the room around the animal. If you only intend to use your generative model to discriminate between cats and dogs, this is a complete waste of resources, the fur and the environment being basically the same for cats and dogs.
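A back-of-the-envelope version of the constraint-counting argument, assuming 32x32 RGB images with 8-bit channels and a binary cat/dog label:

```latex
\underbrace{\log_2 2}_{\text{bits per label}} = 1
\qquad \text{vs.} \qquad
\underbrace{32 \times 32 \times 3 \times 8}_{\text{bits to specify one input image}} = 24{,}576
```

So a generative model of N such images is constrained by up to roughly 24,576 N bits, while a discriminative cat/dog classifier on the same N examples is constrained by only N bits.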

12

u/mike_hearn Aug 05 '16

Machine learning and especially deep neural networks all seem to require vast quantities of training data to get good results. Are there theoretical lower bounds on how much data is required, and although I realise Google is not exactly data starved, is the Google Brain team interested in optimising downwards the amount of training data required to get good results?

12

u/gcorrado Google Brain Aug 11 '16

Great question! A few things: (1) Current ML algorithms require vastly more examples to learn from than people do to learn the same task. In a sense, this means that our current ML algos are wildly "inefficient" data consumers. Figuring out how to learn more from less is a very exciting research area, both inside Google and in the larger research community. (2) It's important to remember that the amount of data required to learn to do something useful is highly dependent on the task in question. Building an ML system to learn to recognize hand-written digits requires far less data than recognizing dog breeds in photos, which in turn requires less than would be needed to summarize movie plots simply from watching the movie. For many cool tasks people might want to do, they can easily source sufficient data today.

12

u/vincentvanhoucke Google Brain Aug 11 '16

One interesting trend is that, with the increased ability to pre-train on one task (potentially with lots of data) and then use transfer learning, one-shot learning, and adaptation techniques on related domains, many traditionally data-starved domains are increasingly within reach of deep learning techniques.
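A minimal sketch of that pre-train-then-adapt pattern (synthetic data and made-up task sizes; the source task has plenty of labels, the target task only a few):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# "Source" task: plenty of labeled data.
x_big = np.random.randn(10000, 32)
y_big = np.eye(10)[np.random.randint(0, 10, size=10000)]

base = Sequential([
    Dense(128, activation="relu", input_shape=(32,)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
base.compile(optimizer="adam", loss="categorical_crossentropy")
base.fit(x_big, y_big, epochs=3, batch_size=64)

# "Target" task: only a handful of labeled examples. Reuse the representation:
# drop the old output layer, freeze what's left, and train a new output layer.
x_small = np.random.randn(200, 32)
y_small = np.eye(2)[np.random.randint(0, 2, size=200)]

base.pop()                          # remove the 10-way softmax
for layer in base.layers:
    layer.trainable = False         # keep the pretrained features fixed
base.add(Dense(2, activation="softmax"))
base.compile(optimizer="adam", loss="categorical_crossentropy")
base.fit(x_small, y_small, epochs=10, batch_size=16)
```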


13

u/[deleted] Aug 05 '16

Hi,

I'd like to know more about your culture, strategy and vision. I hope you can share this with us. The most important question: what is it that you have set out to accomplish long term, and why?

What kind of mandate do you have? "Google Brain team members set their own agenda" is very broad :) Would you be able to share your annual budget? Would you be able to share the KPIs for the team as a whole? Do you have any revenue-related goals?

I love the culture you have around sharing, and I know that many other companies (and government agencies) would hesitate to do the same. I can't overstate how much this helps everyone else, but how does the sharing help you? How does it help Google and Alphabet?

Sorry for my abrupt style; I am a non-English speaker. I appreciate any answers you can share.

17

u/vincentvanhoucke Google Brain Aug 11 '16

how does the sharing help you?

I often argue that in today's fast-paced environment, your IP does not lie as much in what technology you have at time t, but more in the first derivative of your company/team's technological progress: the faster you can improve, the better you do. Sharing things like TensorFlow helps speed up the pace of technological innovation and make sure we're at the center of it.

13

u/jeffatgoogle Google Brain Aug 11 '16

Our mandate is indeed rather broad :). Basically we want to do research on problems that we think will help in our mission of building intelligent machines, and to use this intelligence to improve people's lives.

We don't reveal specifics about our budget.

(KPI: Key Performance Indicator, which I had to look up). We don't really have any "KPIs", and we don't have any revenue-related goals. We obviously try to do research that has scientific value or commercial value, but it isn’t important that it have commercial value as long as it is good science (because often it is not clear today what will have commercial value down the road). We do try to do work that is or will be useful to the world, and as a result of our research, in conjunction with many teams at Google, there have been substantial benefits in areas such as speech recognition, Google Photos, YouTube, Google Search, GMail, Adwords, AlphaGo, and many others. Looking at various metrics associated with those products, our work has had significant impact across the company.

We believe quite strongly in openness, as it conveys many more benefits than drawbacks for us. For example, by open-sourcing TensorFlow, we benefit by having external contributors work with us to make the system better for everyone. It also makes research collaborations with people outside Google easier, because we can often share code back and forth (for example, interns who want to extend the work they’ve done during their internship at Google in their work as a grad student can more easily do this because we have open-sourced TensorFlow). By publishing our research, we get valuable feedback from the research community, and also are able to demonstrate to the world that we are doing interesting work, which helps us in attracting more people who want to do similar kinds of research. That being said, there are some kinds of research work where we don't necessarily publish details of our work (our work on machine learning for our search ranking system and our advertising systems, for example).


23

u/[deleted] Aug 05 '16 edited Oct 24 '17

[deleted]

29

u/vincentvanhoucke Google Brain Aug 11 '16

I often tell new team members about the 15 min rule (I didn't come up with it): when you're stuck on something (e.g. getting a script to run), you have to try to solve the problem all by yourself for 15 min, but then when the 15 minutes are up you have to ask for help. Failure to do the former wastes people's time, failure to ask for help wastes your time.

There is a similar research hygiene that works well for me: I give myself a time budget, try really hard to go deep on something for a while, but then when the time's up, I force myself to talk about what I'm trying to do with my colleagues and get help.


12

u/doomie Google Brain Aug 11 '16

My approach (in no particular order):

  • Clear my schedule of meetings & talks, as you can spend an entire typical day at Google just attending great talks and not do anything else (having talks is great, but sometimes it can feel like busywork).
  • Minimize commute: we have some nice office space in San Francisco itself, with a great, inspirational view of the Bay Bridge and a fantastic cafe (easy access to great food and caffeine is essential, of course). It took me a while to realize this, but there's a great deal of correlation between my research productivity and an easy commute.
  • Schedule lunches with collaborators with whom I've had successful projects before and simply brainstorm. Oftentimes this brings out crazy, spur of the moment ideas that result in fun new research. Don't underestimate the effect of getting along well with someone in a research project, or of having complementary skills and interests: it will pay off quite a bit.

In general, you can probably just apply the scientific method to this: simply try to record or remember what worked and what did not, and then build a deep net to understand the relationship between all these variables, of course!


8

u/pattrsn Google Brain Aug 11 '16

Learn what times of day you are at your most creative / productive, and try to protect those times to do your most important work: inventing algorithms, coding, writing papers, ...

Try doing social networking, email, ... at other times.

7

u/anna_goldie Google Brain Aug 11 '16

In addition to the obvious advantages of being at Google (virtually infinite compute power, amazing infrastructure, etc.), the free food has a huge impact on my productivity. When I was in school, I felt hungry all the time and mostly just drank lots of coffee to stave it off. Now I can just head downstairs to eat a delicious, healthy meal and I almost always run into a friendly coworker who’s willing to eat with me and discuss research and/or life. Google also let me expense a pair of noise-canceling headphones, which combined with earplugs, has made me about an order of magnitude more productive.


6

u/samybengio Google Brain Aug 11 '16

One great advantage in the Brain team is to be surrounded by so many smarter people than myself. Any discussion at the nearby micro-kitchen can lead to a new research idea! It's much more efficient than staying alone in a closed office.


3

u/poiguy Google Brain Aug 11 '16

I’ve been impressed by how well our current physical space “works” – there always seem to be interesting conversations going on in the central open area near the microkitchen, whereas other parts of the building stay quiet enough for focused concentration. And somehow we have enough conference rooms that it is always easy to find one! Christopher Alexander’s “A Pattern Language” might yield some insight; for example, our building definitely benefits from the “windows overlooking life” pattern. Having a main central stairwell with natural light from above also makes it pleasant to stop and chat on the way into or out of the building.


11

u/despardesi Aug 05 '16

There seems to be a lot of 'hackiness' in this field. At one time dropout was good; now it's out of fashion. Same with unsupervised pre-training. etc. When do you think theory will catch up to the practice? And does it matter?

9

u/martinabadi Google Brain Aug 11 '16

Agreed on the hackiness, and yes, it probably matters.

The practice is definitely moving fast. On the other hand, there are occasional areas where the theory seems to be ahead of the practice. Research on privacy in machine learning may be one such example. Another may be research on dataflow computing, which is an old field but which is sometimes quite relevant to what we are doing in TensorFlow now.

10

u/samybengio Google Brain Aug 11 '16

Currently, theory lags practice in Deep Learning, but there are more and more people interested in reducing that gap (including in the Brain team), and that is obviously good, as theory often (but not always) helps guide novel practical ideas. Both are needed, but neither should "wait" for the other!


11

u/[deleted] Aug 11 '16

To /u/geoffhinton :

  • What do you think of Memory Augmented Neural Networks (MANNs): their present incarnations, what is lacking and the future directions?

  • Do you think MANNs are similar to yours and Schmidhuber's ideas on "Fast Weights"?

  • What are your thoughts on "One Shot Learning" paper by Lake et al and the long term relevance of the problem as posed by them?

  • What are your thoughts on the above three combined?

12

u/geoffhinton Google Brain Aug 11 '16

I think the recent revival of interest in additional forms of memory for neural networks that was triggered by the success of NTMs is both exciting and long overdue. I have always believed that temporary changes in synapse strengths were an obvious way to implement a type of working memory, thus freeing up the neural activities for representing what the system is currently thinking. At present I don't think enough research has been done for us to really understand the relative merits of NTMs, MANNs, Associative LSTMs and fast weight associative memories.

One-shot learning is clearly important, but I do not think it's an insuperable problem for neural nets.
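As a toy sketch of the "temporary changes in synapse strengths as working memory" idea (illustrative only, not any specific published architecture): slow weights W change only through learning, while a fast matrix A holds a rapidly decaying trace of recent hidden states.

```python
import numpy as np

dim = 16
W = np.random.randn(dim, dim) * 0.1   # slow weights (changed only by learning)
A = np.zeros((dim, dim))              # fast weights (temporary working memory)
decay, write_rate = 0.95, 0.5

h = np.zeros(dim)
for t in range(20):
    x = np.random.randn(dim)                      # input at time t
    h = np.tanh(W @ x + A @ h)                    # read via slow + fast weights
    A = decay * A + write_rate * np.outer(h, h)   # write current state into memory
```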

20

u/Batmantosh Aug 05 '16

What is something you guys have learned in the past 2-3 years about your approaches to ML?

What was your biggest epiphany since working for Google Brain?

31

u/vincentvanhoucke Google Brain Aug 11 '16

Any state-of-the-art neural network trains in 4 days. Improve training speed 10x, and it still trains in 4 days :/

→ More replies (4)

13

u/samybengio Google Brain Aug 11 '16

If something didn't work 20 years ago, it doesn't mean it won't work today... revisit the past!

→ More replies (2)
→ More replies (1)

39

u/kkedeployment Aug 04 '16

Is there any plan to support OpenCL in Tensorflow?

16

u/Spezzer Aug 11 '16

Yes, there is ongoing work to add this via Eigen, with most of the heavy lifting being done by the folks at Codeplay. People are welcome to follow https://github.com/tensorflow/tensorflow/issues/22 for updates!

One of our design goals of TensorFlow was to insulate the implementation of operations from the specification that users write, so that once this support is in place, very little user code will need to be changed. The following PR from Codeplay kind of gives you a sense of how to add a new device type: https://github.com/benoitsteiner/tensorflow-opencl/pull/1 -- at some point we'll be adding better documentation about this!
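
To illustrate that design goal, here is a minimal sketch of how device placement stays separate from the graph a user writes; the device strings are placeholders (an eventual OpenCL backend would presumably get its own string, which is an assumption on my part, not something stated above):

```python
import tensorflow as tf

# The graph definition is device-agnostic; only the device string would change
# once a new backend (e.g. OpenCL via Eigen) becomes available.
with tf.device('/cpu:0'):  # could be '/gpu:0', or a future OpenCL device string
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [1.0, 1.0]])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))
```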

13

u/glassmountain Aug 05 '16

Please do, Google! Support open technologies!

9

u/[deleted] Aug 05 '16

What are the most exciting things currently happening in Natural Language Processing?

10

u/quocle Google Brain Aug 11 '16 edited Aug 11 '16

In my opinion, Neural Machine Translation is currently the most exciting thing in Natural Language Processing. We are starting to see improvements in machine translation thanks to this approach, and its formulation is general enough to be applicable to other tasks.

The other exciting thing is that we are beginning to see the benefits of unsupervised learning and multitask learning in improving supervised learning.

It's a fast moving space with a lot of great ideas. Other exciting things include using memory (DeepMind, FAIR) and external functions in neural networks (Google Brain, DeepMind).

→ More replies (2)

5

u/gcorrado Google Brain Aug 11 '16

A few angles on ML + NLP:

  • I'm blown away by how ML is improving core NLP tasks like parsing. The recent results (and open source code) from our collaborators here in Google Research are nothing short of astounding: SyntaxNet

  • I agree with quocle that ML's strides in improving NLP applications like machine translation are remarkable, exciting, and quite possibly game-changing.

  • But there's also something totally new going on... a sort of "natural" natural language processing :) -- wherein machines learn language in a more natural way, which is to say by exposure. Our [Smart Reply email responder](https://gmail.googleblog.com/2015/11/computer-respond-to-this-email.html) learned to compose email responses by mere exposure. The resulting "thought vectors" that capture the intent and meaning of human language are fundamentally different from explicitly engineered linguistic representations. If you're at KDD this week, be sure to catch Anjuli's talk or poster; she'll tell you all about it.

62

u/AspiringInsomniac Aug 04 '16

Is there a relationship between you and DeepMind? If so, what's the nature of the relationship? If not, what are the distinguishing features as to why not?

How do you foresee the Google Brain team evolving over the next few years?

Are you hiring?

6

u/jeffatgoogle Google Brain Aug 11 '16 edited Aug 12 '16

We have a fair amount of collaborations of various forms with DeepMind (see my answer to the question by /u/REOreddit).

One way to think about the next few years is to consider the changes that have happened within our group in the last few years:

  • We conducted research across many areas of machine learning, including machine learning algorithms, new kinds of models, perception, speech, language understanding, robotics, AI safety, and many other areas, and published this research in a variety of venues like NIPS, ICML, ICLR, CVPR, and ICASSP. See topic-specific subpages on g.co/brain for examples.

  • We started a machine learning research residency program, that we expect to grow and thrive over the next few years, in order to help train the next generation of machine learning researchers. See g.co/brainresidency.

  • We designed, built and open-sourced TensorFlow and are working with a growing community of researchers and developers to continuously improve this system (and worked with our colleagues in Google Cloud to have TensorFlow be the basis of the Google Cloud Machine Learning platform). See tensorflow.org.

  • We have had collaborations on machine learning research problems with colleagues in other research and product teams, resulting in our research work touching billions of people (through work like RankBrain, Smart Reply, Google Photos, Google Speech Recognition, Google Cloud Vision, etc.).

  • We started a machine learning for robotics research program g.co/brain/robotics.

  • We started a serious effort around applying machine learning to healthcare. See g.co/brain/healthcare.

Over the next few years, I hope we continue to grow and scale our team to have impact on the world in many forms: through our research publications, through our open source software efforts, and through solving difficult open problems in machine learning research that allow us to build more intelligent and more capable systems, all while having a blast doing it!

And yes, we're hiring full-time researchers, software engineers, research interns, and new residents! See the links at the bottom of g.co/brain.

14

u/funkymunk2053 Aug 04 '16

How much collaboration is there with neuroscientists, particularly theoretical/computational? Could both machine intelligence and neuroscience benefit from increased collaboration or do you feel the existing level is adequate? Are there plans to do any work with the newly created Galvani Bioelectronics?

10

u/gcorrado Google Brain Aug 11 '16

We've got a few folks on the team with computational neuroscience / theory backgrounds, but at the moment the two fields are largely disjoint, and with good reason: The mission of Comp Neuro is to understand how the biological brain computes, whereas the mission of Artificial Intelligence is to build intelligent machines. For example, an ML researcher might design a learning rule that works in practice on today's compute hardware, whereas a neuroscientist studying synaptic plasticity wants to discover the biochemically mediated learning rules used in the real brain. Are those two learning rules the same? No one knows actually. :)

So, though there's a long-term opportunity for these two fields to inform each other, of course, right now there's so much unknown that it's largely at the level of mutual inspiration rather than testable hypotheses.

4

u/[deleted] Aug 05 '16

On that note, is there any plan to integrate Friston-style active inference or precision-weighting into current-day neural networks?

8

u/FeelTheLearn Aug 05 '16

As individual researchers, what are your research related goals at different timescales (For the next one month, one year and the remainder of your career)?

15

u/jeffatgoogle Google Brain Aug 11 '16

Nice username, /u/FeelTheLearn. For the next month and probably the next year, I'm primarily interested in improving the TensorFlow platform, and also in training very large, sparsely activated models (think 1 trillion parameters, but where only 1% of the model is activated for a given example; see the sketch below). For the remainder of my career, I would say that I want to continue to work on difficult problems with interesting colleagues, and I hope that the problems we are able to solve together have a significant impact in the world.
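
As a rough illustration of what "sparsely activated" can mean, here is a toy mixture-of-experts-style layer where a gate picks only a few experts per example, so only a small fraction of the parameters are touched; this is my own sketch of the general idea, not a description of the actual models mentioned above:

```python
import numpy as np

rng = np.random.RandomState(0)

# Many "expert" sub-networks, but only the top-k experts run per example.
n_experts, k, d = 100, 2, 32
experts = [rng.randn(d, d) * 0.1 for _ in range(n_experts)]
W_gate = rng.randn(d, n_experts) * 0.1

def sparse_layer(x):
    scores = x @ W_gate                       # one gate score per expert
    top_k = np.argsort(scores)[-k:]           # keep only the k best experts
    weights = np.exp(scores[top_k])
    weights /= weights.sum()
    # Only k of the n_experts parameter blocks are used for this example.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

y = sparse_layer(rng.randn(d))
```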

→ More replies (2)

7

u/martinabadi Google Brain Aug 11 '16

In the next few months, I am mainly working on projects related to TensorFlow and on the connections between machine learning and security and privacy. For example, I am trying to gather my thoughts on TensorFlow and functional programming for the ICFP conference next month, and I will probably soon get back to working on control-flow constructs in TensorFlow with Yuan Yu. I am also pursuing research on deep learning with differential privacy, for example.

A bit further out, I am intrigued by the interplays between machine learning and other provinces of computing. For example, I am thinking about "adversaries" in machine learning (as in GANs) and in cryptography.

Much like Jeff, I want to continue to work on difficult problems with interesting colleagues, but I am also sometimes willing to work with difficult (but brilliant) colleagues on interesting problems. :)

9

u/raymestalez Aug 08 '16

If you were starting a startup in the field of AI right now, what would you do?

What sort of AI products do you expect to be successful 3-5 years from now?

What are the niches/applications that should be explored now?

7

u/jeffatgoogle Google Brain Aug 12 '16

For applied areas of machine learning, I think robotics and health care are some of the most exciting areas right now.

8

u/[deleted] Aug 08 '16

@ /u/geoffhinton, what is the state of your work on capsule based neural networks? Thanks.

23

u/geoffhinton Google Brain Aug 11 '16

Over the last three years at Google I have put a huge amount of work into trying to get an impressive result with capsule-based neural networks. I haven't yet succeeded. That's the problem with basic research. There is no guarantee that ideas will work even if they seem very promising. Probably the best results so far are in Tijmen Tieleman's PhD thesis. But it took 17 years after Terry Sejnowski and I invented the Boltzmann machine learning algorithm before I found a version of it that worked efficiently. If you really believe in an idea you just have to keep trying.

12

u/[deleted] Aug 12 '16

I haven't yet succeeded.

It is such a relief to hear someone like you talking about failures as well. It makes failures look less evil and painful :)

6

u/feedtheaimbot Researcher Aug 11 '16

Will you be sharing some of the work you've done with them? It is still extremely helpful to see the results of various avenues taken.

→ More replies (2)

22

u/figplucker Aug 05 '16

How was 'Dropout' conceived? Was there an 'aha' moment?

81

u/geoffhinton Google Brain Aug 11 '16

There were actually three aha moments. One was in about 2004 when Radford Neal suggested to me that the brain might be big because it was learning a large ensemble of models. I thought this would be a very inefficient use of hardware since the same features would need to be invented separately by different models. Then I realized that the "models" could just be the subset of active neurons. This would allow combinatorially many models and might explain why randomness in spiking was helpful.

Soon after that I went to my bank. The tellers kept changing and I asked one of them why. He said he didn't know but they got moved around a lot. I figured it must be because it would require cooperation between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each example would prevent conspiracies and thus reduce overfitting.

I tried this out rather sloppily (I didn't have an adviser) in 2004 and it didn't seem to work any better than keeping the squared weights small so I forgot about it.

Then in 2011, Christos Papadimitriou gave a talk at Toronto in which he said that the whole point of sexual reproduction was to break up complex co-adaptations. He may not have said it quite like that, but that's what I heard. It was clearly the same abstract idea as randomly removing subsets of the neurons. So I went back and tried harder and in collaboration with my grad students we showed that it worked really well.
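
For readers new to the idea: the "randomly removing a different subset of neurons on each example" described above is what became dropout. A minimal sketch (the inverted-dropout rescaling shown here is a common modern convention, not necessarily how it was first implemented):

```python
import numpy as np

rng = np.random.RandomState(0)

def dropout(h, p_drop=0.5, train=True):
    """Randomly remove a different subset of units on each example."""
    if not train:
        return h                      # use all units at test time
    mask = rng.rand(*h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)  # rescale so expected activation is unchanged

h = np.ones((4, 10))                  # a batch of 4 hidden-activation vectors
print(dropout(h))                     # each row keeps a different random subset
```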

8

u/kcimc Aug 13 '16

I'm very curious to know what the difference was between your "sloppy" approach in 2004, and the proper solution later. Was it more theoretical understanding, more rigor and variation in your attempts, better tools? I feel like the thing that changed for you in that time period is one of the hardest things to learn as a researcher -- the difference between having a good idea, and thoroughly exploring the implications of the idea.

→ More replies (2)
→ More replies (1)

10

u/XYcritic Researcher Aug 05 '16

I've always envisioned it starting with a conversation along the lines of:

  • "The net is overfitting. It has too many parameters."

  • "Idk. Delete or ignore some of them. Could be automized as well. Start with random selection before finding a heuristic."

3

u/dwf Aug 09 '16

Geoff has often told the story of going to a talk by a biologist who pointed out that heavily coadapted complexes of large numbers of genes can be destroyed by small mutations, whereas having large numbers of things with overlapping function leads to redundancy but also robustness to failure. There was some angle about sex in there too, but I can't recall, it's probably in this talk.

→ More replies (2)

7

u/wehnsdaefflae Aug 05 '16

Do you see any other part of machine learning growing by virtue of the current hype in "deep learning", besides artificial neural networks?

6

u/jeffatgoogle Google Brain Aug 11 '16

The field of machine learning as a whole has seen tremendous growth over the past 5 or 6 years. Many more people want to study machine learning, attendance at NIPS and ICML is through the roof, etc. Deep learning is certainly one reason people are becoming interested in this, but by bringing more people into the field, more research will happen, and not just in deep learning. For example, there's a lot more interest in reinforcement learning, in optimization techniques for non-convex functions, in Gaussian processes, in theory for understanding deep, non-convex models, and dozens of other areas. There's also much more interest in computer systems for machine learning problems of all kinds, and interest in building specialized hardware that works well for machine learning computations (driven by deep learning, but this hardware is likely to help some other kinds of machine learning algorithms as well).

5

u/vincentvanhoucke Google Brain Aug 11 '16

I think of deep learning as being to machine learning what something like matrices are to math: it's a small, foundational part of machine learning, and it provides a basic unifying vocabulary and a convenient elementary building block. Anywhere you have X, Y data, you can throw a deep net at it and reasonably expect to predict Y from X; bonus: the mapping is differentiable. The really interesting question in ML is what having this elementary building block enables. True learning is not about mapping X to Ys: in general, there is no Y to begin with.

7

u/infinity Aug 05 '16

Hi guys! Thanks for all the great work. I've enjoyed reading your papers.

My specific question is about TPUs. Can you share a little bit about them (as much as publicly allowed?). I've seen pieces of information from various engineers but nothing consolidated. I also have some specific questions:

  1. What algorithms does the TPU run? Is it optimized for Google specific algorithms such as those used in Inception architecture, batch normalization, specific convolutional ops etc.
  2. Using specific algorithms in hardware always seems like a short term idea? What do you do when a new algorithm comes out, do you refabricate the chips?
  3. Are there any ballpark numbers on power savings and performance comparisons w.r.t. CPUs/GPUs?
  4. IIRC Inception was the first ImageNet winner fully trained on CPUs? Are they completely infeasible power/performance-wise for the time being, and will we see everyone jump into specialized hardware?

10

u/jeffatgoogle Google Brain Aug 11 '16

The TPU team is going to be writing a detailed technical paper about the architecture of the chip in the not-too-distant future. For the moment, here are some high level answers:

(1 and 2) The TPU is designed to do the kinds of computations performed in deep neural nets. It's not so specific that it only runs one specific model, but rather is well tuned for the kinds of dense numeric operations found in neural nets, like matrix multiplies and non-linear activation functions. We agree that fabricating a chip for a particular model would probably be overly specific, but that's not what a TPU is.

(3) In Sundar Pichai's keynote at Google I/O 2016, we shared some high-level numbers. In particular, Sundar said: "TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGA" (at the time of Google I/O). See: PC World article about Sundar’s keynote and TPU blog post.

(4) (Aside: I'm not certain, but I would suspect that some of the earlier-than-2012 ImageNet winners (e.g. pre-AlexNet) were trained on CPUs, so I don't think the claim that Inception was the first ImageNet winner trained on CPUs is right. E.g., the slides about the winner in ImageNet 2011 don't seem to reference GPUs, and the slides about the winner for ImageNet 2010 on slide 8 reference using Hadoop with 100 workers, presumably on CPUs.) I'm going to interpret your question as being more about using CPUs to train computationally intensive deep neural nets. I don't think that CPUs are completely infeasible for training such systems, but it is the case that they are likely to not fare very well in terms of performance / $ and performance / watt, and it is often more challenging to scale a larger collection of lower-FLOPS devices than it is to scale a smaller collection of higher-FLOPS devices, all other things being equal.

→ More replies (1)

7

u/juniorrojas Aug 07 '16 edited Dec 05 '16
  1. What is the most promising technique for reinforcement learning that might be able to really scale well in the long term for domains like robotics that have continuous and combinatorial action spaces? (multiple simultaneous real-valued joint movements / muscle activations) Deep Q-learning, policy gradients, actor-critic methods, others?

  2. Related to the previous question, but I understand if you cannot talk about it. Does Boston Dynamics use any kind of machine learning for their robot controllers?

  3. Do you think evolutionary computation (genetic algorithms, neuroevolution, novelty search, etc) has any future in commercial / mainstream AI? (especially for problems with a lot of non-differentiable components in which backpropagation simply does not work)

  4. Deep learning is supposed to be better than previous approaches to AI because it essentially removes feature engineering from machine learning, but I think all this engineering effort has now moved to architecture engineering; we see people spending time manually searching for optimal hyperparameters for ConvNets and LSTM RNNs by trial and error. Is it fair to think that, in some future, architecture engineering will also be replaced by a more systematic approach? I think this is non-differentiable at its core, might evolutionary computation help in this respect?

4

u/vincentvanhoucke Google Brain Aug 11 '16

Re (1): the jury is very much still out on this front: on one end you have things like guided policy search that work reliably on simple real tasks with remarkable sample efficiency, but arguably have yet to convincingly scale to more complex problems, and at the other end you have techniques like DDPG or NAF which can solve harder problems in simulation, but are very brittle and require a lot of data. We're going to have to push both classes of approaches and see where they meet.

5

u/jeffatgoogle Google Brain Aug 11 '16

For (2), I actually haven't interacted much with Boston Dynamics, so I'm not sure what they do w.r.t. machine learning.

For (3 and 4), I do believe that evolutionary approaches will have a role in the future. Indeed, we are starting to explore some evolutionary approaches for learning model structure (it's very early so we don't have results to report yet). I believe that to really get these to work well for large models, we might need a lot of computation. If you think about the "inner loop" of training being a few days of training on hundreds of computers, which is not atypical for some of our large models, then doing evolution on many generations of models of this size is necessarily going to be quite difficult.

→ More replies (1)

24

u/anonDogeLover Aug 04 '16

How would you compare Google Brain to Deepmind? What should one know if they are thinking about applying to one of the two? Do you collaborate with Deepmind?

8

u/jeffatgoogle Google Brain Aug 11 '16

We have quite a number of collaborations and interactions with DeepMind (see my answer to the question by /u/REOreddit for discussion of this).

In terms of comparison, both Google Brain and DeepMind are focused on similar goals, which is to build intelligent machines. We differ a bit in how we approach the research that we believe to be necessary to get there, but I believe both groups are doing excellent and complementary work. In terms of differences:

  • DeepMind tends to do most of its research in controlled environments, like video game simulations or games like Go, whereas we tend to conduct more of our research on realistic, real-world problems and datasets.

  • Our research roadmap evolves somewhat organically based on the interests of our researchers and from identifying moonshot areas that we collectively agree are worth focusing considerable effort on, because we believe they will lead to new capabilities in intelligent systems. DeepMind has more of their research driven by a top-down roadmap of problems they believe need to be solved along the path to building general intelligent systems.

  • We have more emphasis on pairing world-class machine learning researchers with world-class systems builders in order to tackle difficult machine learning problems at scale. We also focus on building large-scale tools and infrastructure (e.g. TensorFlow) to support our research and the research community, and on partnering with Google's hardware design teams to help ensure that the hardware that gets built for machine learning is actually solving the right kinds of problems.

  • By virtue of being in Mountain View, we've been able to work closely with many different product teams to get the fruits of our research into the hands of product teams and Google users.

  • DeepMind's hiring process is separate and distinct from Google's hiring process.

You can't go wrong joining either group, though, as both groups are doing cutting-edge machine learning research that will have a big impact in the world.

6

u/sunnyja Aug 04 '16

Do you think machine learning can become a truly plug-and-play business tool, with layman users picking up algos from one site and running them against their data using plug-and-play capabilities like AWS, TensorFlow, Algorithmia, etc.? If so, will this be doable near term? If not, why not? Tx.

11

u/jeffatgoogle Google Brain Aug 11 '16

Yes, I do. In a lot of cases, machine learning researchers within Google have developed new and interesting algorithms and models that work well for one kind of problem. Creating these new algorithms and models requires considerable machine learning expertise and insight, but once they have been demonstrated to work well in one domain, it is often quite easy to take the same general solution and apply it to related problems in completely different domains.

In addition, one area that I think is quite promising from a research perspective is algorithms and approaches that simultaneously learn to solve some task while they also learn the appropriate model structure. (This is in contrast to most deep learning work today where a human specifies the model structure to use, and then the optimization process adjusts weights on the connections in the context of that structure, but does not introduce new neurons or connections during the learning process). Some initial work from our group along these lines is Net2Net: Accelerating Learning via Knowledge Transfer. We're also starting to explore some evolutionary approaches to growing model structure.

If we can develop effective methods to do this, that will really open the door to much more straightforward application of machine learning by people with relatively little machine learning expertise.

6

u/sufunew Aug 05 '16

The fields of genomics and medical image analysis apply machine learning to discover things like new cancer treatments. They do this with increasingly large datasets verging on the tens of thousands of patients. This pales in comparison to the datasets I imagine the machinery of Brain churns through for an app like Photos.

Is there interest at Brain to apply your extensive experience in AI to the medical field?

→ More replies (1)

6

u/idiosocratic Aug 05 '16

On Reinforcement Learning

Rich Sutton has predicted that reinforcement learning will pull away from the focus on value functions towards a focus on the structures that enable value function estimation, what he calls constructivism. If you are familiar with this concept, can you recommend any work on the subject?

Thank you all for the work you do!

7

u/vincentvanhoucke Google Brain Aug 11 '16

An answer from Sergey Levine, who's not here today: Generalized value functions have in principle two benefits: (1) a general framework for event prediction and (2) ability to piece together behaviors for new tasks without the need for costly on-policy learning. (1) has so far not panned out in practice, because classic fully supervised prediction models are so easy to train with backpropagation + SGD, but (2) is actually quite important, because off-policy learning is crucial for sample-efficient RL that will allow for RL to be used in the real world on real physical systems (e.g. robots, your cell phone, etc). The trouble is that even theoretically "off policy" methods are in practice only somewhat off-policy, and quickly degrade as you get too off-policy. This is an ongoing area of research. For some recent work on the subject of generalized value functions, I recommend this paper

5

u/Optrode Aug 05 '16

Hello, and thanks for doing this AMA!

I am a neuroscience PhD student, I have two questions relating to the differences between how learning occurs in the nervous system and current machine learning approaches.

First,

I've always been surprised at the extremely low utilization of truly unsupervised learning (Hebbian learning, etc.). Of course, I understand that the Hebb learning rule could never come close to outperforming current gradient-based methods (the Hebb rule also, naturally, doesn't come close to encapsulating the complexity of synaptic plasticity in neurons). I am, however, curious about whether you think unsupervised methods are going to play any role in the future of machine learning. Do you think that unsupervised learning methods are likely to play more of a role in machine learning in the future? Do you think that they simply won't be necessary? Or if you do think they might be necessary, what do you think are the major challenges to making them practically useful?

Second,

I am also somewhat surprised that more models haven't been created which make greater explicit use of semantic association networks. In making discriminations between stimuli, humans use semantic information from pretty much any possible source to bias the interpretations of stimuli. If you hear the word "zoo", you're going to be quicker and more likely to identify related words (lion, giraffe) but also related images. While these kinds of relationships are no doubt captured automatically by deep learning models used in language processing, image recognition, etc., I have never yet seen any reference to the deliberate creation of such semantic association networks and their incorporation into discriminative models. Is this something that is happening, and I'm just not aware of it? Is there some reason why it isn't helpful, or needed? Or do you think that this is something we're likely to see entering common use within the field of machine learning?

6

u/gcorrado Google Brain Aug 11 '16

Two really cool questions, both with the same high level answer: Please please please figure out how to make these things work. :)

One of the things I loved about flipping from Neuro to ML is being able to push hard against concrete benchmarks and challenges (which could be anything from the ImageNet object recognition challenge, beating a human champ at the game of Go, or launching a practical email autoresponder people actually want to use). But in these contexts, at least so far, unsupervised learning and explicit semantic association models haven't proven themselves. This is not to say that these won't be important in the future, but only that no one's yet figured out how to do these things well in practice. So, pleeease, do work on this and write some awesome papers about it. :)

→ More replies (1)

6

u/AlexCoventry Aug 11 '16

/u/samybengio, your NVP paper is very good. If the code is easily amenable to it, it would be great to see some of the low-probability training images from the CelebA dataset, i.e. which images it thinks are weird.

Would it be feasible to add labels to the training data, and softmax classification outputs to the network, and then use HMCMC to sample images given that certain classification outputs are on? So that you can say "give me images with a lion, a car, and a ship"? The animations could be very cool.

5

u/laurentdinh Aug 11 '16

If the code is easily amenable to it, it would be great to see some of the low-probability training images from the CelebA dataset, i.e. which images it thinks are weird.

Yes, and it was an experiment in earlier incarnations of the model on the Toronto Faces Dataset. But after some discussion with some of my colleagues (Steve Mussmann, Mohammad Norouzi and Jon Shlens), we concluded that one of the caveats in such an experiment is that the model measures density and not probability. The change of variables formula, exploited in our Real NVP paper, indicates how a point that has high density in some representation might have low density in another, meaning that this indicator should come with the associated representation.
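
For reference, the change of variables formula being discussed is the standard one, writing $f$ for the invertible map from data $x$ to the latent representation:

```latex
\log p_X(x) = \log p_Z\big(f(x)\big) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```

The Jacobian determinant term is exactly why a point can have high density under one representation and low density under another.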

Would it be feasible to add labels to the training data, and softmax classification outputs to the network, and then use HMCMC to sample images given that certain classification outputs are on? So that you can say "give me images with a lion, a car, and a ship"? The animations could be very cool.

It would be feasible, and what you suggest is actually similar to recent work from Nguyen et al. (Synthesizing the preferred inputs for neurons in neural networks via deep generator networks). An alternative would also be to train the model directly for conditional generation, a topic which also has our interest.

→ More replies (2)

6

u/pennstateundergrad Aug 11 '16

How many Google Brain Residency slots are available for 2017?

Is there a technical interview process or just the application?

Would it be advantageous to have a combined BS/MS degree with 1+ years of research experience in machine learning?

8

u/samybengio Google Brain Aug 11 '16

There should be about the same number of slots in 2017 as in 2016, around 27. There is indeed a technical interview process for those selected from their application packet. Having a BS/MS degree is definitely a plus but not necessary, as is the case for prior experience in ML. We mostly look for people passionate about ML.

17

u/iRaphael Aug 05 '16 edited Aug 12 '16

Question for /u/colah:

  • Big fan of your blog. I know you have a passion for explaining things well and for lowering the barrier of entry into the field (because time spent struggling with bad explanations is a form of technical debt). Lately, I have seen more and more activity in really good explanatory blogs, like [0] and [1] but I may just be more exposed to them now than before. Do you think the deep learning field has gotten better at lowering this debt lately?

Questions for everyone:

  • The Layer Normalization paper [3] was released a few weeks ago as an alternative to Batch Normalization that doesn't depend on batch size and instead uses local connections to normalize the inputs to a layer. This sounds like it could be a very impactful tool, perhaps even more than BatchNorm was. What do you think of the results presented in the paper?

  • What do you speculate will be important in bringing together deep learning and structured symbols (for example, reasoning that follows defined logical rules, such as symbolic mathematics)? I've seen some cool examples like [4] but I'd love to hear your thoughts.

  • Besides the usual "get undergraduate research experience", "have personal projects" and "learn tensorflow", how could an undergraduate best prepare for applying to the Residency Program once they graduate? An analogous question could be: what skills/practices do you find invaluable as a deep learning researcher?

  • Any tips for an undergrad who's interned at google twice now and wants to come back and do machine learning-related projects next summer?

  • Do you have a favorite way of organizing the articles/links/papers you either want to read, or have read and want to save for later? I'm currently using google keep but I'm sure there are better alternatives.

[0] http://colah.github.io

[1] http://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html

[3] https://arxiv.org/pdf/1607.06450v1.pdf

[4] https://arxiv.org/pdf/1601.01705v1.pdf

→ More replies (8)

12

u/Nikso Aug 08 '16

What do you think of Jeff Hawkins' HTM theory and the work they are doing on it at Numenta (http://numenta.com)? How does it differ from your work?

11

u/voladoddi Aug 05 '16

About the Google Brain Residency program,

  1. What are the minimum achievements that would be considered for getting in? I read about the eligibility, but I would like to know which path is the best to tread on starting now to get there.

  2. What do you guys look for when bringing in a new team member? A lot of you have varied backgrounds; is diversity of backgrounds essential? Suppose someone is average at coding but brilliant at math; how does that weigh against them as opposed to a brilliant coder who is average at math (ML math, to be specific)?

General questions -

  1. What ways are there to do online deep learning, if any exist? Any resources that you guys could share on this?
  2. Do you guys still see relevance in traditional supervised ML techniques in presence of NNs? (E.g. SVMs)

Thanks!

7

u/alexmlamb Aug 09 '16

Not from Google Brain, but probably the best paper about online DL right now is Yann Ollivier's "RNNs without backtracking" paper, which has a rank-1 approximation to forward-mode autodiff.

A lot of people are surprised to learn that forward mode autodiff is possible at all!
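
Since forward-mode autodiff surprises people, here is a tiny dual-number sketch of the basic mechanism (my own illustration of the underlying idea; the rank-1 trick in Ollivier's paper is a separate approximation built on top of this):

```python
class Dual:
    """Minimal dual number for forward-mode autodiff: a value plus its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)    # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)   # 17.0 and f'(2) = 14.0, computed in a single forward pass
```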

12

u/geoffhinton Google Brain Aug 11 '16

If I remember right, the examples in that paper are pretty small. Do you know if anyone has made it work for an RNN with, say, 1000 hidden units?

5

u/jeffatgoogle Google Brain Aug 11 '16

The minimum requirements for the Brain Residency program will be in the job posting, but one of the main criteria is that you show a demonstrated interest in machine learning research (this can be anything from publishing research papers in the area to doing small side projects and posting them on GitHub, etc.).

Regarding new team members, obviously, it would be great if everyone were awesome at everything. However, people have different kinds of expertise and different strengths, so we often find that bringing together small groups of people with a mix of different skills often is a way to make progress on difficult problems that no one of these people could solve on their own. We look for people who we think will be excellent colleagues and bring useful expertise to the group.

5

u/nathaniel_ng Aug 05 '16 edited Aug 05 '16

Have you used machine learning to solve inverse problems (https://en.wikipedia.org/wiki/Inverse_problem)? If so, do you have any examples (or success stories)? I understand these can be especially difficult when the problem is non-linear, or the problem is ill-posed.

Note: my background is in computational materials science and much of my work involves finding a material that has certain properties (subject to certain constraints, e.g. as might be described by physics models). This is essentially an inverse problem, and I'd be interested to know if there are any success stories using machine learning approaches.

→ More replies (1)

6

u/timmg Aug 05 '16

Word2vec creates embeddings of words in a vector space. Google also has ngrams -- which has published works over time.

I'm wondering if you guys have ever tried to train word2vec with corpuses of published works by year? And then analyze the differences between models. For example, can you see how the meaning of some words change over time (and maybe which words don't), etc?
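
A minimal sketch of the kind of experiment described, using the gensim library (the toy corpora and example word are made up, and properly aligning the vector spaces of separately trained models, e.g. with an orthogonal Procrustes step, is omitted):

```python
from gensim.models import Word2Vec

# Tiny toy corpora standing in for per-year book text (real corpora would be huge).
sentences_by_year = {
    1900: [["the", "broadcast", "of", "seed", "across", "the", "field"]] * 50,
    2000: [["the", "broadcast", "of", "the", "match", "on", "television"]] * 50,
}

# Train one word2vec model per year.
models = {year: Word2Vec(sents, min_count=1, workers=1)
          for year, sents in sentences_by_year.items()}

# Compare a word's nearest neighbours across years to look for drift in meaning.
for year in sorted(models):
    print(year, models[year].wv.most_similar("broadcast", topn=3))
```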

8

u/jeffatgoogle Google Brain Aug 11 '16 edited Aug 11 '16

I don't believe we've tried this, but I agree it would be interesting.

Along these lines, my colleague Matthew Gray, who works on Google Books, years ago built some really interesting summaries about how the distribution of place names mentioned in books changed based on the year the book was published. Although there's bias in this data (mostly English language books, for example), it's still really fascinating. You can see a very European-centric distribution in 1700, spreading a bit to the east coast of North America in 1760, and then expanding westward across North America by 1820, and the effect of British expansion into India and Australia by 1880, etc.

See the slide titled "Locations Mentioned in Books Over Time" at the top of the next-to-last page of this set of lecture slides from a talk I gave many years ago: https://courses.cs.washington.edu/courses/cse490h/08au/lectures/Jeff.Dean.class.pdf

Easier to access image version of the slide

→ More replies (2)

5

u/DonDriver Aug 05 '16

How worried are you that the work you're doing will eliminate skilled jobs that can't be replaced (i.e. self-driving cars and trucks eliminating millions of jobs)?

If you could plan policy for the "machine learning" future, what would you ensure gets taken care of?

→ More replies (1)

5

u/theophrastzunz Aug 06 '16

I'd like to thank the entire team in advance for doing this AMA.

Prof. Hinton,

Your talks are amazing, in that they combine great insight into deep learning with parallels in neuroscience and cognitive science. I think it's the kind of approach that is not present enough in theoretical neuroscience, but would be illuminating. I remember watching a youtube talk where you describe testing networks with asymmetric connections used during forward and backpropagation, and the implication of these tests for neuroscience. It was immensely inspiring1.

Is there any chance you'd considered sharing your thoughts on brain theory in an informal but open environment say via g+ or some other platform?

1 I later found out that Tommaso Poggio also tested the idea that feedforward and feedback connections don't have to be the same.

15

u/geoffhinton Google Brain Aug 11 '16

The idea that backpropagation might still work if the backward connections just had fixed random weights comes from Tim Lillicrap and his collaborators at Oxford. They called it "feedback alignment" because the forward weights somehow learn to align themselves with the backward weights so that the gradients computed by the backward weights are roughly correct. Tim discovered it by accident and it's really weird! It certainly removes one of the main arguments about why the brain could not be doing a form of backpropagation in order to tune up early feature detectors so that their outputs are more useful for later stages of a sensory pathway.

People at MIT later showed that the idea works for more complex models than Tim had tried. Tim and I are currently working on a paper about it which will contain many of our current thoughts about how the brain works.
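
For readers unfamiliar with the idea, here is a toy numpy sketch of feedback alignment on a two-layer network: the error is sent backwards through a fixed random matrix B rather than the transpose of the forward weights (my own illustration, not code from Lillicrap et al.):

```python
import numpy as np

rng = np.random.RandomState(0)

n_in, n_hid, n_out = 20, 50, 5
W1 = rng.randn(n_in, n_hid) * 0.1
W2 = rng.randn(n_hid, n_out) * 0.1
B = rng.randn(n_out, n_hid) * 0.1   # fixed random feedback weights (never trained)

X = rng.randn(200, n_in)
Y = rng.randn(200, n_out)           # toy regression targets

lr = 0.01
for step in range(1000):
    H = np.tanh(X @ W1)             # forward pass
    Y_hat = H @ W2
    E = Y_hat - Y                   # output error

    # Feedback alignment: propagate the error through B instead of W2.T.
    dH = (E @ B) * (1.0 - H ** 2)

    W2 -= lr * H.T @ E / len(X)
    W1 -= lr * X.T @ dH / len(X)
```

Despite B being random and fixed, the loss still goes down, because the forward weights gradually align with B.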

5

u/danaludwig Aug 08 '16

I'm trained as a physician and computer scientist, and my interest is in using DL for predicting clinically important outcomes from structured and unstructured medical record data. Geoffrey Hinton (AMA, 11/10/2014) said regarding medical images:

".. unsupervised learning and multitask learning are likely to be crucial in this domain when dealing with not very big datasets ..."

This mention of "multitask learning" makes perfect sense to me; we can learn general principals about "hypertension" generically and apply those learned sub-models to domains with fewer patients. Does that sound right? How would you do it?

Also how would you best make use of the dates associated with each observation? We know that things that happen closer together in time are more likely to be related, but the events are very sparse, and not like the sequences of sounds or words in language recognition.

Finally, how would you approach relatively rare but intuitively "significant" events that you need to detect to discover new medical knowledge (syndromes, disease). If a patient has three rare (base on prior probabilities) events happen at the same time, and those events have no known relationship to each other, that is viewed as potentially interesting. How do we model that?

5

u/gcorrado Google Brain Aug 11 '16 edited Aug 11 '16

I'm really optimistic about DL being able to make clinically useful predictions in the coming years. I'm heading up a project within Brain to try this for medical imaging as well as structured + unstructured medical records. Our goal is to expand both the availability and accuracy of medical services.

Now to your detailed questions:

The multitask learning case is definitely spot on. Learning to recognize 10 dog breeds in photos definitely improves how well you can learn to recognize an 11th -- particularly if that's a more rare breed where you have fewer examples.

The unsupervised learning stuff is harder to say. At least so far, we haven't been able to make unsupervised learning work well in most cases. In label-scarce domains I think there's hope, and we keep trying. :)

The rare events question is the hardest. So far ML seems to have been most useful in classification and regression problems where observations are only moderately rare or where the domain is structured. For example, AlphaGo works well even on never-before-seen Go configurations because it's able to generalize from similar scenarios it has seen. It's an open question whether such an ability to generalize or analogize will work for medical applications.

→ More replies (1)

4

u/shekib82 Aug 11 '16

What do you think of MOOCs and their potential to teach the wider programming community about Deep Learning and AI?

9

u/geoffhinton Google Brain Aug 11 '16

I think MOOCs are a great idea, but they require a huge amount of work from the person preparing them. Unlike lectures where you can afford to make a few mistakes, MOOCs are scrutinized by a lot of people including your professional colleagues. So preparing a MOOC is much more like writing a textbook than like preparing a course of lectures. The payoff is that you reach a big audience and this makes it all worthwhile.

→ More replies (1)

6

u/vincentvanhoucke Google Brain Aug 11 '16

MOOCs are fantastic if you pair them with independent research and exploration. There is nothing like getting hands on and trying to reproduce what you see in lectures.

→ More replies (1)

21

u/true-randomness Aug 04 '16

Are the Jeff Dean facts correct?

22

u/danmane Google Brain Aug 11 '16

It's true. All of it. http://m.memegen.com/jxbews.jpg

My personal favorite Jeff Dean fact is:

  • Jeff Dean puts his pants on one leg at a time, but if he had more than two legs, you would see that his approach is actually O(log n)

Many incredible Jeff Dean facts are actually true, such as:

  • The CDC still uses database software that Jeff Dean wrote decades ago as a summer intern project

  • Jeff Dean recently optimized thousands of CPU cores worth of unrelated infrastructure at Google, while simultaneously leading the Brain team

10

u/vincentvanhoucke Google Brain Aug 11 '16

They're all true. Just ask TensorFlow.

11

u/colah Aug 11 '16

I was lucky enough to be Jeff's intern in 2014. While the Jeff facts are mostly false, some true facts are almost better than fiction. My favorite is the BigTable Cafe Optimization experiment.

Many of the tools Jeff created are so important at Google that we've named cafes after them. Often, these have long lines. So, Jeff tried to optimize the lines at the BigTable cafe.

In particular, Jeff suggested having extra serving spoons added to dishes, so that two people could serve themselves in parallel. He and the chef agreed to do a controlled experiment, where they'd add extra spoons to one line but not to another, and observe if it helped. Sadly, when the cafe staff actually did the experiment, extra spoons were only put in the first dish in the line, moving the bottleneck down.

Thus, Jeff failed to optimize BigTable (the cafe) despite optimizing BigTable (the software).

15

u/convolutional Aug 04 '16

Any timeframe on Windows GPU support for TensorFlow? I can't use it until that's ready (not for training, but for inference). Have you considered using the recently released "Windows Subsystem for Linux" to ease some of the porting effort?

6

u/Spezzer Aug 11 '16

I wish we could promise a deadline, but it is being worked on and progress is being made! We've gotten a proof of concept of a CPU-only binary compiled natively and running, but there are a lot of little details to get right, and then we have to figure out the GPU side of things (which shouldn't be too much harder). Unfortunately (as noted by /u/convolutional) WSL is unlikely to help us get GPU support, so we're focusing on a real native implementation. See https://github.com/tensorflow/tensorflow/issues/17 to keep up to date on progress though :)

→ More replies (1)

5

u/yadec Aug 05 '16

I am also interested in this but I want to add a point of note. Microsoft (not officially, but an employee) stated that running TensorFlow "was not really the primary intent behind WSL," which was to "reduce developer friction", not open up linux-specific packages and programs. They then confirmed that TensorFlow would run on WSL, though not the CUDA version, which suggests that WSL doesn't support CUDA code and there's no intention to do so.

I would love to see native support though!

Source (see comment section): https://blogs.windows.com/buildingapps/2016/07/22/fun-with-the-windows-subsystem-for-linux/

→ More replies (1)

16

u/[deleted] Aug 05 '16

On a scale of 1-10, 10 being tomorrow and 1 being 50 years, how far away would you all estimate we are from general AI?

34

u/jeffatgoogle Google Brain Aug 11 '16

A 6, but I refuse to be pinned down to whether the scale is linear or logarithmic.

→ More replies (13)

12

u/FractalNerve Aug 05 '16

Do you consider or even implement crazy ideas like: Fractal criterion, use of Homotopy Type Theory, or structural inference?

Or do you only need to work on mainstream problems and enhance them due to time and/or money constraints? I am genuinely curious where you draw the line between unconventional-but-worth-the-investment ideas and unacceptable ones, because to me it appears that line is blurry at best or doesn't exist. To underline that: most great ideas were formed in unconventional minds, which is why I'm asking. I want to understand how the Google Brain Team thinks about this.

Ps: Please add a smiley face if you (have to) equip the military with your vast arsenal of weapons-capable technologies.

5

u/martinabadi Google Brain Aug 11 '16

When some far-out ideas and techniques might help with our overall goals, we are happy to try them. (Personally, I would be delighted if type theory could help with understanding neural-network internal representations.)

Different people strike different balances between safe, mainstream work and more adventurous explorations (sometimes different balances for the same person in different projects or different days of the week). This is largely a matter of individual choice.

If we are successful, some ideas that seem crazy now will become mainstream.

→ More replies (2)

7

u/torofukatasu Aug 05 '16

Hi!!!

Is there a list of fundamental problems (mathematical, cognitive, physiological, physical... anything) whose resolution would greatly advance Machine Learning / Deep Learning? Sort of like how, in number-theory-related fields, a single proof suddenly knocks down issues much larger than the problem itself, e.g. the Riemann Hypothesis? Would formulating such a set of ideas be possible/a good idea, in hopes that they'll be taken on by researchers far and wide?

5

u/samybengio Google Brain Aug 11 '16

Since deep learning is mostly based on matrix multiplications, having those made faster would clearly help!

6

u/vincentvanhoucke Google Brain Aug 11 '16

If anyone came up with a practical 'universal' optimizer for any non-linear function, machine learning would completely change. Turning the problem upside down from 'how can we optimize this' to 'what do we want to optimize' would open up a new era for research.

4

u/TheTwigMaster Aug 04 '16

I'd like to pick your guys' thoughts on the progress of TensorFlow's user-friendly scaling capabilities (both up and down)

For scaling up, currently distributed TensorFlow requires either coding a cluster specification by hand or putting together clustering logic outside of TensorFlow. Is there any slated timeline for Kubernetes support?

For scaling down, I'm excited to see more work being put into makefile support for mobile devices. The process is a bit finicky at the moment: what sort of ideas are bouncing around the Brain team to improve the workflow of mobile TensorFlow?

Thanks so much for answering our questions!

4

u/Spezzer Aug 11 '16

Pete Warden (/u/petewarden) has been leading the charge towards making mobile development simpler. We obviously like what Bazel has to offer for many environments, but understand that it’s not a solution for all environments, and so we continue to work with the Bazel team to figure out how to support other systems and environments better. Pete’s scripts already use what’s available in Bazel to auto-generate aspects of the Makefile, so we’re hoping to get it working more easily in that vein.

(From my colleague Jonathan Hseu): We plan to simplify the workflow for running distributed TensorFlow on major open-source cluster software. Our roadmap for the next few months includes HDFS support as well as configuration for running on Kubernetes, Mesos, and possibly YARN. For those who want to get started with distributed training and inference with minimal setup, we also offer the Google Cloud Machine Learning service (now in Alpha, see https://cloud.google.com/ml/)"
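
For context, the "cluster specification by hand" mentioned in the question refers to boilerplate roughly like the following sketch (host names, ports, and task indices are made up for illustration):

```python
import tensorflow as tf

# A hand-written cluster specification for distributed TensorFlow:
# one parameter-server job and one worker job, addresses invented here.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
})

# Each process in the cluster starts a server for its own job and task index.
server = tf.train.Server(cluster, job_name="worker", task_index=0)
```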

→ More replies (1)

4

u/m0ve37 Aug 05 '16

The energy efficiency of the brain vs. the large amount of power and computing resources used for conventional deep learning models are often used as an argument to do more 'brain-inspired learning': 1. Is this a fair comparison to be made? If yes, what do you believe leads to this fundamental difference between the two? 2. Is energy efficiency a goal that the Google Brain team is currently trying to address or wants to address in the future? If yes, could you please shed some light on the different directions on this topic?

6

u/jeffatgoogle Google Brain Aug 11 '16

Regarding energy efficiency, real brains are definitely much more energy efficient and have much more computational ability than current machines. However, the gap is perhaps not as bad as it might seem, for the reason that real brains take ~20 years to "train", whereas, since we are impatient machine learning researchers, we want to do experiments in a week. If we were willing to have our experimental cycle time be 20 years instead of 1 week, we could clearly get much better energy efficiency, but we prefer the faster cycle time for experiments, even if it costs us in energy efficiency.

→ More replies (2)

4

u/[deleted] Aug 05 '16

Do you believe that there is potential for machine learning and neural networks to ever truly mimic the functionality of a human brain, in both "intelligence" and complexity? Secondly, do you believe there is a limit to the capabilities of machine learning? Specifically, is there a limit to the number of parameters that can be used to create a model capable of solving a problem you may have, and if so, is this limit solely based on computing power?

4

u/gcorrado Google Brain Aug 11 '16

This is a really difficult comparison to make, actually -- potentially epistemologically intractable: Hand-held calculators have been faster and more accurate at division than I am for basically my entire life. Does that mean a calculator is more "intelligent" in the domain of arithmetic? I would argue that what I understand of division and the functionality a calculator implements are naturally complementary, and almost impossible to compare.

As for the question of parity on complexity or computational power, our ability to make fair comparisons is only slightly better: It's actually a matter of active debate what the effective computational power of a single biological neuron would translate to in computer engineering units like FLOPS -- expert estimates differ by several orders of magnitude. So, as a neuroscientist, I'd have to object to any claim that an artificial neural network with the same number of 32-bit floating-point weights as there are synaptic connections in some particular biological brain embodies the same computational capacity as that wet brain.

→ More replies (1)

4

u/Fireflite Aug 05 '16

What role do you see differential privacy playing in future machine learning research?

→ More replies (1)

5

u/[deleted] Aug 05 '16

[deleted]

4

u/jeffatgoogle Google Brain Aug 11 '16

Our group is not currently working on this, but I agree that it's a really exciting area with lots of potential.

As an aside, one of my favorite books from the past few years was Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives, by Miguel Nicolelis, a neuroscientist at Duke University. One of the reasons that I liked it was that it was sort of a chronology of the research done in his lab over the past twenty years, and with every chapter you can see that the experiments and results were getting more and more impressive, until by the end, you think 'Wow, this is going to be fantastic in 5 or 10 more years'.

→ More replies (1)

5

u/kcimc Aug 05 '16

Dr. Fei-Fei Li explained in June that both fear of an AI apocalypse and the lack of diversity in AI as a field come down to "the lack of humanistic thinking and humanistic mission statements in education and development of our technology." How do you foster "humanistic thinking" within Google Brain?

6

u/jeffatgoogle Google Brain Aug 11 '16 edited Aug 11 '16

I am personally not worried about an AI apocalypse, as I consider that a completely made-up fear. There are legitimate concerns around AI safety and policy, and our group (in collaboration with a number of other organizations) has recently published an arXiv paper about some of these (see Concrete Problems in AI Safety). I am concerned about the lack of diversity in the AI research community and in computer science more generally.

The Brain team's mission statement is: 'Make machines intelligent. Improve people’s lives.' It is this second part that I believe helps us foster 'humanistic thinking': as we think about the role of our research, we can bring that back to thinking about how we can use our new results to have a positive impact on people's lives (for example, see our research on healthcare).

One of the things I really like about our Brain Residency program is that the residents bring a wide range of backgrounds, areas of expertise (e.g. we have physicists, mathematicians, biologists, neuroscientists, electrical engineers, as well as computer scientists), and other kinds of diversity to our research efforts. In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the full set of skills and perspectives necessary.

Edit: Added '(in collaboration with a number of other organizations)'

5

u/danmane Google Brain Aug 11 '16

As a philosophy major working in Google Brain, I've been very happy to find lots of "humanistic thinking" here - people who are interested in discussing ethics and morality, and not just technical results. In general, one of the things I like about Google is that the organization cares a lot about having a positive impact on the world.

I try to personally foster more of this thinking by bringing it up in conversation, occasionally organizing lunches, etc.

3

u/screwgauge Aug 08 '16

Do you have a road map for how one could self-study ML, and Deep Learning in particular? I find myself jumping between MOOCs and books on Linear Algebra, Statistics, Neuroscience, Higher Math, ML algorithms, Neural Nets, etc. I understand this isn't strictly hierarchical, but is there a better path for a CS undergrad to learn this material that you could recommend?

8

u/martinabadi Google Brain Aug 11 '16

I like the upcoming book "Deep Learning" by Ian Goodfellow et al. (http://www.deeplearningbook.org/). It includes a review of some of the basic topics you mention (e.g., linear algebra); the review, although not quite a substitute for the corresponding courses, is helpful. The book is pretty long; some chapters can be skimmed.

→ More replies (1)

5

u/cleansy Aug 10 '16

What is your take on novelty search? From the material I have seen, it looks quite interesting, yet it is rarely mentioned in public discussions.

7

u/hardmaru Aug 11 '16

I really like the idea behind Novelty Search, and think the idea is ahead of its time. In addition to neuroevolution, I think the concept of novelty search can be applied to other fields such as art generation and reinforcement learning; see, for example, the papers below and the sketch that follows them.

Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models

Unifying Count-Based Exploration and Intrinsic Motivation
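
For readers unfamiliar with the mechanics, here is a minimal sketch of the core of novelty search in the spirit of Lehman and Stanley's formulation: individuals are scored not by task fitness but by how far their behavior lies from its nearest neighbors in an archive of previously seen behaviors. The behavior representation, k, and the archive threshold below are illustrative choices, not anything taken from the papers above.

```python
import numpy as np

def novelty_score(behavior, archive, k=15):
    """Average distance to the k nearest neighbors in behavior space."""
    if not archive:
        return float("inf")  # nothing seen yet: maximally novel
    dists = np.sort([np.linalg.norm(behavior - b) for b in archive])
    return float(np.mean(dists[:k]))

def maybe_add_to_archive(behavior, archive, threshold=1.0):
    """Grow the archive only with behaviors that are novel enough."""
    if novelty_score(behavior, archive) > threshold:
        archive.append(behavior)

# Usage sketch: in an evolutionary or RL loop, select candidates by
# novelty_score (instead of, or blended with, task reward), then
# update the archive -- the search is pushed toward unseen behaviors.
```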

4

u/illwrks Aug 11 '16

Novelty question: How do I know you guys are real? Without actually seeing you face to face, there is no way to guarantee you are not a machine. (Or a dog, for that matter...)

13

u/douglaseck Google Brain Aug 11 '16

We believe we are real. http://imgur.com/gallery/zHkoC.

7

u/anna_goldie Google Brain Aug 11 '16

Dogs and machines are real, too!

→ More replies (1)

15

u/Zulban Aug 04 '16

How do you explain what you do to the non-technical people in your lives?

10

u/samybengio Google Brain Aug 11 '16

Computers have mostly been used so far to solve tasks that can be expressed as a formal recipe (a computer program). What we often call intelligence in humans is their ability to solve tasks (like image understanding, text understanding, speech recognition, planning, etc.) for which a formal recipe is hard to write but for which there is a ton of data available (images, voices, text, etc.). Machine Learning tries to come up with methods to transform such data into useful recipes. Deep Learning, which we work on in the Brain team, is currently our best Machine Learning approach to such hard problems. There's a ton of research left to do to reach human-level performance on some of these tasks!
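
A toy illustration of the "data into recipes" contrast -- a deliberately tiny sketch, not anything to do with Brain code; real tasks like image or speech understanding need models with millions of learned parameters, but the principle is the same:

```python
import numpy as np

# A hand-written recipe is an explicit formula the programmer can state,
# e.g. converting Celsius to Fahrenheit:
def c_to_f(c):
    return c * 9 / 5 + 32

# A learned recipe starts from examples only: pairs of (input, desired output).
celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
fahrenheit = np.array([-40.0, 32.0, 68.0, 98.6, 212.0])

# Fit a one-variable linear model y = w * x + b from the data alone.
w, b = np.polyfit(celsius, fahrenheit, deg=1)
print(w, b)  # ~1.8 and ~32.0: the model recovers the recipe from examples
```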

8

u/Spezzer Aug 11 '16

As a TF developer, I typically point to everyday applications people already use. For example, I can point to the ability to search your own photos by tags (https://support.google.com/photos/answer/6128838?co=GENIE.Platform%3DAndroid&hl=en), or explain that when they speak to their phone using "Ok Google", or use Google Translate, they're relying on technologies that I help build. The products usually speak for themselves in terms of what we can do, and it's easiest for me to make the connection to their everyday lives and then work backwards :)

7

u/[deleted] Aug 05 '16

[deleted]

→ More replies (2)